halid: string, lengths 8-12
lang: string, 1 class
domain: sequence, lengths 1-8
timestamp: string, 938 classes
year: string, 55 classes
url: string, lengths 43-389
text: string, lengths 16-2.18M
size: int64, range 16-2.18M
authorids: sequence, lengths 1-102
affiliations: sequence, lengths 0-229
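The fields above describe per-record metadata (HAL identifier, language, domain tags, harvest timestamp, publication year, source URL) together with the extracted full text, its size, and the author/affiliation identifier lists. Below is a minimal sketch of how records with this schema could be loaded and inspected with the Hugging Face `datasets` library; the dataset path used here is a placeholder assumption, not the actual repository name.

```python
# Minimal sketch: iterating over records with the schema above.
# "some-org/hal-fulltext" is a hypothetical placeholder path, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("some-org/hal-fulltext", split="train")

for record in ds.select(range(3)):
    # Metadata fields
    print(record["halid"], record["year"], record["url"])
    # Extracted full text (can be up to ~2.18M characters)
    print(record["text"][:300].replace("\n", " "), "...")
    # Parallel identifier lists for authors and affiliations
    print(len(record["authorids"]), "author ids,", len(record["affiliations"]), "affiliation ids")
```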
01470661
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470661/file/978-3-642-40361-3_65_Chapter.pdf
Peter Meulengracht Jensen email: [email protected] John Johansen Brian Vejrum Waehrens Md Shewan-Ul-Alam Proposing an Environmental Excellence Self-Assessment Model Keywords: Environmental Sustainability, Self-Assessment, Global Operations Management This paper presents an Environmental Excellence Self-Assessment (EEA) model based on the structure of the European Foundation of Quality Management Business Excellence Framework. Four theoretical scenarios for deploying the model are presented as well as managerial implications, suggesting that the EEA model can be used in global organizations to differentiate environmental efforts depending on the maturity stage of the individual sites. Furthermore, the model can be used to support the decisionmaking process regarding when organizations should embark on more complex environmental efforts to continue to realize excellent environmental results. Finally, a development trajectory for environmental excellence is presented. Introduction Not only customers, but also various stakeholders, for example, legislatures, nongovernmental organizations (NGOs) and media, all contribute to the fact that sustainability, especially environmental sustainability, has gradually established itself as a key competitive parameter [START_REF] Porter | Strategy & Society: The Link Between Competitive Advantage and Corporate Social Responsibility[END_REF]. Issues such as increasing utility prices, scarcity of resources, and climate changes have caused the environmental sustainability agenda to move into the boardroom [START_REF] Pagell | Building a More Complete Theory of Sustainable Supply Chain Management Using Case Studies of 10 Exemplars[END_REF]. However, several studies report that while companies understand well the philosophy and logic underpinning sustainability (e.g. triple-bottom line and balanced results), few companies manage to incorporate environmental sustainability into organizational practices in a coherent and systematic manner [START_REF] Bowen | Horses for Courses: Explaining the Gap between the Theory and Practice of Green Supply[END_REF]. Environmental sustainability efforts often appear as a hodgepodge of sporadic initiatives without any considerable effect on or contribution to the competitive advantage of the company [START_REF] Lubin | The Sustainability Imperative[END_REF]. This poses a great risk, since companies could lose competitive territory in the globalized economic landscape [START_REF] Porter | Strategy & Society: The Link Between Competitive Advantage and Corporate Social Responsibility[END_REF]. Empirical Background and Initiating Problem This study is carried out in collaboration with a Danish multinational manufacturing company; a privately owned organization, with sites in more than 45 countries that employ 15,000+ worldwide. The company is considered one of the frontrunners at deploying environmental sustainability. Despite achieving noteworthy environmental results, the case company is similar to others facing difficulties incorporating its efforts systematically. The challenges can be viewed from two perspectives. From a corporate perspective, there is a need to differentiate efforts and apply more sophisticated solutions at more advanced sites and more basic solutions at more immature sites. There is no "one-size-fits-all" in the global context, yet complete individualization is impossible/inappropriate from a managerial/resource-efficiency point of view. 
From a local perspective, there is a need to prioritize and implement the "right" solutions given for the site context and situation and to obtain results that match the resources invested. Management concern is raised whether invested resources match the realized results. The case company has previously struggled with similar challenges within the field of quality management (QM) and has experienced increases in quality performance by deploying the European Foundation of Quality Management Business Excellence Model (EFQM model) [5], which is now deeply rooted as the overall management framework in the organization. 3 Theoretical Background The EFQM model has been reviewed and revised during the past 20 years [START_REF] Williams | Self-Assessment Against Excellence Models: A Critique and Perspective[END_REF] and now takes a society-wide perspective on QM and strives to promote "business excellence," that is, improved company-wide performance across the organization, by focusing on customers, people, society and key results [5]. The EFQM model serves as a self-assessment framework to undertake continuous improvement [START_REF] Boer | CI Changes: From suggestion box to organizational learning[END_REF] by measuring progress toward the organization's long-term vision and assessing which activities are going well and which have stagnated [START_REF] Ritchie | Self-assessment using the business excellence model: a study of practice and process[END_REF]. In order to do so, the EFQM model consists of the following key elements [5]: 1. Five enabler criteria and four results criteria 2. Definitions and descriptions of the enabler and results criteria 3. A self-assessment approach based on "RADAR" logic 4. A mechanism for quantifying the organization's current state, or "maturity level," and quantifying areas for improvement Research Objective Taking the EFQM model as a structural starting point, the objective of this paper is to present and discuss an Environmental Excellence Assessment (EEA) Framework. The purpose of the EEA is to serve as a framework for undertaking continuous improvement activities, to increase environmental performance within the case company, and to overcome managerial challenges previously described. The outlook is that the EEA framework can in future assist similar companies facing similar challenges. This paper briefly presents the EEA model itself in order to provide an overview and then continues to discuss possible scenarios for deploying the model. Methodology This study is part of a three-year research collaboration following a design science approach [START_REF] Holmström | Bridging Practice and Theory: A Design Science Approach[END_REF] undertaken as action research [START_REF] Karlsson | Researching Operations Management[END_REF] and a joint collaboration between the case company and the research institution. The conception of the EEA model is a result of working closely with and within the case company for more than two years, a student MSc-thesis project, and an extensive literature study focusing on environmental sustainability. Furthermore, required inputs to the EEA model conception were provided by a three-day introductory course in the EFQM Business Excellence Model, and presentations, workshops, and discussions with key stakeholders in the case company. 
The literature study and the stakeholder presentations/workshops served to theoretically and empirically validate the content of the model within the constraints of action research, the embryonic stage of the EEA model development process, and the evolving nature of the environmental sustainability field. Limitations This paper does not go into detail describing the logical underpinnings of the various criteria and sub-criteria of the EEA. These are presented in table form to provide the reader an overview. Furthermore, the EEA follows the same principles and approach for self-assessment as the EFQM model. These are not described in this paper. The paper's purpose is solely to discuss prospects for deploying the EEA to solve the challenges facing the case company. Environmental Excellence Assessment (EEA) Model The proposed EEA model consists of five environmental enabler criteria and four environmental results criteria, as depicted in Figure 1. Table 1 provides an overview of EEA model criteria, definitions, and sub-criteria. The four results criteria and their example indicators are:
- Customer Results: results that put the organization in a better position to market its products or services; results are communicated through appropriate channels to customers and stakeholders. Examples: environmental certification, transparent environmental product data, case stories.
- People Results: results that demonstrate that the organization has the commitment of the entire organization and continuously increases the capabilities required to implement the environmental strategy. Examples: employee awareness, competence development, managerial acceptance.
- Society Results: results that demonstrate that the organization reduces its environmental impact and actively strives to go beyond regulatory compliance. Examples: organizational reputation, environmental awards, environment-related media coverage.
- Key Results: a set of measurable environmental performance indicators that the organization uses to measure its environmental performance and the associated financial opportunities. Examples: consumption and cost, payback times on environmental projects, environmental effect of environmental projects, accreditation and certification.
Possible Outcomes of Deploying the EEA Model Considering that the EEA model consists of a set of enabler criteria and a set of results criteria, the following four outcomes of its deployment are possible (Fig. 2. Possible outcomes for deploying the EEA Model). Based on interaction and discussion with the case company, these four scenarios will be discussed in light of the challenges facing various company sites worldwide. Type 1 Sites: No integration Here, no or few efforts are dedicated to creating environmental results, and as a consequence, environmental performance is lacking. Causes are likely to be rooted in poor performance in leadership (enabler criterion 1) and strategy (enabler criterion 2). It should be evident that there is no management commitment, focus, and direction for working on environmental sustainability. These are considered prerequisites for dedicating resources to work on environmental sustainability and undertaking improvements. In light of the challenges experienced by the case company, we suggest that corporate and local efforts should be dedicated to forming an environmental vision and implementing an environmental strategy for the organization. Furthermore, attention should focus on assistance with deploying a functional environmental management system to serve as a backbone and framework for undertaking environmental tasks.
The next immediate step would be identification of simple "low hanging fruit," from which the organization can generate noteworthy results with little dedicated effort. Type 2 Sites: Project Orientation Here, few systematic efforts are taking place, yet the organization achieves noteworthy environmental results. It is likely that this is the result of focusing on low-hanging fruit, largely by isolated environmental departments in different locations. Focus is likely to be centered on enabler criteria 4, "partners and resources," for example, through retrofitting buildings and production technology and managing the organization's resources/utility flow. In this situation, corporate and local efforts should be directed toward continuous identification of environmental improvement projects and, very likely, installation of environmental monitoring systems to identify the next targets to hit. The next step would be to focus on generating long-term results through process integration, such as analysis and improvement of logistical set-up or the productiondevelopment process. Organizational restructuring will very likely be required to create environmental ownership and responsibility in all relevant processes. Type 3 Sites: Process Orientation Here the low-hanging fruits have been harvested, and focus is directed toward process integration and creating the "fit" between competencies, resources, and responsibilities. In order to achieve further results, sites should focus on larger organizational-change initiatives, including competence development, restructuring, and recruitment. This means that attention should be directed toward enabler criteria 3, "people and organization" and 5, "processes and projects." It is likely that the organization still has a large potential for improvement, yet those improvements will come at fairly high investments in terms of capital and resources. Corporate and local efforts should therefore be directed toward cooperation on these complex change initiatives, which are likely to require changes at several levels in the organization, such as performance metrics, objectives, headcount, process-design, etc. Type 0 Sites: Perplexity Here, dedicated and extensive efforts are taking place, yet the results are missing. This is a very unfortunate, but common, situation. It is likely that efforts taking place are scattershot (or a hodgepodge of activities), with little prioritizing based on the situational context of the organization. This scenario has several likely causes. First, lack of sound, integrated leadership and strategy. In addition, ambitions to undertake "high-profile" initiatives with news value and branding potential might cause the organization to down prioritize effective initiatives that hold less or no branding value, so-called "dirty low-hanging fruit." Corporate and local efforts should be directed toward reviewing organizational leadership and strategy and toward systematically analyzing whether buildings, technologies, etc. have been optimized to support generating environmental results. Discussion of the EEA Model Expected benefits from deploying the EEA are promising, particularly because the EEA model provides insight into which one of the four types each site currently falls into. As mentioned, relevant next steps for the types differ, and the approach, tools, competence requirements, focus, etc. also differ depending on whether the site is a Type 1, 2, 3, or 0. 
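To make the four-type categorization concrete, the sketch below shows one possible way to map aggregate self-assessment scores onto the site types discussed above. The 0-100 scale and the 50-point threshold are illustrative assumptions, not part of the EEA or EFQM scoring approach described in the paper.

```python
# Illustrative sketch only: classify a site into the four EEA deployment outcomes
# from aggregate enabler and results scores. The 0-100 scale and the 50-point
# threshold are assumptions for illustration; the EEA/EFQM scoring details differ.

def classify_site(enabler_score: float, results_score: float, threshold: float = 50.0) -> str:
    strong_enablers = enabler_score >= threshold
    strong_results = results_score >= threshold
    if not strong_enablers and not strong_results:
        return "Type 1: No integration"        # little effort, lacking results
    if not strong_enablers and strong_results:
        return "Type 2: Project orientation"   # low-hanging fruit, little systematic effort
    if strong_enablers and strong_results:
        return "Type 3: Process orientation"   # systematic effort, sustained results
    return "Type 0: Perplexity"                # extensive effort, missing results

print(classify_site(enabler_score=30, results_score=65))  # -> Type 2: Project orientation
```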
The role of corporate management will differ significantly depending on whether it is striving to assist a Type 1 site, just embarking on integrating environmental sustainability, or a Type 3 site, needing to define how environmental sustainability can provide business value from a strategic and operational procurement perspective. Furthermore, by deploying the EEA, local management can get an overview of whether to embark on complex, large-scale, business-process re-engineering programs or whether to prioritize efforts to retrofit existing equipment and building technologies. The intention is that deploying the EEA will yield a more nuanced understanding of the current stage of environmental maturity at individual sites, and, based on that understanding, corporate and local efforts can be customized for each of the four types, thus avoiding both "one-size-fits-all" solutions and myriad individualized solutions. Road-Mapping the Path to Environmental Excellence As described in the introduction, most companies struggle to realize results in line with their ambitions and efforts [START_REF] Bowen | Horses for Courses: Explaining the Gap between the Theory and Practice of Green Supply[END_REF] and fail to integrate environmental sustainability strategy with business strategy [START_REF] Lubin | The Sustainability Imperative[END_REF]. Reviewing the four possible site types identified by the EEA has led to the conclusion that one cause of this failure is very likely that these companies find themselves in the perplexing situation faced by Type 0 sites. The EEA provides organizations with a development trajectory, or "path," for realizing environmental results, moving from "no integration" (Type 1) to "project orientation" (Type 2) to "process integration" (Type 3), while avoiding/escaping from "perplexity" (Type 0). Figure 3 depicts this proposed development trajectory, including characteristics of the four types. Conclusion and Contribution This paper presents an Environmental Excellence Self-Assessment (EEA) model that can help diagnose organizational improvements that need to be undertaken in order to realize environmental results. Prospects for deploying the model are promising, since the EEA model can be expected to provide a nuanced understanding of the current stage of maturity at individual sites. Based on this understanding, corporate and local management efforts can be prioritized in order to implement solutions that will yield results in line with efforts. This paper adds to the body of literature on environmental sustainability and responds to a call made in [3, 11] to move beyond the tool-focused regime and address environmental sustainability as an organizational issue. The paper addresses the issue of how a focused self-assessment on environmental sustainability can be used as a "differentiated management mechanism." The EEA model has yet to be empirically tested, and therefore the validity of the model is fairly low at this embryonic stage of development. However, it is expected that the model will be deployed in the case company in the immediate future, and experience will show whether the EEA model lives up to its potential. Fig. 1. The EEA Model. Fig. 3. An environmental excellence development trajectory based on site categorization as a result of deploying the EEA model. Table 1.
Overview of criteria, definitions, and sub-criteria for the EEA model. Enabler criteria, definitions, and sub-criteria:
- 1. Environmental Leadership: organizations with excellent environmental performance have leaders who communicate a clear purpose for working with environmental sustainability. They provide a clear direction and build organizational commitment by demonstrating the associated business opportunities. Sub-criteria: 1a Define the vision, purpose, and rationale; 1b Environmental management system deployment; 1c Stakeholder engagement; 1d Reinforce a culture of green thinking; 1e Provide the prerequisites for change.
- 2. Environmental Strategy: organizations with excellent environmental performance realize their environmental vision by implementing an environmental strategy. The strategy is based on the identification of issues with significant environmental effect and is aligned with the overall business strategy and stakeholder expectations. Sub-criteria: 2a Embed environmental impact analysis; 2b Understand internal strengths/weaknesses; 2c Policy deployment; 2d Strategy alignment; 2e Strategy communication and implementation.
- 3. People & Organization: organizations with excellent environmental performance ensure that their people's competencies and skills are continuously developed. They ensure fit between competencies, resources, and responsibility. They communicate, care for, reward, and recognize environmental achievements. Sub-criteria: 3a Competence development; 3b Alignment, involvement & empowerment; 3c "Fit" between competence, responsibility & resources; 3d Constructive communication & cooperation; 3e Reward & recognition.
- 4. Resources & Partners: organizations with excellent environmental performance manage their partners, suppliers, technological resources, and material flows in order to implement the environmental strategy and increase their environmental performance. Sub-criteria: 4a Supplier and partner management; 4b Environmental budgets; 4c Building management; 4d Technology management; 4e Information technology deployment; 4f Material and utility management.
- 5. Processes: organizations with excellent environmental performance redesign their processes to improve environmental performance. This includes management, business, and support processes. Sub-criteria: 5a Management process redesign; 5b Business process redesign; 5c Support process redesign; 5d Project management.
Result criteria (definitions and example indicators, which include but are not limited to those given) comprise Customer Results, People Results, Society Results, and Key Results, as listed above under the EEA model description.
19,739
[ "1001973", "1001974", "1001975", "1001976" ]
[ "300821", "300821", "300821", "300821" ]
01470665
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470665/file/978-3-642-40361-3_69_Chapter.pdf
Sameh M Saad email: [email protected] Julian C Aririguzo Terrence D Perera Supplier Selection Criteria in Fractal Supply Network Keywords: Original Equipment Manufacturers (OEMs) collaborate with their key suppliers in a new form of hands-on partnership. The Fractal supply network is distinct from the traditional supply chain because of the inherent congenital fractal characteristics. This paper uses the Analytic Hierarchy/Network Process (AH/NP) approach to provide a strict methodology and criteria ranking in the complicated decision-making process of exploring the suitability, selection and maintenance of few, albeit reliable and high quality suppliers prior to going into the Fractal Manufacturing Partnership (FMP). Selecting the right set of suppliers without undermining essential competitive factors and material costs is of strategic importance in forming this alliance and could help or hinder the inherent strength in the collaboration. The outcome from this research project is a simple, systematic, logical and mathematical guide to user of OEMs in making robust and informed supplier selection decision prior to going into FMP from a fractal supply network perspective. Introduction The vigorous competition in today's global markets has drawn attention on supply chains and networks [START_REF] Chan | Interactive selection model for supplier selection process: an analytical hierarchy process approach[END_REF]. Evolution in manufacturing and management, strategic alliances, technological changes and cycle time compression [START_REF] Arvind | An Application of the Analytic Network Process to Evaluate Supply Chain Logistics Strategies[END_REF] make frugal resource management relevant [START_REF] Akinc | Selecting a set of vendors in a manufacturing environment[END_REF]. Manufacturers tend to manage their suppliers in different ways leading to supplier development, supplier evaluation, supplier selection, supplier association, supplier coordination etc. [START_REF] Chan | Interactive selection model for supplier selection process: an analytical hierarchy process approach[END_REF] & [START_REF] De Boer | A review of methods supporting supplier selection[END_REF], and management of the logistics involved plays a strategic role for organizations that keep pace with market changes and supply chain integration [START_REF] Arvind | An Application of the Analytic Network Process to Evaluate Supply Chain Logistics Strategies[END_REF]. Alliances, collaborations and networks particularly between Original Equipment Manufacturers (OEMs) and their key suppliers to achieve competitive advantage especially in the face of global volatile and unpredictable markets is gaining popularity [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF]. Involving suppliers from initial product development through to final assembly reduces product development time, manufacturing expenses and improves quality [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF] by evaluating and managing the inherent logistics. OEMs increasingly hand over their non-core business to key suppliers who can demonstrate the expertise and capability necessary for the task. These key suppliers are responsible for designing, making and assembling their modular components on the assembly line, while co-owning the OEM's facility. 
The advantages of this new manufacturing formula have been reported to be tremendous. In FMP, OEMs focus on their core capabilities which include specification of envelop size and weight and overall supervision of the production process while handing over non-core business to key suppliers who can demonstrate the expertise and capability necessary providing the synergy and motivation required to form leaner core business units interacting to create mass customized products [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF]. Selection of the right set of suppliers is of strategic importance in forming this alliance and could help or hinder the inherent strength in the collaboration. Therefore, comprehensive framework is needed to facilitate supplier selection process and to cope with trends in various manufacturing strategies [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF] & [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF]. The basis considered for supplier selection include least invoice, implicit or explicit quality, delivery reliability, lot size, paper work, returns, transportation and expediting costs [START_REF] Akinc | Selecting a set of vendors in a manufacturing environment[END_REF] & [START_REF] Lipovetsky | Priority Eigenvectors in Analytic Hierarchy/Network Processes with Outer Dependence Between Alternatives and Criteria[END_REF]. The traditional approach to this selection task in procurement situations and buyer-supplier relationships has been to maintain a competitive supplier base, keeping them at arm's length, and playing them off against each other [START_REF] Akinc | Selecting a set of vendors in a manufacturing environment[END_REF] & [START_REF] De Boer | A review of methods supporting supplier selection[END_REF]. Number of authors have developed and proposed various mathematical frameworks and system modeling such as [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF], [START_REF] Sevkli | Hybrid analytical hierarchy process model for supplier selection[END_REF], [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF], [START_REF] Ramanathan | Data envelopment analysis for weight derivation and aggregation in the analytic hierarchy process[END_REF], [START_REF] Tam | An application of the AHP in vendor selection of a telecommunications system[END_REF] [12], [START_REF] Petukhova | Application of the Analytic Hierarchy Process in Development of Train Schedule Information Systems[END_REF] & [START_REF] Amponsah | Application of Multi-Criteria Decision Making Process to Determine Critical Success Factors for Procurement of Capital Projects under Public-Private Partnerships[END_REF] to assist in this process. This paper ventures explicitly into the FMP or the OEM/ supplier collaboration which has until now not been addressed comprehensively. The model proposed in this paper is simple, systematic, logical and mathematical using MAT LAB to create a user-friendly interface for the supplier selection to guide user OEMs in making robust and informed choices/ decision in the selection task. 
Framework for Defining the Supplier Selection Criteria in FMP Determining the buyer-supplier level of integration is the most important decision in the buyer-supplier selection process [START_REF] Masella | Managing supplier/customer relationships by performance measurement systems[END_REF]. Likewise, the level of integration and closeness between manufacturers and suppliers in the FMP is of vital importance in the supplier selection process. The work by [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF] is particularly significant and relevant to the FMP because it is not only investigates two basic possible qualitative and quantitative criteria, but most importantly, their approach could assist decision makers in determining the OEM-supplier integration level. This is vital in the long-term relationship inherent in the FMP. Quantitative criterion measures concrete quantitative dimensions such as cost whereas qualitative criterion deals with quality of design. Trade-offs are usually required to resolve conflicting factors between the two criteria [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF]. In the FMP business partnership and integration is desired. The OEM fully interacts or cooperates with the suppliers in the long term. It is based on series of production silos arranged serially and highly coordinated with one another [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF]. The suppliers are directly involved in the manufacturing process rather than supply and leave. High level of technology facilitates both OEM and suppliers to work towards the same strategic goals. This alliance warrants sharing of business related information to ex-plore new markets with novel ideas and technologies. It also encourages more investment in R&D. It is note-worthy the different degrees of integration and how OEM-supplier integration has evolved from JIT, JIT11, modular sequencing, supplier parks to FMP [START_REF] Noori | Fractal manufacturing partnership: exploring a new form of strategic alliance between OEMs and suppliers[END_REF]. AHP Modelling Procedure The AHP was originally designed and applied by [START_REF] Saaty | The Analytical Hierarchy Process[END_REF] [17] & [START_REF] Saaty | Models, Methods, Concepts and Applications of the Analytic Hierarchy Process[END_REF]; for solving complex multiple criteria problems involving comparison of decision elements difficult to quantify [START_REF] Lami | Analytic Network Process (ANP) and Visualization of Spatial Data: the Use of Dynamic Maps in Territorial Transformation Processes[END_REF] & [START_REF] Amponsah | Application of Multi-Criteria Decision Making Process to Determine Critical Success Factors for Procurement of Capital Projects under Public-Private Partnerships[END_REF]. It considers both qualitative and quantitative criteria in a hierarchical structure (ranking) for supplier selection. AHP divides a complex decision problem into a hierarchical algorithm of decision elements. 
A pair-wise comparison in each cluster (as a matrix) follows, and a normalized principal eigenvector is calculated for the priority vector, which provides a weighted value of each element within the cluster or level of the hierarchy, along with a consistency ratio (used for checking the consistency of the data). The main theme is decomposition by hierarchies; [START_REF] Rao | Evaluating flexible manufacturing systems using a combined multiple attribute decision making method[END_REF] finds that AHP is based on three basic principles, namely decomposition, comparative judgments, and hierarchical composition of priority. The decomposition level breaks down complex and unstructured criteria into a hierarchy of clusters. The principle of comparative judgments is applied to construct pair-wise comparisons of all combinations of the elements in a cluster with respect to the parent of that cluster. The principle of hierarchical composition or synthesis is applied to multiply the local priorities of elements in a cluster by the 'global' priority of the parent, producing global priorities throughout the hierarchy. Mathematical Formulation Leading to Supplier Selection Based on the AHP approach, the weights of criteria and the scores of alternatives are called local priorities and are considered the second step of the decision process [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF]. The decision-making process requires preferred pair-wise comparisons concerning weights and scores. The values of the weights v_i and the scores r_ij are extracted from the comparisons and listed in a decision table. The last step of the AHP aggregates the local priorities from the decision table by a weighted sum, as shown in equation (1): R_j = Σ_i v_i r_ij (1) R_j represents the global priority of alternative j and is thus obtained for ranking and selection of the best alternatives. Assessment of local priorities based on pair-wise comparison is the main constituent of this method, where two elements E_i and E_j at the same level of the hierarchy are compared to provide a numerical ratio a_ij of their importance. If E_i is preferred to E_j then a_ij > 1. The reciprocal property a_ji = 1/a_ij, with i, j = 1, 2, ..., n, always holds. Each set of comparisons with n elements requires [n × (n - 1)]/2 judgments [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF]. The other half of the comparison matrix consists of the reciprocals of the judgments lying above the diagonal and can be omitted. The decision maker's judgments a_ij are usually estimations of the exact values. Hence, a consistency ratio method was introduced by [START_REF] Saaty | The Analytical Hierarchy Process[END_REF] to govern the consistency of judgments. If a decision maker states that criterion x is of equal importance to criterion y, then a_xy = a_yx = 1, and if criterion y is extremely more important than criterion z, then a_yz = 9 and a_zy = 1/9; for the judgments to be consistent, criterion x should then relate to criterion z in the same way that criterion y does. However, the decision maker is often unable to express the judgments fully consistently, and this could affect the analysis.
Hence, the consistency method of [START_REF] Saaty | The Analytical Hierarchy Process[END_REF] measures the inconsistency of the pair-wise comparison matrix and sets a threshold boundary which should not be exceeded. In the non-consistent case, the comparison matrix A may be considered a perturbation of the consistent case. When the entries a_ij change only slightly, the eigenvalues change in a similar fashion. The consistency index (CI) is calculated using equation (2): CI = (λ_max - n) / (n - 1) (2) where n is the number of comparison elements and λ_max is the principal eigenvalue of the matrix. Then, the consistency ratio (CR) is calculated as the ratio of the consistency index to the random consistency index (RCI), which represents the consistency of a randomly generated pair-wise comparison matrix. The consistency ratio is calculated using equation (3): CR(A) = CI(A) / RCI(n) (3) If CR(A) < 0.1 (10%), the pair-wise comparison matrix is considered consistent enough. In the case where CR(A) > 0.1, the comparison matrix should be improved. The value of RCI depends on the number of criteria being compared (an illustrative numerical sketch of these computations appears at the end of this paper's text). Modelling the FMP Supplier Selection Process The model sorts the decision problem into a hierarchical system of decision elements. A pair-wise comparison matrix of these elements is constructed, and the normalized principal eigenvector is calculated for the priority vector, which provides the measurement of weights (relative importance) of each element. Supplier selection criteria, sub-criteria and alternatives for the FMP have been formed based on relevant extensive literature [START_REF] Chan | Interactive selection model for supplier selection process: an analytical hierarchy process approach[END_REF], [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF], [START_REF] Sevkli | Hybrid analytical hierarchy process model for supplier selection[END_REF], [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF] & [START_REF] Chan | A decision support system for supplier selection in the airline industry[END_REF] reviewed and consulted for the project. They are grouped as either tangible or intangible depending on how perceptible or realistic they are, and include the following: business criteria, manufacturing, quality assessment, performance assessment, organizational culture and strategy, personnel management, compatibility and information technology. The first four are considered tangible while the rest are intangible criteria. Modelling Procedure The general modeling procedure is summarized below: (i) Construct the hierarchy system, including several independent elements. The model has four levels of hierarchy: the overall goal, main evaluation criteria, sub-criteria and alternatives. (ii) Pair-wise comparison of criteria and alternatives is done to find comparative weights amongst the attribute decision elements. The mathematical modeling utilizes the 'slider' function of the MATLAB GUI (Graphical User Interface) as the comparative input tool. The quantified subjective decisions are stored in allocated cells. The outcome is a ranked priority order of criteria and a ranked priority order of decision alternatives under each criterion.
(iii) Calculate the weights, test the consistency and calculate the eigenvector of each comparison matrix to obtain the priority of each decision element. Hence, for each pair-wise comparison matrix, the eigenvalue of the matrix λ_max and the eigenvector w = (w_1, w_2, ..., w_n), the weights of the criteria, are estimated. (iv) The last step in the modeling is finding the overall priorities for the decision alternatives. This is calculated by multiplying the priority for each alternative under each criterion by the weight of each criterion (local weights). The calculations are performed from the lower level to the higher level of the hierarchy, and the outcome of this step is a ranked order of the decision alternatives to aid the decision-making process. (v) Validation of the model is needed to test the logical and mathematical correctness and reliability of the model. To this end, the result from the case study by [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF] is imported into the project. In [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF], the authors use a Data Envelopment Analysis (DEA) approach that is embedded into the analytic hierarchy process methodology. The criteria, sub-criteria, and alternatives and the scores of the comparisons are used as they are. The final outcome of the mathematical model is compared and the results show close agreement, validated to within 0.07%. FMP Supplier Criteria. Supplier selection criteria, sub-criteria and alternatives for the FMP have been formed based on relevant extensive literature [START_REF] Sen | A framework for defining both qualitative and quantitative supplier selection criteria considering the buyer-supplier integration strategies[END_REF], [START_REF] Sevkli | Hybrid analytical hierarchy process model for supplier selection[END_REF], [START_REF] Sevkli | An application of data envelopment analytic hierarchy process for supplier selection: a case study of BEKO in Turkey[END_REF], [START_REF] Chan | A decision support system for supplier selection in the airline industry[END_REF] & [START_REF] Perçin | An application of the integrated AHP-PGP model in supplier[END_REF] reviewed and consulted for the project. These are considered while making the optimal supplier selection for the FMP. They are grouped as either tangible or intangible depending on how perceptible and realistic they are. They form the framework in figure 1, and include the following: business criteria, manufacturing, quality assessment, performance assessment, organizational culture and strategy, personnel management, compatibility and information technology. The first four are considered tangible while the rest are intangible criteria. The input of each element is recorded in tabular form, and the AHP output is calculated once the relevant data is collected; the consistency ratio is calculated along with the AHP weights. Each relevant element of the model is compared quantitatively and the result is recorded for the final calculations. Due to space limitations, results and discussion will be provided in the conference. Conclusion Selection and maintenance of high-quality and reliable suppliers is a key component of successful implementation of the FMP. One objective of the selection process is the determination of optimal supplier criteria particularly suited to the fractal manufacturing philosophy.
The fractal company advocates a strong learning, 'open book' culture and more sophisticated communication links between fractals in order to maintain the transparency of information and to facilitate continuous improvement programs and research and development. This paper has reviewed conventional criteria used mainly in the buyer-supplier/procurement selection process and shortlisted some important criteria which are relevant to the FMP. These criteria are classified as tangible and intangible criteria. A mathematical argument is put forward to justify the process of supplier selection. To further evaluate the importance of each criterion to the FMP, this study utilizes the AHP methodology, implemented using the MATLAB programming language, to generate a framework that robustly ranks the different, often conflicting, criteria and the suppliers. The approach is flexible enough to allow decision makers to make their choices in a qualitative manner while the framework transforms the decision into quantitative results and helps in selecting the right set of suppliers without undermining the inherent strengths in the collaboration as obtained in the FMP. Fig. 1. Framework of the supplier selection process
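As referenced above, the following is a minimal illustrative sketch (not the authors' MATLAB implementation) of the AHP computations described in this paper: deriving the priority vector from a pair-wise comparison matrix via the principal eigenvector, checking the consistency ratio, and aggregating global priorities. The 3x3 example matrix and the local-priority values are invented; the random-index values are the standard published Saaty figures.

```python
# Illustrative AHP sketch (not the authors' MATLAB tool): priority vector via the
# principal eigenvector, consistency index CI = (lambda_max - n)/(n - 1), and
# consistency ratio CR = CI / RCI(n). The 3x3 comparison matrix below is made up.
import numpy as np

# Saaty's random consistency index RCI(n) for n = 1..9 (standard published values)
RCI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A: np.ndarray):
    """Return (priority vector, lambda_max, CI, CR) for a reciprocal comparison matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                       # principal eigenvalue
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                   # normalized priority vector
    ci = (lam_max - n) / (n - 1)
    cr = ci / RCI[n] if RCI[n] > 0 else 0.0
    return w, lam_max, ci, cr

# Hypothetical pair-wise comparisons of three criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam_max, ci, cr = ahp_priorities(A)
print("weights:", w.round(3), "CR:", round(cr, 3))    # CR < 0.1 -> acceptably consistent

# Global priorities: R_j = sum_i v_i * r_ij, with v = criterion weights and
# r[i, j] = local priority of alternative j under criterion i (hypothetical values).
r = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
R = w @ r
print("global priorities:", R.round(3))
```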
22,012
[ "1001980" ]
[ "485893", "485893", "485893" ]
01470670
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470670/file/978-3-642-40361-3_73_Chapter.pdf
Julien Maheut Jose Pedro A Mixed-Integer Linear Programming Model for Transportation Planning in the Full Truck Load Strategy to Supply Products with Unbalanced Demand in the Just in Time Context: A Case Study Keywords: Supply Chain Management, Automotive Industry, Full Truck Load, Case Study des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction In an increasingly globalized and internationalized world, Supply Chains (SCs) have had to adapt to remain competitive and become constantly more efficient, agile and flexible. SCs, and more specifically, all the companies involved in it, face a new problem: increasing complexity in managing products and in planning operations. Increasing the variety of products manufactured or offered to customers directly influences planning tasks, currently deployed management methods and all the processes carried out to deliver the finished good. For example, models or tools designed and optimized for a definite and characteristic situation may no longer prove efficient and/or effective when product variety widens. Increased complexity is one of the most difficult challenges for those companies currently working in tight environments because stocks are seen as wasteful and un-necessary costs if they exceed certain safety stock levels or coverage levels on days of demand, known as run-out time; that is, run-out time is the stocks coverage level of a specific product [START_REF] Bitran | Hierarchical production planning: a single stage system[END_REF]. The proper calculation of these values is not only highly relevant when demand is irregular or when various products with short life cycles have different demand levels, but can be essential when demand is uncertain; this is precisely the case of the current economic crisis. In the literature, the effect of variety on production planning, scheduling and the costs involved is a relatively developed theme [START_REF] Sun | Extended data envelopment models and a practical tool to analyse product complexity related to product variety for an automobile assembly plant[END_REF]. Nonetheless, effects on transport tasks (shipping or supplying) have not been as well addressed to the best of our knowledge. Transportation planning is often approached from three perspectives and different backgrounds: strategic transport planning, tactical transport planning and operational transport planning. At the operational level, the literature presents and identifies two main problem classifications: routing and truck-loading problems. However, the size of truck shipments may also be considered: partial shipments problems, less than truckload (LTL) shipments problems and full truck load (FTL) shipments problems. Historically, the automotive industry has used milk runs to resolve collection routes. Moreover, crossdocking systems [START_REF] Boysen | Cross dock scheduling: Classification, literature review and research agenda[END_REF] and, obviously, direct full load shipments [START_REF] Garcia-Sabater | A two-stage sequential planning scheme for integrated operations planning and scheduling system using MILP: the case of an engine assembler[END_REF] are also used. Typically at the operational level, automotive assembly plants employ three different strategies to coordinate supply strategies [START_REF] Ben-Khedher | The Multi-Item Replenishment Problem with Transportation and Container Effects[END_REF]. 
Suppliers that supply low-volume products can receive direct shipments from a single supplier following the LTL strategy. Alternatively, shipments from multiple suppliers can be consolidated using milk runs in the LTL or the FTL strategy. These three strategies are usually fixed according to the supplier because supply transport capacity contracts in the automotive sector are long-term contracts. They reserve a fixed transport capacity in each horizon and the carrier has to pay a fixed amount during each period, which is completely independent of the use of trucks. This long-term contracts policy is changing with new social conditions and environmental standards. For instance, optimizing fleet use has become one of the most important measurable performances. In order to cut costs and minimize the environmental impact, the general trend in the automotive industry has been to reduce the number of actors in its own SC [START_REF] Cousins | Supply base rationalisation: myth or reality?[END_REF]. These SCs' first-tier companies have attempted to move toward a single supplier per product family. The direct consequence is that supply logistics has shifted toward the FTL strategy in which each provider supplies multiple products. The full-filling trucks problem has been traditionally solved with the help of a personalized, customized spreadsheet [START_REF] Garcia-Sabater | A two-stage sequential planning scheme for integrated operations planning and scheduling system using MILP: the case of an engine assembler[END_REF]. Moreover the "follower", which is the supplier contact, is responsible for planning and supervising truck loads; moreover, it is usually responsible for determining stock levels on both sides of the SC. Each follower's working method might differ even within the same company as it takes into account not only actual product characteristics, but also different vehicle characteristics. With increasing product variety, which can form a product mix within a truck, the manual working method quickly reaches its limit and does not meet company expectations. To overcome these problems, the literature offers a range of proposed solutions [START_REF] Kiesmüller | A multi-item periodic replenishment policy with full truckloads[END_REF]. As Goetschalckx states in [START_REF] Goetschalckx | Transportation Systems Supply Chain Engineering[END_REF], Ford Motors Company and General Motors use fullsize pickup truck models, but they are not described. In the automotive sector, the FTL strategy led to overdeliveries (serve in advance), as stated Garcia-Sabater et al. [START_REF] Garcia-Sabater | A two-stage sequential planning scheme for integrated operations planning and scheduling system using MILP: the case of an engine assembler[END_REF] in the case of motors distribution planning. Liu et al. [START_REF] Liu | Two-phase heuristic algorithms for full truckloads multi-depot capacitated vehicle routing problem in carrier collaboration[END_REF] present 2-phase heuristic algorithms for the full truckloads multi-depot capacitated vehicle routing problem in carrier collaboration, but the consideration of different packaging for different products is not considered. Arunparam et al. 
[START_REF] Arunapuram | Vehicle Routing and Scheduling with Full Truckloads[END_REF] propose an algorithm for solving an integer-programming formulation of this vehicle-routing problem with full truckloads, but as in [START_REF] Liu | Two-phase heuristic algorithms for full truckloads multi-depot capacitated vehicle routing problem in carrier collaboration[END_REF], a complex routing problem is considered. In the literature review, Boysen and Fliedner [START_REF] Boysen | Cross dock scheduling: Classification, literature review and research agenda[END_REF] offer an interesting literature review about cross-docking problems, but in our case study, only direct shipment by the FTL strategy must be considered. To the best of our knowledge, an MILP model for procurement planning that considers packaging and the FTL strategy which contemplates loss of truck capacity has never been proposed because of product mixture in the same truck. Other concerns in the automotive industry that our model includes are stocks limitations (minimum/maximum run out-times and total stock limits). These limitations in conjunction with stock levels, unbalanced demand and minimizing the total number of trucks used in the FTL strategy have never been considered, which implies a substantial combination of products to overcome truck capacity problems. This is precisely the aim of this paper: to propose an MILP that completes these types of trucks. Considerations such as time windows, routing and different truck capacities are not contemplated. The rest of the paper is organized as follows. Section 2 offers a detailed description of the problem study. Section 3 proposes hypotheses to solve the problem, and then presents a mixed-integer linear programming (MILP) model to solve the problem. Section 4 presents a case study. Finally, the last section includes conclusions and future research lines. Problem description An engine assembly plant is not only constituted by the assembly line of engines, but also by five component production lines. These lines constitute the so-called 5Cs (cylinder blocks, cylinder heads, camshafts, crankshafts, connecting rods). To produce these finished components, raw materials, whose origins are foundries, are produced in considerably large-sized batches. This raw material has to be purchased from suppliers and adjusted because the plant cannot hold substantial stock levels of materials at the entrance of component production lines. The problem lies in deciding how to load the truck arriving from each supplier for the purpose of minimizing the total number of trucks over the year to keep the total stock below a maximum level and to also consider at least two alternative constraints: ─ Maintaining a certain number of days of stock (called run-out time in days of demand) of raw material and a minimum safety stock for all the products. ─ Considering maximum run-out times for products and considering stock restrictions because of limited storage capacity. This run-out time can be a maximum products demand peak, but also the stored holding value of the products controlled by the finance department. Other considerations are taken into account. Because of paper's length restriction, those are not present in this extended abstract. 3 Modeling the problem Hypothesis Product consumption is known and detailed for each period of the horizon. All the costs are assumed linear and known. The capacity of racks and all the trucks is also known. 
To avoid complicating the model presented herein, the same capacity for all trucks has been considered. Minimum and maximum run-out times are considered at all times for all products, or minimum and maximum stock level values are determined by users and the respective stakeholders. While minimizing costs and ensuring the run-out time, the following goals are pursued: ─ Reducing the total number of trucks used during the horizon. ─ Reducing capacity penalties. ─ Reducing the level of obsolescence of the products in stock. Penalties depend on the mixture of products loaded, but a simplification is considered: from two different products loaded onto a truck, truck capacity will decrease by one unit for each new separate product loaded. It is assumed that the truck should be completely filled with racks of products after taking into account the capacity loss due to the mixture of products. The minimum coverage defined must be guaranteed and cannot exceed the maximum coverage in days of demand. The next section presents the mathematical MILP model which solves this problem. MILP model Data input notation. The MILP model is specified as follows. In modeling terms, we need to define two parameters: ─ The maximum number of trucks available on day t. ─ The run-out time for one product. As this last parameter takes a different value from the minimum and maximum desired stock levels for each product, a procedure has been created to calculate a single parameter that fixes the minimum and maximum stock levels for each product in each period (SM_it^min and SM_it^max). In addition, Δ_t denotes the minimum level of balanced stock of all the products on day t, δ_ijt = 1 if product i is loaded onto truck j on day t (0 otherwise), and φ_jt is the variable that counts the number of different variants loaded onto truck j on day t. Objective function The objective of the proposed model is to minimize total supply costs:
Min Z = Costs (1)
Costs = Σ_t Σ_j (C^T α_jt + C^P π_jt) + Σ_t C^U (1 - Δ_t) (2)
where C^T is the setup cost for using a truck, C^P is the penalty cost for a truck's loss of capacity, C^U is the cost of unbalanced stock, α_jt = 1 if truck j is used on day t (0 otherwise), and π_jt is the capacity penalty of truck j on day t. The objective function (1), which consists in minimizing total supply costs, may be approximated by the linear function (2). Constraints
y_i,0 = Y_i, ∀i (3)
y_it = y_i,t-1 - D_it + Σ_j v_ijt, ∀i, t (4)
SM_it^min ≤ y_it ≤ SM_it^max, ∀i, t (5)
Δ_t ≤ (y_it - SM_it^min) / (SM_it^max - SM_it^min), ∀i, t (6)
v_ijt ≤ M δ_ijt, ∀i, j, t (7)
φ_jt = Σ_i δ_ijt, ∀j, t (8)
π_jt ≥ φ_jt - 1, ∀j, t (9)
Σ_i v_ijt / R_i = K_j α_jt - π_jt, ∀j, t (10)
The initial inventory levels of products are known (3). Classical continuity constraints (4) apply to the model. The stock level reached at the end of a period must be above a minimum level without exceeding a maximum level (5). Balancing of stock levels is determined as a percentage according to the values of the stock level limits (6). With Constraint (7), we know if product i is loaded onto truck j on day t. Constraints (8) and (9) determine the number of variants loaded and the penalties associated with each truck used.
Finally, with Constraint (10), it is assumed that a truck's capacity in racks less its capacity penalty equals the racks loaded onto the truck. Case study This study was particularly motivated by the problem faced by a company which assembles motors in Spain and sends its end products all over the world. The complete case study is presented in [START_REF] Garcia-Sabater | A two-stage sequential planning scheme for integrated operations planning and scheduling system using MILP: the case of an engine assembler[END_REF], but the 4-week procurement model had to evolve because stakeholders needed to take into account new considerations such as the penalty for loss of capacity and the different run-out times of products. Given length constraints, a simple case study will be evaluated: five time periods, four products and three trucks will be considered. The different costs and the remaining parameter values of the case study are presented in Tables 4 and 5. This model is solved by employing Gurobi Optimiser 4.5 (an illustrative open-source solver sketch is given at the end of this paper's text). The results show an average running time of 305 seconds per instance using an Intel Core i7 3.22 GHz processor, 24 GB RAM and Windows 7 as the OS. The procurement planning results are presented in Table 6. As seen in the results, not all the trucks are needed in each period. Thanks to the procurement plan, we can see how capacity loss is considered and that each truck is fully loaded. Nevertheless, when implementing the real industry tool, the use of the MILP model is limited because computation times grow exponentially when product and period numbers increase. Conclusions This paper presents an MILP model for supply planning in an engine assembly plant. The planning model allows different run-out times of products based on their fundamental characteristics and plans the arrival of loaded trucks in the FTL strategy, considering unbalanced run-out times to cover any changes in production planning and stock limits, plus truck capacities which are penalized according to their load. A simple case study is proposed to demonstrate the applicability of the model. A future research line would be to identify other strategies for loading trucks and to evaluate the best strategy in terms of transport costs against holding costs using real data. Another future research line would be to determine the minimum run-out time to be maintained in case of data uncertainty. Table 1. Indexes and sets: i, products; t, periods (in day units); j, trucks. Table 2. Parameter notation: D_it, demand of product i on day t; R_i, number of products i that can be loaded in a rack; Y_i, initial stock level of product i; K_j, load capacity of truck j (in racks); SM_it^min / SM_it^max, minimum/maximum desired stock level of product i; minimum/maximum run-out times of product i (in day units); M, a large number; C^T, setup cost for using a truck; C^P, penalty cost for a truck's loss of capacity; C^U, cost of unbalanced stock. Table 3. Variable notation: y_it, stock level of product i on day t; v_ijt, number of products i loaded onto truck j on day t; α_jt, = 1 if truck j is used on day t (0 otherwise); δ_ijt, = 1 if product i is loaded onto truck j on day t (0 otherwise); φ_jt, number of different variants loaded onto truck j on day t; π_jt, capacity penalty of truck j on day t; Δ_t, minimum level of balanced stock of all the products on day t. Table 4. Parameter values (I). Table 5. Parameter values (II). Table 6. Results. Acknowledgements. The work described in this paper has been partially supported by the Spanish Ministry of Science and Innovation within the Program "Proyectos de Investigación Fundamental No Orientada" through the project "CORSARI MAGIC DPI2010-18243" and through the project "Progamacioon de produccion en cadenas de suministro sincronizada multietapa con ensamblajes/desemsamblajes con renovacion constante de productos en un contexto de inovacion DPI2011-27633".
Julien Maheut holds a VALi+d grant funded by the Generalitat Valenciana (Regional Valencian Government, Spain) (Ref. ACIF/2010/222).
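The formulation reconstructed above translates almost directly into solver code. Below is a minimal Python sketch using Gurobi's gurobipy interface (the paper reports solving the model with Gurobi Optimiser 4.5); it follows the notation of equations (1)-(10) as reconstructed here, and all numerical values are illustrative placeholders rather than the case-study data of Tables 4 and 5.

import gurobipy as gp
from gurobipy import GRB

# Illustrative data (not the case-study values): 4 products, 3 trucks, 5 days
I, J, T = range(4), range(3), range(1, 6)
D = {(i, t): 20 + 5 * i for i in I for t in T}      # demand of product i on day t
R = {i: 10 for i in I}                              # units of product i per rack
Y = {i: 60 for i in I}                              # initial stock levels
K = {j: 8 for j in J}                               # truck capacity in racks
SMmin = {(i, t): 20 for i in I for t in T}          # derived from minimum run-out times
SMmax = {(i, t): 120 for i in I for t in T}         # derived from maximum run-out times
CT, CP, CB, M = 100.0, 15.0, 50.0, 10_000           # truck, penalty and unbalance costs

m = gp.Model("ftl_procurement")
y = m.addVars(I, [0] + list(T), lb=0.0, name="y")            # stock levels
v = m.addVars(I, J, T, lb=0, vtype=GRB.INTEGER, name="v")    # units loaded
z = m.addVars(J, T, vtype=GRB.BINARY, name="z")              # truck used
x = m.addVars(I, J, T, vtype=GRB.BINARY, name="x")           # product present on truck
theta = m.addVars(J, T, lb=0, name="theta")                  # variants per truck
rho = m.addVars(J, T, lb=0, name="rho")                      # capacity penalty
delta = m.addVars(T, lb=0.0, ub=1.0, name="delta")           # balanced-stock level

# (1)-(2) total supply costs: trucks used + capacity penalties + unbalanced stock
m.setObjective(gp.quicksum(CT * z[j, t] + CP * rho[j, t] for j in J for t in T)
               + gp.quicksum(CB * (1 - delta[t]) for t in T), GRB.MINIMIZE)

m.addConstrs((y[i, 0] == Y[i] for i in I), name="init")                          # (3)
m.addConstrs((y[i, t] == y[i, t - 1] - D[i, t] + gp.quicksum(v[i, j, t] for j in J)
              for i in I for t in T), name="continuity")                          # (4)
m.addConstrs((y[i, t] >= SMmin[i, t] for i in I for t in T), name="stock_min")    # (5)
m.addConstrs((y[i, t] <= SMmax[i, t] for i in I for t in T), name="stock_max")
m.addConstrs((delta[t] * (SMmax[i, t] - SMmin[i, t]) <= y[i, t] - SMmin[i, t]
              for i in I for t in T), name="balanced")                            # (6)
m.addConstrs((v[i, j, t] <= M * x[i, j, t] for i in I for j in J for t in T),
             name="link")                                                         # (7)
m.addConstrs((theta[j, t] == gp.quicksum(x[i, j, t] for i in I)
              for j in J for t in T), name="variants")                            # (8)
m.addConstrs((rho[j, t] >= theta[j, t] - 1 for j in J for t in T), name="penalty")# (9)
m.addConstrs((gp.quicksum(v[i, j, t] / R[i] for i in I) == K[j] * z[j, t] - rho[j, t]
              for j in J for t in T), name="full_truck")                          # (10)

m.optimize()

As the paper notes, instances of realistic size quickly become expensive to solve exactly, so a sketch like this is mainly useful for validating the formulation on small data sets before moving to heuristics or decomposition.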
17,435
[ "1001986", "1001987" ]
[ "300772", "300772" ]
01470673
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470673/file/978-3-642-40361-3_76_Chapter.pdf
David Sparling email: [email protected] Fred Pries email: [email protected] Erin Cheney email: [email protected] Greening Manufacturing Supply Chains -Introducing Bio-based Products into Manufacturing Supply Chains Keywords: supply chain, modularity, innovation, bioproducts 1 ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction The major challenges in introducing new or redesigned products are often associated with consumer acceptance. The situation is different for bio-based industrial products, where the main impacts are on companies rather than consumers and the motivations are reducing costs, oil consumption and the environmental impact of manufacturing processes and products. The main challenge is often adoption by supply chain members. Adoption may require significant innovation among supply chain partners, with greater interaction and exchange of knowledge through the chain. Success may also depend on the knowledge partners supporting the innovations through research, development and design. An early outcome of the bio-based economy has been the construction of ethanol supply chains and investment in the capital intensive infrastructure to support them. Manufacturing supply chains are longer than fuel chains and involve more partners and products. New bio-based chemicals can often link to existing manufacturing chains, greatly reducing the investment needed to create more sustainable manufacturing chains. This study investigates the process of greening manufacturing supply chains viewed through the lens of supply chain modularity, where inserting a bio-based chemical module (or company) into a chain can change the environmental and financial performance of the entire manufacturing chain. The study examines the innovations required to commercialize the new technologies, the location of those innova-tions within the chains and the nature and evolution of the interactions between innovating organizations and their supply chain partners. Literature Concerns over the environment and energy have moved bio-based products higher on the strategic agendas of industrial supply chains. The chemical industry in particular, offers great potential for bio-based alternatives. In 2009, global chemical industry sales (excluding pharmaceuticals) were valued at about US$2.61 trillion (ICCA Review, 2009-2010). Revenue potential for what [START_REF] King | The Future of Industrial Biorefineries[END_REF] terms "biorefinery-based chemicals" is estimated at US$ 10-15-billion by 2020 and projected to represent 8% of global chemical sales in 2012 (ICIS, 2010). Many different areas of science and technology overlap in bio-based chemicals creating a highly complex industry (Chotani (2000), [START_REF] Lorenz | Screening for novel enzymes for biocatalytic processes: accessing the metagenome as a resource of novel functional sequence space[END_REF]). While the current landscape consists primarily of smaller new bio-based technology companies, interest is growing among multinationals [START_REF] King | The Future of Industrial Biorefineries[END_REF]. Although past focus was generally on innovation strategies within the firm, there is a growing recognition that successful innovation also depends on the actions taken by customers and suppliers. 
[START_REF] Adner | Value creation in innovation ecosystems: how the structure of technological interdependence affects firm performance in new technology generations[END_REF] characterize the innovation environment faced by companies as an ecosystem, where the success of an innovation depends, not only on the innovation strategies internal to the firm, but also on innovations within a firm's supply chain. They hypothesized that value creation and capture depend on the position of necessary innovations relative to the focal firm. Innovation in business models and industry structure is as important for a sector as scientific innovation [START_REF] Pisano | Profiting from innovation and the intellectual property revolution[END_REF]. Baldwin and Clark (2000) and Jacobides et al. (2006) identify that industry participants can strategically re-engineer industry architecture through investment in platform technologies and [START_REF] Pisano | How to capture value from innovation: shaping intellectual property and industry architecture[END_REF] argue that changes to industry architecture is one of two critical domains where value can be captured from innovation. The theory of modularity, based on design theories of Herbert [START_REF] Simon | The Science of the Artificial[END_REF], Christopher [START_REF] Alexander | Notes on the synthesis of Form[END_REF] and recent work by [START_REF] Baldwin | Where do transactions come from? Modularity, transactions, and the boundaries of firms[END_REF], provides one lens with which to examine the introduction of bio-based technologies and products into traditional manufacturing supply chains. Commercializing some inventions may involve the simple substitution of a link in the supply chain ('drop-in' innovations), while others may require adaptation by many organizations in the supply chain. The ease with which substitutions can occur may be dependent on the modularity of the supply chain. Baldwin and Clark (2000) define a module as a group of tasks that are highly interrelated within the module but are only marginally related to tasks in other modules. [START_REF] Baldwin | Where do transactions come from? Modularity, transactions, and the boundaries of firms[END_REF] identifies and characterizes thick and thin crossing points in supply chains, suggesting thin crossing points have few, relatively simple transfers of material, energy and information and often occur between modules. Thick crossing points have numerous and/or complex transfers, high-transaction costs and can be associated with opportunistic behavior [START_REF] Baldwin | Where do transactions come from? Modularity, transactions, and the boundaries of firms[END_REF]. The design of transactions and relationships differs systematically with the thickness of the crossing points. Spot transactions are more likely at thin crossing points, while vertical integration or formal and relational contracts designed to reduce transactions costs are more common at thicker ones. Many of the theories on innovation, modularity and supply chains were developed in the computer industry. Bioproduct innovation differs on several dimensions. The products have longer life cycles, the objectives include greener supply chains and replacing oil, and supply chain partners include farmers and forestry companies. 
This study extends the concepts introduced by Adner and Kapoor (2009) to the bioproduct industry and particularly bio-based chemicals, examining how the nature and location of innovations affect the commercialization process for bio-based chemical innovations. It applies Baldwin's theories on the thickness of crossing points to innovation relationships in bio-based supply chains and adds consideration of timing to the discussion on modularity. Methods A case study approach was employed to study innovation in four supply chains developing new bio-based products (Figure 1). The research addressed the following hypotheses: ─ H1: Introduction of a new bio-based substitute for an existing oil-based component will be more likely to succeed where the production of the bio-based substitute occurs in a highly modular organization with thin crossing points to existing supply chains and few innovations needed in the rest of the chain. ─ H2: The nature of modules changes as technologies are developed. Innovating organizations must incorporate more transactions and organizations during the early stages of development of a new bio-based technology than at later stages and will exhibit differences in the thickness of their crossing points at different stages. ─ H3: Introduction of a bio-based substitute will be more successful if the innovations needed to commercialize the product are located adjacent to the focal firm. Background data for the case studies was collected using publicly available data. More detailed information on the companies was collected through semi-structured interviews with selected industry participants holding senior management positions. Results Three chains involved new-to-the-industry bio-based chemicals and the fourth involved a bio-fibre composite material. The research found that bio-based chemical chains were typically composed of 'traditional' technology firms, with a single bio-focused firm which acted as the link between biomass production and traditional economy firms (Figure 1). The motivations for adopting bio-based products differed. For some chains the motivation was to produce more environmentally friendly products for end consumers (Chains 1 and 2). In other cases, the motivation was reducing costs (Chains 3 and 4). Several of the relations between the innovating firms and their supply chain partners represent thick crossing points with significant knowledge and process interactions and higher transaction costs. Thick crossing points were characterized by greater market and technology uncertainty and generally resulted in the need for more formal contractual relationships. At thin crossing points, lower transaction costs and knowledge interactions were required to commercialize the new bio-chemical products. Thin crossings were also associated with less formal contracts using spot market pricing and supply contracts. Thick crossing points were seen more in downstream relationships than upstream, where knowledge of the new biomaterial had to be passed between modules or where skills had to be temporarily incorporated to assist in commercialization, as discussed above. Although the sample size was limited, the results affirmed H1: innovations which required fewer innovations in other firms in the supply chain were more likely to succeed. This was the case in chains 2 through 4, which were moving to successful commercialization, while chain 1 was challenged by the downstream innovations needed in the succeeding two levels.
The focal firm had to deal with uncertainties of feedstock supply, in addition to the marketing and sale of a new chemical molecule that was heavily dependent on third party partners and downstream collaborators to show proof of concept at all levels of the supply chain. Chains 2, 3 and 4 also provided confirmation for H2: each exhibited much broader modules during development, incorporating more transactions and organizations than were planned for or needed after development was completed. The innovating firm in chain 3 used vertical integration to reduce transaction costs downstream, while investments by governments helped reduce transaction costs in chains 2 and 4. Both chains 2 and 3 planned to reduce their module scope once development was completed, changing their downstream crossings from thick to thin. Results also affirmed H3: introduction of bio-based alternatives is most successful when innovations required for commercialization are located near or within the focal firm. In chain 2, the innovations required were immediately before the innovating firm and were managed during development through shared development and government support for knowledge partners. In chain 3, the innovations needed were immediately upstream, modifying an ethanol plant, and downstream, encouraging buyers to switch. The innovating firm expanded its module, vertically integrating backward during development to control the integration of its processes with ethanol production. Once the technology and processes are fully understood and stable, the innovator will refocus, concentrating solely on the iso-butanol modules which can be sold or licensed to ethanol facilities. Chain 1 is a good example of how required innovations further downstream can challenge innovating companies. Conclusion The supply chains studied all illustrated the application of modularity theory to the introduction of bio-based technologies into traditional manufacturing chains. Rather than constructing entirely new chains, bio-based modules were integrated into existing chains, but that integration was influenced, at least in part, by the nature and location of downstream innovations needed to commercialize the new technologies. The structure and transactions in the module and the thickness of its crossings changed during development as innovation challenges were resolved. Solving the upstream and downstream innovation challenges required a relationship between the innovation process and the production processes and systems in the existing supply chains. It was also dependent on the role of partners with external knowledge.
Fig. 1. The four bio-based product supply chains studied.
Table 1. Interaction between the bioproduct innovator and the existing supply chain.
Innovator: Case 1, Segetis; Case 2, Evolution Biopolymers; Case 3, Gevo; Case 4, BioAmber.
Product: Case 1, L-ketal, a unique chemical from levulinic acid and glycerol; Case 2, a biocomposite of up to 30% natural fibre and recycled plastic; Case 3, a platform technology for producing bio-based iso-butanol; Case 4, a bio-based succinic acid and its derivatives such as PBS and BDO.
Key product features: Case 1, improved functionality as solvent, polyol, plasticizer; Case 2, lower cost, more strength and "green" footprint; Case 3, competitively priced iso-butanol; Case 4, low cost replacement, bio-succinic acid.
IMPACT ON DOWNSTREAM SUPPLY CHAIN
Knowledge user and producer must have about the other's domain: Case 1, very high, the new molecule requires knowledge sharing with the user; Case 2, limited, the biocomposite resin must work in existing molding processes; Case 3, high, the exchange supports advances in chemistry and the market for iso-butanol; Case 4, very limited, bio-succinic acid is identical to petroleum derived succinic acid.
Crossing point: Case 1, thick; Case 2, thin; Case 3, thick; Case 4, thin.
Nature of transaction between parties: Case 1, long-term supply and joint development agreements; Case 2, purchase order; Case 3, long-term supply agreements for iso-butanol; Case 4, exclusive and long-term supply agreements, joint development.
Transaction costs: Case 1, high; Case 2, low; Case 3, moderate; Case 4, low.
Strategies to reduce downstream transaction costs: Case 1, joint development agreements; Case 2, government funded R&D center; Case 3, investment from major customer, exclusive downstream relationships; Case 4, investment from customer.
14,580
[ "1001989", "1001990", "1001991" ]
[ "485901", "300747", "485901" ]
01470675
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470675/file/978-3-642-40361-3_78_Chapter.pdf
Anita Romsdal email: [email protected] Emrah Arica email: [email protected] Jan Ola Strandhagen email: [email protected] Heidi Carin Dreyer email: [email protected] Tactical and Operational Issues in a Hybrid MTO-MTS Production Environment: The Case of Food Production Keywords: planning, control, hybrid, MTS, MTO, food des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Food production is similar to process manufacturing, showing a higher complexity than discrete manufacturing [START_REF] Crama | A discussion of production planning approaches in the process industry[END_REF]. In addition to great attention to quality and food safety, food producers have traditionally focused on economies of scale to keep costs and prices down (van Donk et al., 2008, Verdouw and[START_REF] Verdouw | Reference process modelling in demanddriven agri-food supply chains: a configuration-based framework[END_REF]. However, a production system as a whole should not only focus on costs but also show high flexibility in reacting to changing market conditions, fluctuating demand forecasts and actual demand [START_REF] Bertrand | Production control: a structural and design oriented approach[END_REF]. Over the past decades the food sector has therefore attempted to become more responsive by shifting from the traditional make-to-stock (MTS) approach towards applying more make-to-order (MTO) and combined MTO-MTS approaches [START_REF] Crama | A discussion of production planning approaches in the process industry[END_REF][START_REF] Soman | Combined make-to-order and make-to-stock in a food production system[END_REF]. The need for differentiating products and managing them differently is well recognised in literature (see e.g. [START_REF] Fisher | What is the right supply chain for your product?[END_REF][START_REF] Christopher | A taxonomy for selecting global supply chain strategies[END_REF]. However, such hybrid systems complicate the task of production planning and control (PPC) considerably since combining MTO and MTS in the same production system impacts on a number of tactical and operational issues and decisions -requiring companies to deal with complex trade-offs between inventory policies, number of set-ups, machine utilisation, production lead times, needs for cycle and safety stock, etc. [START_REF] Soman | Combined make-to-order and make-to-stock in a food production system[END_REF]. The purpose of this paper is therefore to highlight and discuss some of the tactical and operational PPC issues and decisions involved in a hybrid production environment. Particular emphasis is put on how to handle the demand uncertainty caused by the application of both MTS and MTO in the same production system. The paper starts with a description of the study's research methodology, followed by an introduction to the empirical background. Next, PPC is defined, while the two subsequent chapters outline and discuss issues on the tactical and operational planning levels respectively. The conclusion outlines the paper's contributions and some suggestions for further research. Methodology This conceptual paper is a theoretical discussion of the tactical and operational implications of a concept involving hybrid MTO-MTS production approaches. Research on these more operational aspects of hybrid production situations is scarce since the majority of research focuses on a single type of production environment. 
The aim of the paper is therefore not to provide solutions to these highly complex issues, but rather to highlight and discuss some of the most critical decisions based on existing literature and the authors' experiences from industry. The paper's theoretical base is within operations strategy, planning and control, and scheduling, and the discussion exploits and combines the advances of related production environments to provide new insights. The study focuses on the food sector as this is one of the sectors where hybrid production environments are becoming more common. Food Sector Characteristics Food supply chains deal with highly perishable goods, where rapid product and raw material deterioration significantly impacts on product quality and the amount of waste both within the supply chain and in consumer households. In addition, demand and price variability of food products is increasing, making food supply chains more complex and harder to manage than other supply chains [START_REF] Ahumada | Application of planning models in the agrifood supply chain: A review[END_REF]. Food supply chain actors are faced with the challenge of supplying an ever broader variety of these perishable products to increasingly demanding customers, while at the same time moving products quickly through the supply chain and keeping costs as low as possible. Table 1 summarises some of the supply chain and logistics characteristics which are particular to the food sector.
Table 1. Food supply chain characteristics (based on [START_REF] Romsdal | Fresh food supply chains; characteristics and supply chain requirements[END_REF]).
Product:  High and increasing product variety, particularly for promotions, decreasing product life cycles, high percentage of slow-moving items.
Market:  Varying and increasing demand uncertainty, fairly predictable annual demand, high variation in periodic demand.  Customers demand frequent deliveries and short response times.
Supply:  Some supply uncertainty and variable raw material yield.
Production system:  Capital-intensive technology, long set-up times, high set-up costs.  Long production lead times, processes adapted to high volume, low variety, with raw materials and intermediates processed in batches.
Food production can be classified as a process industry where production of standard products is mainly continuous, with large production series, and raw materials and intermediates are accumulated and processed together in batches. The typical steps are receipt of inputs (raw materials, ingredients, packaging materials, etc.), processing, packing (where bulks are transformed to discrete products through sizing and labelling), and delivery. Typically there are three stock positions: raw materials, unpacked bulk products, and packed end products [START_REF] Méndez | An MILP-based approach to the short-term scheduling of make-and-pack continuous production plants[END_REF] (van Dam et al., 1993). The characteristics in Table 1 show that there is increasing product variety and demand uncertainty in the food sector, which significantly increases the complexity of PPC for food producers. A differentiation strategy according to demand uncertainty has been suggested as a way to reduce this complexity. Kittipanya-ngam (2010) and [START_REF] Romsdal | Linking supply chain configuration with production strategy; the case of food production[END_REF] suggest that the products that are the most difficult to plan and control should be given most focus in PPC.
In this way, products with high demand uncertainty are within the "focus box" and controlled using an MTO strategy, while the remaining products are associated with an MTS strategy. This means that producers may find themselves in a situation where they need to combine MTS and MTO approaches within the same production system -thereby significantly complicating PPC on the tactical and operational level. Introduction to Production Planning and Control (PPC) Planning and control refers to the task of defining the structures and information upon which managers within a production system make effective decisions [START_REF] Vollmann | Manufacturing planning and control systems for supply chain management[END_REF], and the design of the PPC system should be based on company-and industry-specific needs and characteristics [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to the make-to-order industry[END_REF]. At the highest planning level, the PPC approach is determined. The most common approaches include MTS, MTO, engineer-to-order (ETO), assemble-to-order (ATO) and mass customisation (MC). In the food sector, production systems are commonly classified as following either an MTS or an MTO strategy. In addition, ATO can be relevant in cases where the processing and packaging processes can be decoupled [START_REF] Romsdal | Linking supply chain configuration with production strategy; the case of food production[END_REF]. However, in food processing, neither a pure MTO nor a pure MTS strategy is practical and food is therefore one of the sectors where a combined MTO-MTS approach is quite common. At the strategic PPC level product families are formed in order to group items which can be planned and controlled using the same strategy. In addition, target service levels are set against which the performance of the production system is later evaluated. This level should also ensure that the operational capabilities meet the total load of aggregated demand for products and resources in the long run. Operating a hybrid MTO-MTS approach brings about a number of issues involving complex trade-offs which must be thoroughly evaluated and incorporated into the lower PPC levels. The key issue is how to deal with MTO items in the MTS schedule -and some of the tactical and operational decision and alternative methods for dealing with these in hybrid production situations are discussed in the following chapters. Tactical Level Issues At the tactical level, the production volumes for MTS items are planned and the material planning is performed to determine the quantity and timing for components needed to produce these end-items. In a hybrid environment this level must also accommodate the uncertainty associated with the quantity and timing of future demand for MTO items into the material plans. The literature contains several studies that discuss methods appropriate for tactical PPC decisions. [START_REF] Jonsson | The implications of fit between planning environments and manufacturing planning and control methods[END_REF] argue that the re-order point system, runout-time planning, and material requirements planning (MRP) methods seem to work well for making detailed materials planning decisions in an MTS environment with standardised product components produced in a batch production process. Further, they suggest a good match between MRP and the MTO environment. 
However, [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to the make-to-order industry[END_REF] argue that MRP does not fully address the key decision support in an MTO environment since capacity is not considered at the point of order/job entry and order release. At the operational level, order acceptance and due date assignment are other key decisions in an MTO environment which must consider capacity. Based on the above requirements, Workload Control (WLC) can be appropriate since it ensures high due date adherence and considers capacity simultaneously. WLC uses a pre-shop pool of orders consisting of a series of short queues, where jobs are released if workload levels will not exceed pre-set maximum limits. Simultaneously, WLC ensures jobs do not stay in the pool too long, thereby reducing work in progress (WIP) and lead times [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to the make-to-order industry[END_REF]. However, before these methods can be applied in a hybrid MTO-MTS environment, the differences in the production rates in MTS and MTO environments need to be considered. The differentiation strategy described in chapter 3 is based on the majority of production being run using the MTS strategy (i.e. for products with low demand uncertainty), thus requiring a standardised method like MRP to reduce operating costs. In addition, MTO orders are received occasionally, requiring a focus on strict adherence to specified due dates. Based on this, a possible solution for the hy-brid environment is to combine the MRP method with WLC. MRP can be used as the backbone of the system -but must be tailored and supported with some additional techniques so that the WLC method can be applied at the point of new MTO order entry. In addition to the issue of dealing with MTO orders, other potential disruptions to schedules can occur which must be handled at the operational short-term level. Consequently, in order to ensure consistency between the tactical and operational levels, as well as to enable the combination of MRP and WLC methods, the tactical level must contain some approaches which consider such uncertainties and provide the required flexibility. Although some studies have been conducted on how to incorporate MTO products into an MTS planning environment (see e.g. [START_REF] Federgruen | The impact of adding a make-to-order item to a make-to-stock production system[END_REF]Katalan, 1999, Soman et al., 2006), the studied approaches only considered a narrow selection of food supply chain characteristics. There is therefore a need to investigate a broader set of techniques that consider more of the food sector characteristics. Different techniques exist to address uncertainties in different contexts. In general, supply chains can buffer against uncertainty using inventory, capacity and time. MTS environments use inventory and capacity as buffers -where safety stock is used to ensure availability when demand is greater than expected, while capacity allows for stock to be duly replenished. In MTO environments, customer orders cannot be delivered instantly and are therefore stored in the order book before they are released as production orders, thus spreading the demand variability out over time [START_REF] Hedenstierna | Dynamic implications of customer order decoupling point positioning[END_REF]. 
Safety lead time to tackle uncertainty in timing can be a more appropriate technique than safety stocks when demand is stable (Buzacott and Shanthikumar, 1994), thus representing a useful approach for products with low perishability and low demand uncertainty. Further, hedging has been suggested as a useful technique for coping with internal uncertainties [START_REF] Koh | Uncertainty under MRP-planned manufacture: review and categorization[END_REF], thus representing a useful technique for products with internal error-prone characteristics such as cheese which requires maturation periods as part of the production process. In summary, MRP in combination with WLC seems to be a promising approach for material planning -supported by additional techniques to accommodate uncertainties and provide flexibility. Before these techniques can be applied in practice, further investigation with regards to their ability to handle the characteristics of different product-market combinations and their interactions is needed. Operational Level Issues The operational level involves determining which product to produce next, when to produce, and how much to produce in the short term, e.g. week or day. The production orders are sequenced and scheduled on machines and other resources within the planning period, determining the set of production orders to be accomplished in the bottleneck, sequence of production orders, and production orders' run length and starting times [START_REF] Soman | Capacitated planning and scheduling for combined make-to-order and make-to-stock production in the food industry: An illustrative case study[END_REF]. Developing daily/weekly plans and schedules for production volumes, as well as sequencing orders on the shop floor, is not a substantially challenging task in a stable MTS environment. However, during the execution of the schedules, several types of customised orders for MTO products may be received in a hybrid production environment. Such changes may trigger the rescheduling of production orders and revision of priorities given to the shop [START_REF] Jacobs | Manufacturing planning and control for supply chain management[END_REF]. Once required flexibility and uncertainties are accommodated at the intermediate tactical planning level, the capacity-based WLC method is an appropriate approach to fit MTO products into the operational schedule, while also incorporating the customer order entry level. At the point of customer order entry, the due date is set depending on the capacity status. This decision is applicable for products with long customer order lead time allowance and negotiable due dates. After the due date is known, the order release date is determined by deducting planned workstation lead time from the due date. Workstation lead time can be assumed stable in this highly controlled process-type environment. Depending on the existing and required workload for the new MTO order, the order is added to the sequence of MTS products being released in that period. If the total workload exceeds the workstation load limit, there are four available options. The preferred option is to move the order release date to the earlier periods, evaluating the available capacity until the present period. By this approach, the system nervousness and cost of rescheduling can be avoided. Products with low perishability and long customer order lead time allowances are good candidates for such forward scheduling. 
However, if the product perishability does not allow moving the order to earlier periods, there are three other options: to reschedule the pool of jobs at the point of order release with the aim of reducing set-up costs, to increase capacity, or to renegotiate the due date. Orders are normally prioritized and sequenced according to their order release date. This is regarded as one of the advantages of WLC concepts as the performance of order release simplifies the shop floor dispatching process [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to the make-to-order industry[END_REF]. However, in a food production environment this might lead to high sequence-dependent set-up costs, and a sequencing rule that considers the trade-off between order priorities and set-up costs might thus generate considerable benefits. In summary, we suggest that also at the operational level the combined MRP-WLC approach can improve the effectiveness of schedules in hybrid environments. The operational performance of the schedule can then be measured on its ability to meet due dates for MTO products, minimise the time jobs spend in the process, reduce WIP inventory for MTS products, and minimise set-up costs and waste. Conclusion This paper has provided increased understanding and knowledge on the tactical and operational implications of hybrid production environments. A number of critical decisions and alternative approaches to balance the requirements of both MTS and MTO items were highlighted and discussed on a material and product level, and a combined MRP-WLC approach seems promising in addressing some of the issues. In terms of contributions to practice, the paper provided an overview of critical issues which companies must handle when designing PPC systems for such hybrid environments. However, further studies are required to investigate implications for planning and control on a resource level and how the MRP-WLC approach can be applied in practice. In addition, it should be investigated which PPC techniques are appropriate for different degrees of perishability, demand uncertainty and customer order lead time allowances. Relevant aspects to consider include differences in production lead times and maturation times, the point of variant explosion for different product families, and interdependencies between different products, for instance in terms of set-up times and costs. The main limitations are related to the study's conceptual nature, and further research is required to investigate the appropriateness and applicability of the suggested approaches and techniques in practice. Acknowledgements. This research was made possible by LogiNord (Sustainable Logistics in Nordic Fresh Food Supply Chains, supported by NordForsk) and SFI NORMAN (Norwegian Manufacturing Future, supported by the Research Council of Norway).
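To make the order-entry and release logic described in the operational level section more tangible, the following Python sketch shows one hypothetical way to encode the workload check with the forward-scheduling option for non-perishable MTO orders. The data structures, the single-workstation simplification and the load limit are illustrative assumptions and are not taken from the paper.

from dataclasses import dataclass

@dataclass
class MTOOrder:
    product: str
    workload: float        # load the order adds at the bottleneck workstation
    due_date: int          # period in which the order must be delivered
    lead_time: int         # planned workstation lead time, in periods
    perishable: bool       # True if forward scheduling to earlier periods is not allowed

def plan_order_release(order, released_load, load_limit):
    """Return the release period for an incoming MTO order, or None if it does not fit.

    released_load[t] holds the workload already released for period t
    (the MTS schedule plus previously accepted MTO orders).
    """
    target = order.due_date - order.lead_time          # due date minus planned lead time
    if target < 0:
        return None
    if released_load[target] + order.workload <= load_limit:
        released_load[target] += order.workload        # fits in the target period
        return target
    if not order.perishable:                           # preferred option: release earlier
        for t in range(target - 1, -1, -1):
            if released_load[t] + order.workload <= load_limit:
                released_load[t] += order.workload
                return t
    # remaining options: reschedule the pool, increase capacity, or renegotiate the due date
    return None

# Example usage with a 12-period horizon and some MTS load already released:
# load = [60.0] * 12
# print(plan_order_release(MTOOrder("product A", 25.0, 8, 2, False), load, load_limit=80.0))

In practice the same check would be run per workstation and combined with the sequencing rule that trades off release dates against set-up costs, as discussed above.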
20,451
[ "1001994", "1001995", "999944", "991878" ]
[ "50794", "50794", "556764", "50794" ]
01470676
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470676/file/978-3-642-40361-3_79_Chapter.pdf
Hejn Nielsen email: [email protected] A note on the simple exponential smooth non-optimal predictor, the order-up-to policy and how to set a proper bullwhip effect Keywords: The bullwhip effect, inventory variance, non-stationary demand, unobserved parameters The literature concerning the bullwhip effect is mostly focused on determining expressions for the theoretical bullwhip measure given specific theoretical system setups, whereas it must also be of interest to deal with the problem of how in fact to make a proper choice as to a sensible bullwhip level. Such a management approach to the bullwhip phenomenon has to be of quite some importance, as the bullwhip effect on the one side definitely is a system malfunction, but on the other also an effect the size of which common intuition tells us should be possible to control. The control is based on a decision as to what variation in demand should be locally absorbed and what variation should be passed on upstream. This paper will focus on design aspects of a bullwhip control policy in order to decide on sensible trade-offs between the bullwhip level and the local inventory variability. Introduction When digging down into the vast body of literature dealing with the bullwhip effect, it seems quite possible to get a fairly good understanding of the phenomenon on the generic level, but when it comes to the actual managerial choice of an appropriate bullwhip value, the story seems to be less decisive, leaving us with an undetermined trade-off between the variation in demand that is absorbed locally into the inventory and the variation in demand that is propagated upstream in the supply chain. Somehow, such a trade-off has always been a core topic when dealing with standard production and inventory control [START_REF] Silver | Inventory Management and Production Planning and Scheduling[END_REF] under the headings of production smoothing or production leveling. The role of forecasting in much classical production and inventory theory is often a combined one in that the forecasting mechanism also handles the production smoothing upstream. The simple exponential smoothing mechanism is one of the most popular schemes in that respect and the smoothing coefficient is by experience recommended to be held within the interval 0.1 to 0.3 typically. Any supply chain collaboration or coordination aims at balancing production smoothing with inventory control at each tier. A basic intuition tells us that the less smoothing locally, the more variation is sent upstream in the supply chain. So which are the controlling handles when talking about the bullwhip effect, and how should these handles be set? And do we really have this inverse relation between variation absorbed locally and variation exported upstream? Let us consider as a starting point the discussion given by Stephen C. Graves [START_REF] Graves | A Single-Item Inventory Model for a Nonstationary Demand Process[END_REF], where he develops the bullwhip formula BW = (1 + (1 − θ)·L)² derived conditionally on the optimal predictor, given that the demand DGP (Data Generating Process) is IMA(1,1) [START_REF] Box | Time series analysis: Forecasting and control[END_REF]. The optimal predictor in this case is the simple exponential smoothing mechanism with the smoothing coefficient α = 1 − θ.
In the Graves setup we are then left with the parameters θ and L, both of which are not really controllable in the short run, but given by the structure of the supply chain. If we want to get access to the non-optimal exponential smoothing coefficient values, we need to consider bullwhip measures unconditionally. However, the unconditional variance of a non-stationary variate, as for instance a random walk (IMA(1,1) with θ = 0), is infinite. We therefore have to redefine the bullwhip measure in this case to be focused only on the amplifying signal effect on top of the (stochastic) trend movements. The traditional way to eliminate the (stochastic) trend in the classical ARIMA [START_REF] Box | Time series analysis: Forecasting and control[END_REF] approach is to consider the differenced data instead. Henceforth, the bullwhip measure that will be used is then defined as
BW = Var(Δq) / Var(ΔD) (1)
where q denotes the upstream ordering and D is the demand. The paper will be organized in the following way: In section 2 the simulation model used in this work is described and commented upon. Section 3 is devoted to mapping out the bullwhip measure as well as the inventory variation effect as a function of the simple exponential smoothing factor. Section 4 is dealing with the trade-off between absorbing demand variation in the local tier inventory and passing the variation upstream as bullwhip. Finally, section 5 presents a few concluding remarks. The supply chain simulation model The model used in order to simulate the actual bullwhip values follows tradition. Customer demand as seen by the retailer is assumed to be well described by a non-stationary time series model of the ARIMA(0,1,1) type process
ΔD_t = ε_t − θ·ε_{t−1}, where ε_t ∼ i.i.d. N(0, σ²_ε) (2)
This process is covariance stationary for any value of θ; however, the MA(1) part of the model is only invertible for values of the parameter θ between -1 and 1. Whenever θ = 0, the IMA(1,1) alias ARIMA(0,1,1) is equivalent to the dynamic process which is normally denoted "a random walk". As we are facing a delivery lead time L typically of a non-negligible magnitude, it is necessary to have a sound idea of the cumulative amount of demand that potentially could be executed during such a period. A measure y_t of such a lead time demand could be formalized as follows
D̂_t(1) = D̂_{t+1} = S_t = α·D_t + (1 − α)·S_{t−1}
D̂_t(L) = L·D̂_t(1)
σ̂²_{e_t}(L) = σ̂²_ε·(1 + (L − 1)·α²)
y_t = D̂_t(L) + z·σ̂_{e_t}(L) (3)
where -1 < θ < 1 and 0 ≤ α ≤ 1. This setup represents the optimal predictor setup, if α = 1 − θ, in terms of minimum mean square error forecast for the IMA(1,1) process, and the z-value (≥ 0) controls to some extent the "worst case" dependence, as a function of the variance of the stochastic innovations of the IMA(1,1) demand process, induced on the cumulative future demand.
The determination of the size and timing of orders q t that are placed by the retailer upstream at the manufacturer can be controlled by a variety of ordering mechanisms [START_REF] Disney | On Replenishment Rules, Forecasting and the Bullwhip Effect in Supply Chains[END_REF], however, if we assume an order-up-to "base-stock"-policy as is often encountered in the literature (see for instance [START_REF] Chen | Quantifying the Bullwhip Effect in a Simple Supply Chain: The Impact of Forecasting, Lead Times, and Information[END_REF][3] [START_REF] Graves | A Single-Item Inventory Model for a Nonstationary Demand Process[END_REF]) and which are basically of a make-to-order nature, q t can be expressed as follows q t = y t -y t-1 + D t (4) Inventory is then simply a matter of elementary "stock-flow" bookkeeping I t = I t-1 -D t + q t-L (5) The simulation setup is a periodic time setup and 50,000 time steps (approx. steady-state) are being simulated for each combination of α = 0 to 1 in steps by 0.02 and θ = 0 to 0.9 in steps by 0.1. The optimal predictor situations α = 1 -θ are singled out specifically. Demand is simulated with D(init) = 0 and error term Std.dev=10 and afterwards level shifted in order to be exactly non-negative. Inventory is also controlled for non-negativity. I(init) = 5, 000 seems to do the job. z is set equal to zero throughout this paper. The actual computations are programmed and performed in the Gnu-R statistical system [START_REF]The R Project for Statistical Computing[END_REF]. Mapping out the bullwhip and inventory variance effect The variance components that will be dealt with in this section are V ar(∆D), V ar(∆q) and V ar(I). In order for these computations to be meaningful ∆D, ∆q and I all have to be covariance stationary. ∆D complies by construction, whereas ∆q and I are checked manually. For α = 0 the inventory dynamics are defined by I t = I t-1 -D t + D t-L and here it seems that non-stationarity takes over resulting in V ar(I) approaching infinity. For α = 0 the bullwhip effect is trivially neutral, that is BW = 1. However, the full BW -map turns out as follows. The message from the obtained BW -map is fairly simple. Irrespective the actual state of the DGP for the demand and a fixed lead time L, a smaller value of α results in a smaller bullwhip effect. Furthermore, deviations from the optimal predictor setting are almost of no consequence for small α values (α ≤ 0.5) and of some consequence otherwise. This is definitely interesting from a production leveling perspective. It can furthermore be noted that the curvature of the BWcurves based on the first time differenced data is the same as the Graves ([5]) BW measure. The Graves BW measure is, however, consistently lower having a value of 16 instead of 25 for α = 1 and θ = 0. This is not surprising in that the stochasticity that works through the simple exponential smoothing mechanism has been taken completely out in the Graves setup due to the conditioning. So being able to control the bullwhip effect by the simple exponential smoothing factor, the mirror effect on the local inventory dynamics has to be observed. The full V ar(I)-map turns out as follows: The message from the obtained V ar(I)-map is a little more complicated than for the BW -map in that choosing a too low α easily results in an extremely large inventory variance implying that the local inventory is virtually out of control. 
Take as an example the top curve on the V ar(I)-map with the dot corresponding to α = 1 and thereby identifying the θ = 0 curve. In this case the demand is a pure random walk and the optimal one-step-ahead predictor is the so-called naïve forecast. Even if demand is characterized by a stochastic trend, it is quite impossible to even predict whether demand goes up or down in the next period. The inventory variance is contrary to the BW measure dependent on both scale and level of noise of the demand process. In the analyzed case the optimal predictor situation implies an inventory variance of approximately 1,300, that is a standard deviation of ±36 units. If in this case we decided to choose a non-optimal but lower value of α for example 0.5, the resulting inventory variance would be approximately 1,700, that is a standard deviation of around ±41 units. Maybe not so drastic an increase, but if α is decreased to 0.3, the variance increases to 2,200, which then is a standard deviation of around ±47 units. And finally if α is decreased to 0.1, the variance increases to 5,200, which then is a standard deviation of around ±72 units. Compared to the standard deviation on demand ±10 units it is quite impressive though. The above given example constitutes the worst case situation, but if we now turn our reasoning around and assume that we are not quite sure of the actual θ value and tend to go for a production smoothing value of α say 0.2, then we may expect the inventory variance to be somewhere between 500 and 3000, translated into standard deviations between 22 and 55. Moving up to α = 0.5, the standard deviations must be expected to lie between 24 and 45, which is quite a reduction in the uncertainty range. On the other hand going from α = 0.2 to α = 0.5 the bullwhip changes from being within the interval [2.6; 2.8] to being within the interval [7.0; 8.5]. If we know the actual θ value from the demand DGP, we can observe that an optimal choice of α = 1 -θ simply results in the minimal obtainable local inventory variance. This is quite a nice property, but it implies that the optimal prediction setup in the present case simply chooses a BW effect that is based on a maximum transference of upstream variability. This is maybe not so nice a property if the BW effect really hurts upstream. Anyway, it will clearly depend on specific trade-off considerations in given situations. How to set a proper bullwhip effect In order to decide about the choice of a "proper" bullwhip effect, here given the system's "malfunctioning" with respect to ordering, there are obviously two very different situations. One where the demand DGP parameters have been determined successfully and one where the actual parameter settings are unknown, but only known to be an IMA(1,1) structure. The figure below is constructed based on the observation that α and BW are almost one-to-one at least for α values below 0.5. For α values above 0.5 a midpoint BW value is used. The curves "UB Std.Dev(I)" and "LB Std.Dev(I)" represent the decision information where θ is unknown, whereas if θ is known, the "OPT Std.Dev(I)" curve represents the decision information. Clearly, in the case of a known θ there is not much of a trade-off, but approximatively a one-to-one strictly increasing relationship between BW and V ar(I). But then again, there is not much of a managerial choice present in this situation. More interesting is the situation where only the structural form is known, but none of its parameters. 
It must, however, be remembered that besides not knowing θ we then also do not know the size of the error of the innovation term of the demand DGP, and so any reasoning based on figure 3 is clearly only indicative as it is simulated based on a specific error term Std.Dev=10. Nevertheless, we now have a trade-off situation with respect to the "UB Std.Dev(I)". In a worst case sense the trade-off along this curve represents a lower expected inventory variance traded against a higher BW effect and vice versa. But maybe an interpretation that connects the range of uncertainty, to a choice of a BW effect is the really interesting angle on the subject of setting a proper BW effect. Small BW values correspond well to small α settings, but leave the local inventory with a huge uncertainty as to the actual dynamic variability. This uncertainty effect is certainly narrowed down for increasing BW values. The question is now really -how do we value risk with respect to local inventory variance against upstream exported and amplified demand variation as expressed by the dynamics of the ordering? Clearly, this is closely related to the worst case ("UB Std.Dev(I)") trade-off against BW , but the risk element focuses more clearly on the element of gambling that is virtually always present. Concluding remarks It is quite obvious that this work has not really given a complete answer to the posed question of how to set a proper BW effect. Still a few moments to remember have come by in that just following classical production and inventory control wisdom and setting the smoothing factor low, somewhere between 0.1 and 0.3, might be a risky business for the local inventory even if it produces a small BW effect upstream. If it was the bullwhip effect alone, α should trivially be set to zero. Now if θ is known and is low, which means that the demand process is closer to a pure random walk, then a non-optimal low α choice is really bad. We are then simply close to the "UB Std.Dev(I)" curve values. Pressing α towards zero in this situation makes things even worse in that the inventory variance goes towards infinity. So, at last and no surprise, the concrete choice of upstream bullwhip effect does in the end depend on precisely how costly unforeseen inventory and/or production activity variation is at the individual tiers in the supply chain. But the intuition that somehow there must always be a negative trade-off between the variance absorbed locally and the variance transferred up-stream seems also to be supported in the specific case studied, where demand is non-stationary and the order mechanism is of the order-up-to type, at least viewed from a certain perspective. Fig. 1 . 1 Fig. 1. Bullwhip measures as a function of α (Alpha) given θ = 0 to 1 in steps of size 0.1 (the curves); Demand as IMA(1,1) with error term Std.Dev=10 and a lead time of L=3. Each bullwhip curve is θ-specific and the curve dots are indicating the α = 1 -θ situations, thereby implicitly identifying the respective curves' specific θ value. Fig. 2 . 2 Fig. 2. Inventory variance (VAR-INVENTORY(i,)) as a function of α (Alpha) given θ = 0 to 1 in steps of size 0.1 (the curves); Demand as IMA(1,1) with error term Std.Dev=10 and a lead time of L=3. Each variance curve is θ-specific and the curve dots are indicating the α = 1-θ situations, thereby implicitly identifying the respective curves' specific θ value. Fig. 3 . 3 Fig. 3. BW against Std.Dev(I)-ranges. Initial draft versions of this work have been supported by grant no. 
275-07-0094 from the Danish Social Science Research Council.
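For readers who want to reproduce the kind of mapping discussed in sections 2 and 3, the following Python sketch mirrors the simulation setup: IMA(1,1) demand (2), simple exponential smoothing with the order-up-to quantities (3)-(4), the inventory bookkeeping (5) and the differenced bullwhip measure (1). The original computations were carried out in Gnu-R; this translation, its default parameter values and the simple warm-up handling are illustrative assumptions, not the author's code.

import numpy as np

def simulate_chain(theta, alpha, L=3, n=50_000, sigma_eps=10.0, z=0.0, seed=1):
    """Simulate IMA(1,1) demand under an order-up-to policy with exponential smoothing."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma_eps, n + 1)
    D = np.cumsum(eps[1:] - theta * eps[:-1])        # (2): dD_t = eps_t - theta*eps_{t-1}
    D -= D.min()                                     # level-shift demand to be non-negative

    sig_eL = sigma_eps * np.sqrt(1.0 + (L - 1) * alpha**2)   # lead-time error std, cf. (3)
    S = np.empty(n); y = np.empty(n); q = np.zeros(n)
    S[0] = D[0]; y[0] = L * S[0] + z * sig_eL
    for t in range(1, n):
        S[t] = alpha * D[t] + (1.0 - alpha) * S[t - 1]       # simple exponential smoothing
        y[t] = L * S[t] + z * sig_eL                         # order-up-to level (3)
        q[t] = y[t] - y[t - 1] + D[t]                        # ordering rule (4)

    I = np.empty(n); I[0] = 5_000.0                          # initial inventory as in the paper
    for t in range(1, n):
        arrival = q[t - L] if t - L >= 1 else 0.0
        I[t] = I[t - 1] - D[t] + arrival                     # stock-flow bookkeeping (5)

    warm = 10 * L                                            # discard a short warm-up period
    BW = np.var(np.diff(q[warm:])) / np.var(np.diff(D[warm:]))   # bullwhip measure (1)
    return BW, np.var(I[warm:])

# Example: the optimal-predictor case with theta = 0.5 and alpha = 1 - theta
# print(simulate_chain(theta=0.5, alpha=0.5))

Sweeping alpha from 0 to 1 for a grid of theta values with this function yields curves of the same shape as the BW- and Var(I)-maps discussed above.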
16,971
[ "1001996" ]
[ "19908" ]
01470677
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470677/file/978-3-642-40361-3_7_Chapter.pdf
Andreas Koukias email: [email protected] Dražen Nadoveza email: [email protected]@epfl.ch Dimitris Kiritsis Semantic Data Model for Operation and Maintenance of the Engineering Asset Keywords: asset, asset management, maintenance, ontology model, semantics des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Engineering assets within an organization can be the foundation for its success and future and are defined as "as any core, acquired elements of significant value to the organization, which provides and/or requires -according to a user or provider point of view -services for this organization" [START_REF] Ouertani | Through-life Management of Asset Information In[END_REF]. The management of physical assets, such as machining tools, can be a challenging task in order to optimize their performance through efficient decision making and reduce their maintenance costs, increase the revenue and guarantee their overall effectiveness [START_REF] Ouertani | Through-life active asset configuration management In[END_REF]. Physical or engineering assets, such as machining tools and containers, are distinguished from intangible or virtual assets such as knowledge, software, or financial assets [START_REF] Stapleberg | Risk Based Decision Making in Integrated Asset Management: From Development of Asset Management Frameworks to the Development of Asset Risk Management Plan[END_REF]. According to the definition of asset management proposed by the Cooperative Research Centre for Integrated Engineering Asset Management (CIEAM) [4], and adopted for this work, asset management is "the process of organizing, planning and controlling the acquisition, use, care, refurbishment, and/or disposal of physical assets to optimize their service delivery potential and to minimize the related risks and costs over their entire life through the use of intangible assets such as knowledge based decision making applications and business processes." Asset management is a holistic and interdisciplinary approach that covers in the context of physical assets the whole life cycle of the asset, from the acquisition to the disposal of the asset. Its scope extends from the daily operations of assets trying to meet the targeted levels of service to supporting the organization's delivery strategies, satisfying the regulatory and legal requirements and minimizing related risks and costs [START_REF] Koronios | Information and Operational Technologies Nexus for Asset Lifecycle Management In: 4th World Congress on Engineering Asset Management[END_REF][START_REF] Frolov | Building An Ontology And Process Architecture For Engineering Asset Management[END_REF][START_REF] Frolov | Identifying core function of asset management[END_REF]. Asset management is particularly important now with the ageing of the equipment, the fluctuating requirements in the strategy and operation levels and the emphasis on health and safety requirements [START_REF] Frolov | Building An Ontology And Process Architecture For Engineering Asset Management[END_REF]. We consider that the key concept to achieve optimization of asset management is the management of the asset's data. 
Information systems in asset management extend from collecting, storing and analyzing the asset information to supporting decision making and providing an integrated view [START_REF] Koronios | Information and Operational Technologies Nexus for Asset Lifecycle Management In: 4th World Congress on Engineering Asset Management[END_REF]. Decision makers use a variety of tools on their day-to-day and long-term activities and their effectiveness depend greatly on the quality of data. The requirements for the asset data demand that it is always complete, accurate, timely, consistent and accessible [START_REF] De Leeuw | Asset Information Management: From Strategy to Benefit[END_REF]. It is important that organizations can efficiently track the current and historical information of the assets concerning their status and component configuration along their lifecycle [START_REF] Ouertani | Through-life active asset configuration management In[END_REF]. However, asset data management systems currently suffer from system interoperability, data integration issues as well as the enormous amount of the stored data, thus preventing a seamless flow of information for monitoring and controlling the assets [START_REF] De Leeuw | Asset Information Management: From Strategy to Benefit[END_REF][START_REF] Matsokis | Ontology-based Modeling for Complex Industrial Asset Lifecycle Management: a Case Study[END_REF][START_REF] Matsokis | Ontology-Based Implementation of an Advanced Method for Time Treatment in Asset Lifecycle Management[END_REF]. The vision of Semantic Web can be the key to the harmonization of the information models, since it suggests using software agents that are able to understand the meaning of data and create connections between data automatically, to gain new information. Based on this vision, ontologies can be used to capture the semantics of data, resolve semantic heterogeneities and optimize data quality and availability. Gruber [START_REF] Gruber | Towards principles for the design of ontologies used for knowledge sharing In[END_REF] defines the ontologies as explicit formal specifications of the terms in a domain and relations among them whereas Noy and McGuiness [START_REF] Noy | Ontology development 101: a guide to creating your first ontology[END_REF] describes an ontology as a formal explicit description of concepts in a domain of discourse, with properties of each concept describing various features and attributes of the concepts and restrictions on slots. Ontologies offer a shared vocabulary to describe the knowledge for sharing in a certain domain or application area. Among the main phases that consist the engineering asset lifecycle [START_REF] Ouertani | Through-life active asset configuration management In[END_REF], the current work focuses on the operation and maintenance phase where the aim is to optimize the overall performance of the asset and guarantee its availability and longevity. The main obstacle is that available information concerning the asset's operation, configuration, maintenance and planning is currently disparate and thus not put to effective use in order to improve its quality of operations. The aim of this work is to propose a semantic data model that will integrate all this information for an engineering asset within an organization. Based on the Semantic Web vision, ontologies are proposed since they can capture the semantics of data, resolve semantic heterogeneities, create a shared domain vocabulary and optimize data quality and availability. 
We consider the various entities that are involved in the asset's usage and maintenance, as well as their relations, and try to develop a semantic data model that will assist in increasing the productivity of the asset and maintaining it, with minimum cost and high reliability.
Related Work
There are many research efforts using or recommending ontologies in the asset management domain, but to our knowledge none focuses on the operation and maintenance phase of the asset's lifecycle. In [START_REF] Frolov | Building An Ontology And Process Architecture For Engineering Asset Management[END_REF] the authors develop an initial and fundamental asset management ontology and subsequent process architecture in order to support an organization's asset management initiatives, using a manual text mining approach. In [START_REF] Matsokis | Ontology-based Modeling for Complex Industrial Asset Lifecycle Management: a Case Study[END_REF], ontologies with Description Logic are used in a case study in asset lifecycle management in order to demonstrate the benefits of implementing ontology models in industry. An ontology-based implementation for exploiting the characteristics of time in asset lifecycle management systems is presented in [START_REF] Matsokis | Ontology-Based Implementation of an Advanced Method for Time Treatment in Asset Lifecycle Management[END_REF], mainly in maintenance but also considering the entire lifecycle. The development of a generic asset configuration ontology is recommended in [START_REF] Ouertani | Through-life active asset configuration management In[END_REF], in combination with a prototype workflow management system, in order to provide a generic and active asset configuration management framework for a better visibility of through-life asset configurations. Furthermore, the authors in [START_REF] Nastasie | The Role of Standard Information Models In Road Asset Management In[END_REF] propose a conceptual model for the adoption and implementation of ontologies in the area of Road Asset Management, in order to assist the automated information retrieval and exchange between heterogeneous asset management applications. Moreover, in order to achieve efficient asset management, the minimum functional requirements at the operational level are presented in [START_REF] Haider | Information technologies implementation and organizational behavior: An asset management perspective In: Technology Management in the Energy Smart World (PICMET)[END_REF], whereas in [START_REF] De Leeuw | Asset Information Management: From Strategy to Benefit[END_REF] the requirements are outlined and a model for improving the strategy by classifying the assets is proposed. Lastly, concerning the use of ontologies in the maintenance domain, an ontology to support a semantic maintenance architecture is proposed in [START_REF] Karray | Towards a maintenance semantic architecture[END_REF], a domain ontology for industrial maintenance is shown in [START_REF] Karray | A Formal Ontology for Industrial Maintenance In: Terminology & Ontology: Theories and applications[END_REF] and an ontology to model the condition monitoring and maintenance domain knowledge is introduced in [START_REF] Jin | Semantic integrated condition monitoring and maintenance of complex system[END_REF].
Asset Management Semantic Data Model
Based on the available literature in the defined scope of activities, we propose a semantic data model for the operation and maintenance of an engineering asset, which can be seen in Figure 1. 
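As a rough indication of how the top-level concepts introduced in the following paragraphs could be encoded, the sketch below builds a small fragment of such a model as an RDF graph in Python with rdflib. The namespace, the property names (measuredOn, temperature, raisedBy) and the temperature threshold are illustrative assumptions only and are not part of the original model.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

AST = Namespace("http://example.org/asset#")  # hypothetical namespace
g = Graph()
g.bind("ast", AST)

# A subset of the model's classes; Alarm is a special kind of Asset_Event.
for cls in ("Asset", "Asset_State", "Asset_Operation_Data",
            "Maintenance_Schedule", "Asset_Event", "Alarm"):
    g.add((AST[cls], RDF.type, RDFS.Class))
g.add((AST.Alarm, RDFS.subClassOf, AST.Asset_Event))

# One asset instance and one operation-data reading (assumed property names).
g.add((AST.machine_01, RDF.type, AST.Asset))
g.add((AST.reading_42, RDF.type, AST.Asset_Operation_Data))
g.add((AST.reading_42, AST.measuredOn, AST.machine_01))
g.add((AST.reading_42, AST.temperature, Literal(93.5, datatype=XSD.double)))

# A simple rule: every reading above an assumed threshold raises an Alarm event.
query = """
PREFIX ast: <http://example.org/asset#>
SELECT ?reading ?temp WHERE {
    ?reading a ast:Asset_Operation_Data ;
             ast:temperature ?temp .
    FILTER (?temp > 90.0)
}"""
for row in g.query(query):
    alarm = AST["alarm_for_" + str(row.reading).rsplit("#", 1)[-1]]
    g.add((alarm, RDF.type, AST.Alarm))
    g.add((alarm, AST.raisedBy, row.reading))

print(g.serialize(format="turtle"))
```

In a fuller implementation the same check would be delegated to a description-logic reasoner working against the asset specification, as discussed below, rather than hard-coded in a query.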
The dotted line in the middle of the model separates the static asset data, e.g. asset function and specifications, from the dynamic asset data, e.g. operation data and maintenance schedule. The model consists of the main upper asset ontology and the related lower asset event domain ontology. In order to provide a better understanding, the top-level concepts of the proposed model are firstly defined below:
• Asset: the engineering asset, as previously defined. It may be possible to break down the asset into its technical components, which may be considered as assets themselves.
• Asset_Specification: static data originating from documentation and containing all asset specification data. This is developed during the design and building phases of the asset lifecycle and depicts the target asset operation and maintenance data in order to guarantee performance and availability.
• Asset_Function: the main functionality and possibly also secondary functionalities performed by the asset.
• Actor: the person or group of persons in the plant who is responsible for operating and managing the asset and has a specific functional role.
• Asset_State: the current physical state of the asset, which can be either normal, degraded or in failure.
• Maintenance_Schedule: defines the sequence of asset maintenance activities, specifying the maintenance tasks and their frequency.
• Asset_Operation_Data: data stored during the operation of the asset, e.g. asset temperature. The instances over a period of time provide a historical view of the asset's operations. Depending on the values of the operation data, this can be separated into categories of operation status.
• Asset_Configuration_Data: record of the asset configuration status at any point of time. The instances can assist in tracking the current and historical changes of asset configurations.
• Asset_Maintenance_Data: data concerning the performed asset maintenance activities.
We consider that the events that take place during the asset's operation and maintenance phase can be modeled as a lower event domain ontology. Initially, we define an event as any transient occurrence of interest for the asset, distinguishing between internal events, i.e. changes of state caused by an internal asset transformation, and external events with direct effects on the asset. In this work the low-level events are considered, which declare every status update and are necessary for monitoring the state of the asset, e.g. a value update. The high-level events, which exist on a higher abstraction level and concern the long-term asset strategy, are not in scope. A special type of low-level event is the Alarm, which represents an abnormal asset state that requires the user's attention and has warning purposes.
Fig. 1. Engineering asset management semantic data model
The current approach recommends the use of the ontology reasoning capabilities on the proposed model. Reasoning can be applied in many different scenarios in the ontology, based on predefined rules, in order to provide the capability of answering queries, e.g. whether the asset fulfills the operational requirements, and thus generate new knowledge. In order to provide a better understanding, a typical scenario is described using the proposed semantic model. Firstly, we define the different Operation_Data_Types for the asset according to its operation values and corresponding to different operation phases. These phases can be either normal, e.g. 
operating or warming up, or belong to different types of abnormal operation mode. If a value from the operation data, e.g. the asset temperature, exceeds its predefined thresholds according to the Asset_Operation_Specification, the Asset_Operation_Data is classified to its respective operation mode and the relevant Asset_Event is raised accordingly from the lower event ontology. When the specific Asset_Event is raised, the historical Asset_Operation_Data, Asset_Configuration_Data and Asset_Maintenance_Data can be examined, as well as other Asset_Event instances, in order to evaluate whether there have been indications leading to this event, e.g. temperature consistently rising towards and eventually surpassing the threshold, or possibly skipping a maintenance action or adopting a wrong configuration. Based on predefined rules and using the reasoning capabilities, one possibility would be the classification of the asset in the Degraded_State and the subsequent adjustment of the adopted Maintenance_Schedule, e.g. repair or replace a defective asset component before production begins. Another possibility would be the need to modify the Configuration_Data, e.g. reduce the rpm, in order to keep the Asset_Operation_Data within its specifications. Overall, a set of rules and procedures can be defined to support the reasoning system in making knowledge which is implicit only to the experts explicit, by using the available events, operational data, configuration data and maintenance data in order to adjust to an improved maintenance strategy as well as select an optimal operation configuration, thus managing the evolution of the asset configuration. Overall, the proposed model can use the ontology knowledge to improve the asset performance, longevity and availability.
Conclusion and Future Work
This work proposed a semantic data model for an engineering asset, focusing on the operation and maintenance phase of its lifecycle, in order to model the domain, assist in the decision making process and improve the asset's performance and availability. In the next steps, we intend to implement the ontology model and evolve it in a description logic language to allow the ontology's reasoning, validate its consistency and demonstrate its benefits. We will also validate the model in a case study in order to evaluate its applicability and effectiveness on an asset's operation and maintenance phases. Furthermore, we intend to extend the model by including the asset high-level events in order to take into consideration the asset operation strategy, as well as the concept of asset service to cover the possible outsourcing of assets and the distinction between the different roles of asset owner and provider.
16,592
[ "999981", "999980", "991268" ]
[ "302851", "302851", "302851" ]
01470680
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470680/file/978-3-642-40361-3_82_Chapter.pdf
Peter Nielsen Giovanni Davoli email: [email protected] Izabela Nielsen email: [email protected] Niels Gorm Rytter
A Design of Experiments Approach to Investigating the Sensitivity of the Re-Order Point Method
Keywords: Inventory management, discrete event simulation, design of experiment, demand distributions
This paper investigates the re-order point inventory management model's sensitivity to demand distributions, demand dependencies and lead time distributions. The investigated performance measures are four different versions of service level. The conclusion is for all measures that the single most critical aspect adversely affecting service level performance is the presence of asymmetrically distributed demand.
Introduction
Inventory management is a practical challenge faced by many companies across industries. Inventory management can be reduced to the single objective of maintaining the cost optimal amount of inventory on hand. However, in practice this is typically reduced to achieving a certain service level, i.e. satisfying a certain percentage of demand directly from inventory. The correct service level will be situational; some schools, as e.g. advocated in [START_REF] Silver | Inventory Management and Production Planning and Scheduling[END_REF], hold that the service level is a direct trade-off between various cost factors (typically holding vs. cost of stock out). Others would argue that it is determined by market requirements. Regardless of how the target service level is determined, it is possible to calculate for a given situation the amount of safety stock needed to achieve the target. This paper presents the results from an investigation of the behavior of a simple reorder point inventory management method. The aim is to determine which factors, settings and combinations of factors influence the achieved service level. For this purpose a simulation study is carried out using XCV software. The study is structured around a design of experiments and investigates four different service level measures and their sensitivity to four different parameters: demand distribution, correlation of demand, mean lead time and standard deviation of lead time. The aim of the study is to 1) identify which factors and settings of these are critical regardless of service level measure 2) provide insight into the complexity of real life inventory management by identifying which factors interact 3) give guidelines for how to improve service level measures in practice. The remainder of the paper is structured as follows. First, a literature review of inventory management, demand distributions and simulation studies of inventory management is presented. This is followed by a design of experiments. Third, results from a full factorial experiment are presented and discussed before final conclusions and avenues of further research are presented.
2 Literature review
In general most methods used in practice for inventory management tend to be based on assumptions of normally independently distributed lead time demand (e.g. [START_REF] Silver | Inventory Management and Production Planning and Scheduling[END_REF], [START_REF] Vollmann | Manufacturing Planning and Control for Supply Chain Management[END_REF]). For many practical purposes this will also tend to be a reasonable assumption. 
However, [START_REF] Tadikamalla | A Comparison of Several Approximations to the Lead Time Demand Distribution[END_REF] concludes that high values of Coefficient of Variance associated with asymmetrical distributions of demand lead to poor performance of inventory management techniques and that the symmetry/asymmetry of demand is critical for e.g. inventory costs [START_REF] Zotteri | The impact of distributions of uncertain lumpy demand on inventories[END_REF]. [START_REF] Bobko | The Coefficient of Variation as a Factor in MRP Research[END_REF] find that the coefficient of variation is in fact a robust descriptor of the degree of lumpiness. [START_REF] Tadikamalla | A Comparison of Several Approximations to the Lead Time Demand Distribution[END_REF] likewise underlines that it is necessary to deviate from the assumptions of normally/symmetrically distributed (lead time) demand to achieve a satisfactory performance. [START_REF] Zotteri | The impact of distributions of uncertain lumpy demand on inventories[END_REF] finds that the shape of the demand distribution is critical when determining the performance of inventory management methods. However, other methods (e.g. aggregation in time [START_REF] Eriksen | Emperical order quantity distributions: At what level of aggregation do they respect standard assumptions?[END_REF]) tend to be able to compensate for the asymmetry experienced. This could also indicate that there is a critical level of asymmetry of demand distribution (especially when taking into account a given lead time) that when reached will lead to poor performance. Within inventory management individual customer orders are translated into a demand rate (with a given distribution) and used for calculating reorder points, lot-sizes, timing, safety stock etc. (Silver, 1981). This is often translated into a lead time demand distribution. The shorter the lead time period, the closer the lead time demand distribution is to the actual distribution of demand. It however seems necessary to investigate to what extent inventory management techniques are sensitive to these and other parameters. Inventory management can fundamentally be reduced to a combined decision of how much and when to order. In most instances this problem is reduced to taking into account (for a given cost structure: holding, ordering, stock out) four factors:
• Demand distribution (D d ) (sometimes modeled as lead time demand, in this paper modeled as the more complex daily demand)
• Correlated demand (C d )
• Mean lead time (µ LT )
• Standard deviation of lead time (σ LT )
To investigate these factors this paper utilizes the same methods as the ones presented in e.g. [START_REF] Bobko | The Coefficient of Variation as a Factor in MRP Research[END_REF] and [START_REF] Zotteri | The impact of distributions of uncertain lumpy demand on inventories[END_REF].
Design of Experiments
To investigate the behavior of the re-order point method a simulation experiment is completed. The aim of the experiment is to investigate the importance of the four factors on the service level performance of the ROP. The aim is not to develop a predictive response model. A full factorial experiment with three levels is used in this paper. Four factors and three levels give 3^4 = 81 combinations and thus 81 separate experiments were conducted. The three settings for the four factors are shown in Table 1. The Coefficient of Variance (CV) is used to scale the variation in lead times, defined as:
CV = σ LT / µ LT ⇒ σ LT = CV × µ LT (1)
where µ LT is the mean of the lead time and σ LT is the standard deviation, so for e.g. 
µ LT = 7 days and CV = 0.2 the standard deviation used is 0.2 × 7 days = 1.4 days. All demand distributions have a mean of 1.000 units/day for both independent and dependent distributions. In the case of this experiment all factors are treated as categorical factors. This means that a linear response is not expected. It also means that it will be possible to identify not just which factors are significant for any of the four performance measures (responses) but also at which levels they are critical. This will enable a discussion of which factors should be taken into account when using a re-order point inventory management method in practice. The four performance measures investigated in this paper are all service level measures. They are defined as follows:
SL1: calculated as the ratio between stock out in pieces and total demand.
SL2: calculated as the ratio between stock out in days (the sum of days when a stock out occurred) and total number of days.
SL3: calculated as the ratio between stock out in pieces and total demand during the period while at least one order is open.
SL4: calculated as 1 minus the ratio between the number of "stock out" and the total number of orders. A "stock out" is considered to have occurred when the total demand in a lead time exceeds the mean demand over the lead time.
The simulation model was developed in accordance with the standard EOQ model for a single item. A set of stochastic functions, developed in the SciLab environment, is used to generate the demand that activates the model. The simulations are conducted for a length of 1.000 days to guarantee a stable output and thus a correct estimation of the four service level measures. 
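To make the measures concrete, the following sketch simulates a single-item re-order point system with exponentially distributed daily demand and computes quantities corresponding to SL1 and SL2. All parameter values (re-order point, order quantity, starting inventory) are assumed for illustration; this is not the XCV/SciLab model used in the study, and SL3/SL4 would additionally require tracking the periods with open orders.

```python
import random

def simulate_rop(days=1000, mean_demand=1000.0, rop=9000.0, order_qty=10000.0,
                 mean_lt=7.0, cv_lt=0.2, seed=1):
    """Single-item re-order point simulation (illustrative parameter values only)."""
    rng = random.Random(seed)
    on_hand = rop + order_qty          # assumed starting inventory
    pipeline = []                      # outstanding orders as (arrival_day, qty)
    short_units = total_demand = 0.0
    stockout_days = 0
    for day in range(days):
        # receive any orders arriving today
        on_hand += sum(q for d, q in pipeline if d <= day)
        pipeline = [(d, q) for d, q in pipeline if d > day]
        # skewed daily demand, corresponding to the exponential setting of D_d
        demand = rng.expovariate(1.0 / mean_demand)
        total_demand += demand
        served = min(on_hand, demand)
        if served < demand:
            short_units += demand - served
            stockout_days += 1
        on_hand -= served
        # place a replenishment order when the inventory position hits the ROP
        position = on_hand + sum(q for _, q in pipeline)
        if position <= rop:
            lead_time = max(1.0, rng.gauss(mean_lt, cv_lt * mean_lt))  # sigma_LT = CV * mu_LT, eq. (1)
            pipeline.append((day + lead_time, order_qty))
    sl1 = 1.0 - short_units / total_demand   # unit-based measure, cf. SL1
    sl2 = 1.0 - stockout_days / days         # day-based measure, cf. SL2
    return sl1, sl2

print(simulate_rop())
```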
Analysis of Results
The design and analysis of experiments have been conducted using the open source software R (r-project.org, 2012) and the package AlgDesign. The ANOVA tables for all four fitted models are presented in Table 2. The general investigation of the fitted response models shows the following: QQ-plots indicate normally distributed residuals for all four models, indicating a reasonable fit of the models. Adjusted R 2 values are in the interval 0.83 to 0.90, likewise indicating a good fit of all the models. This strongly indicates that the models are valid descriptors of the behavior of the four performance measures investigated in this study.
Service level
Table 2 shows the ANOVA of the four service level models. The table includes coefficients and p-values. The values shown in Table 2 indicate firstly the coefficients and their corresponding p-values. The coefficients indicate the response difference from the baseline model where all factors assume the value -1. This means it is possible to see when there is a significant effect of a factor or combination of factors at given values. This makes it possible not only to identify which factors are significant but also at which settings they are critical. The values shown in Table 2 indicate that there are some generic responses to given settings of the factors; furthermore it is also apparent that all four response models have significant (on a 0.05 or better level) second order interactions. This illustrates the complexity faced when doing real life inventory management, as second order interactions must apparently be considered when designing and implementing inventory management. The R 2 for the four models are respectively for SL1-4: 0.97, 0.97, 0.98 and 0.97, and adjusted R 2 are: 0.87, 0.84, 0.9 and 0.83. The high values of both R 2 and adjusted R 2 indicate a very good model fit, which is of course to be expected when all the parameters are included. The results of the factorial analysis will be split into three parts. The first will focus on the factors and combinations of factors and settings that are critical for all four service level measures. The second will focus on the individual service levels and the factors that are significant only for this particular measure. The third will focus on response models only containing main effects. In general all four SL measures are only sensitive to one main effect and that is D d and only at the setting of 1. In all four cases the coefficient is negative for this setting of D d , and in all cases is the setting of 0 in no way significant. Both in terms of coefficient size and significance level, the demand distribution setting of 1 -indicating an asymmetrical distribution -is the largest for all four models. This strongly indicates that the asymmetrical demand distribution (supported by the fact that the other symmetrical distribution has no significantly different response than the baseline setting -1) is the single most significant factor in explaining lower service levels. It also strongly indicates that the re-order point method is highly sensitive to the shape of the demand distribution, especially when it is skewed. Previous work by e.g. [START_REF] Eriksen | Emperical order quantity distributions: At what level of aggregation do they respect standard assumptions?[END_REF] and [START_REF] Nielsen | Analyzing and Evaluating Product Demand Interdependencies[END_REF] indicates that demand rates are in fact neither normally nor symmetrically distributed in practice. This underlines that this topic deserves further investigations. Another generic significant effect is the second order interaction D d @1:C d @0:µ LT @1, indicating that demand is asymmetrically distributed, slightly correlated and lead time is long. In all four response models this combination of settings tends to increase service levels. The reason for the increase of service levels when three of the factors have these settings must be found in the central limit theorem. Under long lead times (µ LT @1) the asymmetrically distributed demand (D d @1) will tend towards a normal distribution with low variation. However, it is interesting to note that µ LT @1 is not significant as a main effect, most likely because this is covered in safety stocks and calculation of these levels. A conclusion must be that to avoid the detrimental effects of a skewed demand distribution (i.e. D d @1) on service levels practitioners must compensate by aggregating in time (see e.g. [START_REF] Fliedner | Hierarchical forecasting: issues and use guidelines[END_REF]). This finds support in [START_REF] Nielsen | Analyzing and Evaluating Product Demand Interdependencies[END_REF] and methods for calculating adequate aggregation horizons to achieve a satisfactory distribution of demand can be found in [START_REF] Eriksen | Emperical order quantity distributions: At what level of aggregation do they respect standard assumptions?[END_REF]. D d @1:C d @1:σ LT @0 is likewise a significant second order interaction with a negative contribution in all four models. This combination of factors is for dependent, asymmetrically distributed demand with low variation in lead time. The combination of the first two factors is not surprising as this would tend towards giving a very skewed demand distribution. 
However, it is difficult to establish why a slightly non-constant lead time should be significant. Another noteworthy interaction is D d @1:C d @1:σ LT @1, which has a significant negative impact on SL1-3. This second order interaction actually indicates the almost worst case scenario. Highly correlated, asymmetrically distributed demand with large variation in lead times will naturally tend to lead to a lower service level. It is in fact interesting that this combination is not also significant for the fourth SL measure. Several generic conclusions can be reached from this study. First, that asymmetrical demand distributions tend to lead to lower service levels. This is interesting since several studies of real life demand indicate that the demand faced by companies can in fact be asymmetrically distributed and also correlated in time. This also supports the findings of e.g. [START_REF] Zotteri | The impact of distributions of uncertain lumpy demand on inventories[END_REF]. Second, that to compensate for this one has to aggregate in time, which in the case of inventory management is typically achieved through longer lead times. This of course will tend to lead to higher holding costs, so here the trade-off between service and costs becomes critical in practice. This area deserves further study. A general problem with a full scale model with second order interactions is that there is a risk of over fitting. For this reason it could be prudent to investigate the reduced response models only including the main effects. An overview of these response models can be seen in Table 3 below. Interestingly enough, even when only main effects are considered the adjusted R 2 only drops to the range 0.60-0.78, which indicates a reasonable response model but with less precision. However, these models are interesting as they illustrate service level issues in a parameter by parameter manner. It is interesting to note that D d @1 is still the single most critical factor, and that this setting reduces service level measures in all response models. It is also interesting to note that for three out of the four measures C d @1 leads to a lower service level. The same goes for σ LT @1, although the contribution to the service levels is for all models (SL4 excepted) quite low. It is also interesting to note that only two out of the 4x4 factors set at 0 are in fact significant. Of these, only µ LT @0 for SL4 has any significant contribution to the service level measure. This underlines that a large deviation from the assumed behavior is necessary for it to significantly affect service level performance. The combined impression is that performance primarily depends on whether or not a skewed demand distribution is present, and to some extent whether demand is highly correlated. The conclusion is again that the demand behavior is critical for the service level. A highly correlated demand will also tend to lead to demand being asymmetrically distributed in the short term, again underlining that the asymmetry of demand distributions is in fact highly critical for service level performance, regardless of which SL measure is used. It is also interesting to note from Table 3 that all deviations from the standard factor setting of -1 tend to lead to a negative response on all four SL measures.
Conclusion
The aim of the study has been to investigate which parameters are critical for a given performance criterion of a re-order point inventory model. The conclusion is that there is a significant difference in the importance of the four investigated factors depending on the performance measure. From this study it can conclusively be stated that skewed demand distribution is the single most detrimental factor with regard to service level performance. It can also be concluded that there are several second order interactions that are highly significant, underlining the fact that inventory management is in fact a highly complex problem. Furthermore the findings of the presented investigations support the conclusions reached in e.g. 
[START_REF] Tadikamalla | A Comparison of Several Approximations to the Lead Time Demand Distribution[END_REF] and [START_REF] Zotteri | The impact of distributions of uncertain lumpy demand on inventories[END_REF], namely that the asymmetry of demand distributions is in fact highly significant for the performance of inventory management methods. Based on the study it is possible to give direct conclusive guidelines for where to focus when improving service levels in real-life applications of the ROP method. Especially it should be noted that the only feasible manner to compensate for the negative effects of asymmetrically distributed demand is to aggregate in time, through a longer safety period. The conclusion must necessarily be that the complexity of inventory management is high, since the performance of a given ROP managed system cannot be predicted solely based on the main effects affecting it. This underlines that research into the behavior of inventory management systems and their performance is in fact a topic of relevance for both researchers and practitioners.
Table 1. Overview of experimental settings of the four investigated factors.
D d : Normal distribution, mean = 1.000 units/day, standard deviation = 300 units/day; Uniform distribution, minimum = 500 units/day, maximum = 1,500 units/day; Exponential distribution, mean = 1.000 units/day.
C d : Non correlated observations; Slightly correlated observations.
Table 2. p-values from ANOVA for SL1, SL2, SL3 and SL4 and coefficients of the fitted models. p-values emphasized with bold indicate variables significant on a better than 0.05 level. To improve readability only rows containing significant variables have been included.
                          SL1                 SL2                 SL3                 SL4
                   Estimate Pr(>|t|)   Estimate Pr(>|t|)   Estimate Pr(>|t|)   Estimate Pr(>|t|)
(Intercept)          0.999    0.000      0.998    0.000      0.999    0.000      0.983    0.000
D d 1               -0.012    0.011     -0.012    0.003     -0.017    0.002     -0.047    0.003
D d 1:C d 1         -0.009    0.099      0.004    0.398     -0.016    0.017      0.011    0.538
D d 1:µ LT 1        -0.012    0.036     -0.007    0.097     -0.008    0.203     -0.025    0.167
D d 1:C d 1:µ LT 0   0.012    0.042      0.002    0.610      0.019    0.008     -0.013    0.487
D d 1:C d 0:µ LT 1   0.016    0.012      0.014    0.008      0.021    0.005      0.063    0.004
D d 1:C d 1:µ LT 1   0.012    0.042      0.003    0.513      0.020    0.007      0.018    0.351
D d 1:C d 1:σ LT 0  -0.015    0.016     -0.012    0.014     -0.019    0.009     -0.047    0.023
D d 1:C d 0:σ LT 1  -0.011    0.066     -0.009    0.062     -0.014    0.044     -0.021    0.287
D d 1:C d 1:σ LT 1  -0.012    0.042     -0.011    0.022     -0.015    0.033     -0.030    0.125
20,668
[ "991697", "1002000", "1002002" ]
[ "300821", "203954", "300821", "300821" ]
01470681
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470681/file/978-3-642-40361-3_83_Chapter.pdf
Albert Gardoń email: [email protected] Peter Nielsen Niels Gorm Malý Rytter
Challenges of Measuring Revenue, Margin and Yield Optimization in Container Shipping
Keywords: yield (revenue) management, liner shipping, transportation logistics
We present in this paper some initial ideas of Revenue and Yield Management in the container shipping industry, namely a regression study of the behavior of the currently used indicator for measuring pricing and revenue performance in a leading shipping line. We consider the properties of the indicator used and discuss options for developing better indicators of revenue or yield optimization, being either revenue or yield per available unit. At the end we also formulate implications for future research work to be done on the development of relevant measures for the industry.
Introduction
The aim of this paper is to provide a discussion of what would constitute a reasonable measure for revenue or yield optimization in the container shipping industry. It reveals some initial ideas about Revenue (RM) or Yield Management (YM) applied in this business. The discipline of RM or YM has had widespread attention in a number of consumer oriented businesses (see [START_REF] Kimes | Yield Management: A Tool for Capacity-Constrained Service Firms[END_REF][START_REF] Levinson | The Box -How the Shipping Container Made the World Smaller and the World Economy Bigger[END_REF][START_REF] Mcgill | Revenue Management: Research Overview and Prospects[END_REF] and [START_REF] Zurheide | A simulation study for evaluating a slot allocation model for a liner shipping company[END_REF]) such as passenger airlines (see [START_REF] Chiang | An Overview of Research on Revenue Management: Current Issues and Future Research[END_REF]), hotels and car rental companies, and less in cargo transportation businesses such as air cargo (see [START_REF]Revenue Management: A practical Pricing Perspective[END_REF]), rail cargo, container shipping or road transport companies. Cargo businesses meet characteristics of importance for practicing RM or YM, such as fixed capacity, high fixed and low variable costs, time-variable or stochastic demand, segmentable markets and clients, perishable inventory or capacity and services sold in advance (see [START_REF] Belobaba | The Global Airline Industry[END_REF]). However, when addressing RM or YM in the container shipping industry, one must first realize that the industry might have similarities, but also differs from e.g. passenger and cargo airlines and their business conditions, where most research and practical experience has been accumulated over many years, which makes it necessary to tailor solutions and measures to the particular industry in focus. This paper addresses a particular case company in container shipping and investigates options for developing measures tailored to its business needs. The remaining part of the paper is structured as follows. First, we investigate with use of regression analysis the behavior of the currently used measure (indicator) for revenue optimization. Next, we discuss potential alternatives for developing one more indicator better suited to its business needs. Finally, a conclusion is made on the further effort required to succeed with a better profitability indicator for the container shipping industry.
Revenue per transported unit and related behavior
The first problem faced is the manner of the business condition reporting. 
In the container shipping industry the dominating revenue or yield optimization measure is the average price (called the net freight or the revenue) per unit sold (e.g. FFE -Forty Foot Equivalent, i.e. the volume of a 40 feet long container, or TEU -Twenty-Foot Equivalent Unit, i.e. the volume of a 20 feet long container1 ) reported every week. This is by industry standards considered to be a solid indicator of the company or market condition (see [START_REF] Stopford | Maritime Economics[END_REF]). To investigate the behavior of this indicator and identify the causes of its weekly variation we conducted a study at one of the major container shipping lines. The study depended only on information about cargo type2 , transportation direction3 , client type4 and operational region. We construct a linear regression model describing the behavior of the average net freight per FFE (Y ) as the dependent variable, namely:
Y (X 1 , . . . , X k ) = α 0 + Σ_{i=1}^{k} α i X i ,
where Y is the approximation of the average net freight per FFE (the dependent variable), (X i ) k i=1 are independent, explanatory variables and (α i ) k i=0 are regression coefficients. Using only three independent variables, namely the ratio of transported reefer and dry cargo (in FFE), the ratio of transported cargo in the headhaul and backhaul direction (in FFE) and the ratio of transported cargo in the most profitable (over the company average) and the least profitable (under the company average) regions (in FFE), we have explained around 85% of the weekly indicator volatility. From the study it is clear that the most significant factor is the direction, because the trade imbalances, i.e. a significantly lower demand for cargo transportation in one of the directions, result in a substantial number of empty containers transported in the backhaul direction. Less important has been the average region profitability, probably because of the strong internal price variation. Interestingly, replacing this variable by the amount of transported FFE's from only one properly chosen region gives a similar goodness of fit. This choice has been based on the observation that the amount of cargo shipped in this region has been almost uncorrelated with the two other significant independent variables. However, both the more/less profitable regions ratio and the cargo percentage from the chosen region have little business meaning, therefore we have omitted these. Nevertheless, the remaining two independent variables:
Y (X 1 , X 2 ) = α 0 + α 1 X 1 + α 2 X 2 ,
i.e. the direction ratio (X 1 ) and the cargo type ratio (X 2 ), still give a satisfying goodness-of-fit which exceeds 70%, as shown in the table from Fig. 1. 
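For illustration, a regression of the same form can be fitted on synthetic weekly data as sketched below; the variable ranges, coefficients and noise level are invented and do not reflect the confidential company figures used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 150
x1 = rng.uniform(1.0, 2.0, weeks)      # headhaul/backhaul ratio (X1), synthetic
x2 = rng.uniform(0.05, 0.30, weeks)    # reefer/dry ratio (X2), synthetic
y = 900.0 + 380.0 * x1 + 1500.0 * x2 + rng.normal(0.0, 60.0, weeks)  # avg net freight per FFE

X = np.column_stack([np.ones(weeks), x1, x2])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate of alpha_0..alpha_2
y_hat = X @ coef
ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (weeks - 1) / (weeks - X.shape[1])
print("alpha:", coef, "R2:", round(r2, 3), "adj. R2:", round(r2_adj, 3))
```

With real data the same fit would normally be produced by a statistics package that also reports p-values and confidence intervals for each coefficient.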
Unfortunately, further investigations have shown that the model fitted above for the company overall cannot be generalized for chosen parts of the business. The trouble is not only an unstable goodness-of-fit; in fact, similar analyses for chosen regions have given different adjusted squared Pearson correlation coefficients. In the worst case this drops even below 25%, making the model completely useless in that instance. However, in some cases the fit is improved to around 70% by the addition of other independent variables, e.g. the percentage of FFE transported from key clients. What is more, it is observed that sometimes variables which are very important for one part of the business are at the same time completely insignificant for another part. This is a serious issue: since the model is not universal, one needs to make separate studies for every instance. Although this is concluded for only one shipping line, we assume this to be a global phenomenon similar for all companies operating in this industry. Further studies of this will focus on other aspects of the business, especially lower levels of aggregation, but using richer data sets with many additional variables, as well as market data such as the Shanghai Containerized Freight Index (SCFI), whose development is shown in Fig. 2. We must also add that this model is useful only in an explanatory sense, not as a predictive tool. First of all because the descriptive variables depend on the market demand and are therefore not directly controllable by the shipping company. Besides, the values of these variables are observed only in quite short intervals, i.e. their standard deviations have been quite small, which implies the fit of the linear model could be doubtful for arguments which differ significantly from the mean values. The behavior of the indicator outside these intervals remains an open question.
Weaknesses of the existing revenue measure
Having studied the currently used indicator we move forward to discuss what would be a better indicator for revenue or yield optimization in this business. What is more, and maybe most important, the indicator mentioned in the previous section is a critical parameter that cannot be omitted, since it indicates market conditions, but in itself it is not an adequate measure of how well the business is going. A company should not be satisfied (or dissatisfied) only because of increasing (or decreasing) prices. At least such factors as the costs, the utilization (or capacity availability), i.e. the demand related to a changed service price, and the seasonality, i.e. comparison to results from a similar part of previous years, should somehow be incorporated. This is what is done in e.g. the passenger airline industry (see [START_REF] Belobaba | The Global Airline Industry[END_REF]) and other industries further advanced in YM. Also, better measures are a prerequisite for more advanced studies and prediction modelling (see [START_REF] Talluri | The Theory and Practice of Revenue Menagement[END_REF]). Costs could in fact be neglected if they were constant over time (something that may be the case for e.g. the hotel industry), but in fact they are not. However, the freight rate is compensated for some of these changing costs in the form of surcharges, e.g. the bunker surcharge (BAF), which also varies over time (see Fig. 3), so an increase of prices could be caused only by a cost increase, which would not improve the business results. For this reason the indicator should ideally be based on a variable which includes costs of operations, e.g. a type of yield 5 . Similarly, a reduction of prices does not necessarily result in worse company conditions; if at the same time the volume increases, one could observe a higher total revenue, which means a better business situation due to economies of scale. An increase of the transport capacity should reduce unit costs at the same utilization, which means a better yield. The conclusion is that, with volumes going up, a company can in fact have a higher revenue and, due to economies of scale (with higher capacity vessels), a better total yield. However, the net freight per FFE would not show this. 
What is more, although the unit price is one of the most significant variables affecting the total yield, a strong positive linear correlation between it and the yield is doubtful, because higher prices usually imply a higher yield only as long as the price increases are accepted by customers and do not result in lower volumes. For this reason the indicator should not be calculated just per units sold but per all capacity units available. Finally, in order to find out if a company has an increasing or a decreasing trend of earnings, one needs to compare the results to those from an analogous time in previous years. This is especially critical for the liner shipping industry, as it is highly seasonal. If the yield behaves similarly as in the same season in previous years, its increase (or decrease) in comparison to the preceding observation time does not necessarily mean it is better (or worse) at the given time. Therefore it is necessary to compensate the company condition indicator for the seasonality. There are such times in the year when the demand decreases to a half or doubles, e.g. Chinese New Year, which lasts about two weeks and occurs from the middle of January to the middle of February, depending on the year. The problem becomes one of obtaining enough historical data to model the seasonality accurately. This is an industry issue, because the industry is continuously changing the network structure, routings, capacities etc., making historical analysis and comparisons difficult at best. This problem is compounded in time, so that the further back in historical observation one would like to go, the worse the problem is. This motivates the usage of relatively new data, at most a few years old. Nevertheless, if such an approach is adopted the situation will improve over time because of a constant collection of the new data pouring in [START_REF] Kimes | Yield Management: A Tool for Capacity-Constrained Service Firms [END_REF].
The proposed new yield optimization measure
When implementing RM or YM in an industry or a company, a choice of a main measure for revenue or yield optimization must be made. In consumer oriented businesses there is a tendency to emphasize revenue, whereas in cargo oriented businesses there is a focus on optimizing yield instead (see [START_REF]Revenue Management: A practical Pricing Perspective[END_REF]), as different products and services also incur different variable costs depending e.g. on the routing of cargo. Additionally, in container shipping we also need to cater for the flow of empty containers (flow adjustment) due to cargo imbalances, which is also important in such a company. To sum up, we have proposed a new, improved business indicator which takes into account all the factors mentioned in the previous section and should describe the company condition more precisely:
J i (t) = V i (t) / C i (t) [USD/FFE],
where t is the departure time from a crucial port in the service (the so-called bottleneck port), V i (t) is the total yield [USD] from the i-th vessel at t and C i (t) is the capacity available [FFE] on the i-th vessel at t. Incorporating the seasonality into the model we improve the indicator J to J * :
J * i (t) = S(t) · J i (t) [USD/FFE],
where S(t) [no unit] is the seasonality factor at time t. 
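A small numerical illustration of the two indicators is given below; all figures (yield, available capacity and seasonality factor) are invented for the example.

```python
def j_indicator(total_yield_usd, capacity_ffe):
    """J_i(t): total yield per available capacity unit on a departure [USD/FFE]."""
    return total_yield_usd / capacity_ffe

def j_star(total_yield_usd, capacity_ffe, seasonality_factor):
    """J*_i(t) = S(t) * J_i(t): the same indicator compensated for seasonality."""
    return seasonality_factor * j_indicator(total_yield_usd, capacity_ffe)

# e.g. a departure yielding 2.1 MUSD on a vessel with 4,500 FFE available,
# in a week where the assumed seasonality factor S(t) is 1.08
print(j_indicator(2_100_000, 4_500))   # about 466.7 USD/FFE
print(j_star(2_100_000, 4_500, 1.08))  # about 504.0 USD/FFE
```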
Since the capacity on a vessel is usually shared over more than one string6 , as a first step we will focus on services. This is not a perfect approach, because business-wise the thinking is in terms of strings, but a service is a physical part of the business with a precisely defined number of vessels (undertaking roundtrips), capacity and departure intensity. On the other hand the yield is calculated for each booking separately and the booking is linked to a string rather than to a service. The perfect solution would be a calculation of the total yield for each sea trip (port to port), taking into account the inland delivery to the first loading port, loadings and discharges and the inland delivery to the receiver. Knowing this would enable dividing the yield from each cargo not only between services, but also between strings. But it seems to be unrealistic, since traceability of costs is typically not possible for all parts of the transportation process. Therefore the idea is to begin with the simplest cases and then systematically consider the problem in a more complex way, fitting the model as well as possible to the business reality and being able to calculate the indicator for more and more parts of the business. As the first step we want to consider the simplest case, that is, to select such services which are adapted to single strings in the sense that all capacity in a bottleneck port (separately in each direction) belongs to only one string. The next step will be to find such services which are adapted to more than one string, but where all those strings have only one bottleneck port in this chosen service. In this instance we will need weights for the division of the capacity C i over strings. This is a difficult task because the capacity on the vessel is not physically assigned to each string. What is more, it varies in time and is sometimes even exchanged by string managers. But it seems to be a quite objective approach to use a discrete distribution of the slot division over strings, estimated on the basis of the historical data, for ordering the capacity available into the strings. The next step will be the calculation of the indicator for other strings which use more than one service (have more than one bottleneck port). Now the yield V i needs to be divided additionally between different services (different bottleneck ports). Again we need a weight for this operation. This could be the percentage of the transportation time in days of the cargo within each service. It seems to be a reasonable choice because costs depend mainly on the transportation time (fuel, ports) and, besides, time spans are easy to identify since all departure and arrival dates are known. The last step will be the choice of the time window (day, week) and the decision concerning the level of aggregation, that means the decision which strings or services should be considered jointly, e.g. a geographical accumulation.
Conclusions
Using this newly proposed indicator we would like to investigate the impact of some central steering tools on the business. The two most popular are general rate increases (GRI) and capacity changes (allocation of vessels or addition of new vessels). Until now, these effects have in many instances been invisible when observing the mean net freight per FFE (see Fig. 3). This has typically been interpreted by the industry as a lack of effect of GRIs. However, this may be a faulty assumption, since the current measure is not necessarily sensitive to GRIs like a utilization based measure should be. 
Regarding the GRIs, the possible cause could be the fact that they are announced centrally but implemented locally, and local managers very often maintain new, higher price offers only for a quite short time, if at all, which implies the average net freight per transported FFE remains almost unchanged. However, it is a well known fact that GRIs cause an increased demand in the time span (several weeks) between the announcement and the implementation, which improves the company condition and should be shown by the measure. On the other hand, the capacity utilization increase driven by the higher demand should lead to a profit increase, even though at the same time a special lower price would be offered to a certain group of customers. In such a case a decrease in the mean net freight per transported FFE could be observed while the company condition would actually be improved. And finally, we want to construct a stochastic model, in the sense of a time series or even a time continuous stochastic process, for our new indicator for predictive purposes.
Fig. 1. The result of a linear regression model fitting (2 variables).
Fig. 2. Shanghai Containerized Freight Index (source: [11]).
Fig. 3. Far East-North Europe spot rates (with and without bunker surcharge) vs rate increase announcements: 2009-2012 (source: [1]).
1. Obviously 1 FFE = 2 TEU.
2. There are two basic cargo types: reefer and dry. Reefer cargo requires containers keeping special atmosphere conditions, such as low temperature, proper humidity and air circulation, which need to be plugged in. Dry cargo is shipped in ordinary containers without any additional requirements.
3. The direction in which the greater amount of containers is shipped is called the headhaul, whereas the opposite direction is called the backhaul.
4. Client types differ between companies, but usually there is a group of the most important contractors which we will call the key clients.
5. The revenue after the subtraction of costs is called the margin, whereas the yield is the margin after the addition of the so-called flow adjustment linked to the evacuation of empty containers.
6. A string is a virtual part of the network using a given amount of vessel capacity from one or more services.
Acknowledgement
This research was funded by the Danish Maritime Fund, through grant no. 2011-58.
19,720
[ "1002003", "991697", "1002002" ]
[ "485907", "300821", "300821" ]
01470682
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470682/file/978-3-642-40361-3_84_Chapter.pdf
Peter Bjerg Olesen Iskra Dukovska-Popovska Hans-Henrik Hvolby
Improving Port Terminal Operations Through Information Sharing
Keywords: Terminal operations, information sharing, waste, throughput time, case study
In modern industry there are well defined methods for planning and optimising the efficiency of production. However, when looking at supply chain operations there are often problems with lack of communication and planning between nodes. By not communicating, the risk of creating non-value adding work also increases, as the organisations become less synchronised. Therefore the focus in this paper is on how information can improve the performance of a container terminal. It was found that information relating to the containers is currently not widely used. Further, it was found that improvements can be achieved in terms of reducing non-value adding activities by utilising information sharing.
Introduction
Businesses are becoming increasingly globalized, where each activity in the supply chain is located where the greatest value is added to the final product, which leads to increased transportation and more complex logistics. Since 1990 the global freight volume has almost doubled ( [START_REF]Secretariat: Review of maritime transport 2010[END_REF], [START_REF] Notteboom | Containerisation, Box Logistics and Global Supply Chains: The Integration of Ports and Liner Shipping Networks[END_REF]). There is now a growing trend towards greening of the supply chains, in order to lower the carbon footprint, which increases the use of ships as a transport mode, compared to trucks. Typically, supply chain management focuses on customers and suppliers and not so much on how the goods are transported, and what the logistic performance is between the supply chain nodes, such as trucks, ships, trains, airplanes, shipping ports, truck terminals, airports etc. Due to the increased complexity, there is a need to focus on improving the logistic performance of the supply chain, and thereby reduce the time and cost transporting can impose. Ports are an integral part of many supply chains, and are increasingly changing their role from being gateways to becoming active players in the supply chain. Containers are a standardised method for handling cargo, and present possibilities for standardising and optimising the processes within a terminal, reducing lead-time and work load. This has enabled a faster transport lead-time, which again has increased the demand for more containers and has put pressure on the supply chains and terminals, leading to an even stronger need to improve operations [START_REF] Notteboom | Containerisation, Box Logistics and Global Supply Chains: The Integration of Ports and Liner Shipping Networks[END_REF]. This change in supply chains has led to more complex transport routes, where containers often pass several transport hubs, typically going from Asia to e.g. Rotterdam and then being distributed further into Europe by either ship or truck. Each time a container goes through a transport hub it is subjected to a series of operations, moved from one transport type to another or to storage, e.g. waiting for a specific transport vessel with a specific destination. So in order to increase the competitiveness of the supply chain going through a specific transport hub, it is necessary to optimise the operations, reducing the lead time inside the hub as much as possible. 
Wang et al [START_REF] Wang | The Efficiency of European Container Terminals and Implications for Supply Chain Management[END_REF] have made an extensive analysis of port efficiency, and conclude that it is far from optimal, implying that the potential for optimisation is large. This paper will focus on improving the terminal operations by reducing waste and lead-time in a container terminal through information sharing and coordination. In doing this, the paper has introduced the general issues regarding ports' roles in the global supply chain. Following is a description of how a terminal operates, to provide a context. A case study is then introduced to support the arguments that information sharing is not widely applied and that it can have large benefits.
Optimising terminal operations
The development of ports has gone from ports just being gateways to becoming active players in the supply chain. Beresford et al [START_REF] Beresford | The UNCTAD and WORKPORT models of port development: evolution or revolution?[END_REF] review port development and conclude that ports need to be proactive to challenges rather than reactive, which in terms of terminal production can mean faster and better customer service, by providing customers with precise information relevant to them. Ports are generally subject to incremental development going towards more integrated systems, supported by ICT. A central part of the operations taking place in a port is the movement of goods from one transport mode to another; this is called terminal operations. A generic operation setup of a container terminal can be seen in Fig. 1, where different operation areas are presented. Stahlbock et al [START_REF] Stahlbock | Operations research at container terminals: a literature update[END_REF] have made a literature review that to a wide extent covers how the activities in a container terminal operate, and conclude that there is a need for more focus on reduction of waste and lead-time. There are some different approaches in the literature towards optimising ports' terminal operations and ports in general. Some focus on strategic development and frameworks, where the key is coordination and integration between the different actors involved with the port operations as well as other ports and transport companies ( [START_REF] Brooks | Coordination and Cooperation in Strategic Port Management: The Case of Atlantic Canada's Ports[END_REF], [START_REF] Bichou | A logistics and supply chain management approach to port performance measurement[END_REF]). These models imply that by improving the external link and cooperation, it is possible to improve the internal performance, as the tasks are done more synchronously in the supply chain. This can be difficult in practice as small ports often do not have much leverage towards the supply chains. The smaller ports can however look at their internal operations to find improvement opportunities. Paixao et al [START_REF] Paixão | Fourth generation portsa question of agility?[END_REF] build a port development framework based on the United Nations Conference on Trade and Development's (UNCTAD's) "Port generations model". More details on UNCTAD can be found in Beresford et al [START_REF] Beresford | The UNCTAD and WORKPORT models of port development: evolution or revolution?[END_REF]. 
Paixao et al [START_REF] Paixão | Fourth generation portsa question of agility?[END_REF] suggest improving the operations by transforming them through a series of steps, starting with business process reengineering, going through just-in-time and lean projects, and ending with an agile and responsive system. Petit et al [START_REF] Pettit | Port development: from gateways to logistics hubs[END_REF] also suggest the use of lean and agile principles, but segment supply and demand characteristics and attribute them to different pipeline strategies, e.g. quick response and continuous replenishment. These frameworks present an opportunity to use some of the Lean terminology and tools, such as waste (muda) and kanban. Stahlbock et al [START_REF] Stahlbock | Operations research at container terminals: a literature update[END_REF] suggest terminals need a more focused approach for using ICT to plan the operations, and discuss ICT for supporting planning as seen in traditional "nuts and bolts" production systems. Stahlbock et al [START_REF] Stahlbock | Vehicle Routing Problems and Container Terminal Operations -An Update of Research[END_REF] further discuss some of the special characteristics in regard to planning and scheduling operations in terminals, such as the uncertainty of incoming containers. This lack of information is also mentioned by Kia et al [START_REF] Kia | The importance of information technology in port terminal operations[END_REF] and Zhou et al [START_REF] Zhou | Supply chain practice and information sharing[END_REF]. Zhou et al [START_REF] Zhou | Supply chain practice and information sharing[END_REF] describe how a lack of information in a dynamic supply chain reduces efficiency at each level. The literature, however, lacks an empirical approach showing how to minimise waste and lead time in container terminals by using information. This paper will therefore, through a case, focus on how information sharing can reduce the throughput time of containers at a terminal. Case study The case study is based on a small/medium sized port in Denmark. The port has a wide range of activities, with bulk (oil, coal, cereals) and containers as the primary ship arrivals. The port itself also acts as landlord and infrastructure planner for companies located in the port area. All transport is handled by operators. The focus in this case will be on the container terminal operator. The container terminal is divided into three major areas:  A dedicated area for one container route, segmented between normal and cooling containers.  A general container storage area for both storage and active storage for other container ship routes.  A third area divided between specialty cargo, which cannot fit inside a normal container, and empty container storage. According to Womack et al [START_REF] Womack | Lean thinking: banish waste and create wealth in your corporation[END_REF], value-adding activities are activities that the customer wants (to pay for), e.g. sending a truck to the terminal, loading, unloading or forwarding goods. Waiting time and unnecessary storage are typically activities which a traditional production company tries to avoid; this line of thinking is applied to the container terminal to make use of the terminology. Storage can in some cases be value adding, e.g. if a customer wants warehousing services, but this is not part of this paper's scope. 
Physical flow To identify the tasks which are not value adding in a container terminal, it is important to define the concepts in this context.  Value-adding activities ─ The activities the customer wants.  Non-value-adding activities ─ The activities that happen because of lack of planning etc.  Necessary activities ─ Transport and other activities which are impossible to remove but provide no value. In this terminal the operations look as follows: A truck arrives and waits for service at the gate, is allowed to move into the terminal, and waits for a stacker to take the container. The stacker then moves the container to a storage area. From the storage area the container is picked and put onto a terminal tractor, which in turn drives under the crane that loads it onto the ship. When unloading the ship, the process is almost exactly the reverse. A process flow diagram of the container at the terminal is created in Fig 2, and the activities are analysed from a value-adding perspective. Fig 2 shows that waiting time and unnecessary storage of containers are the main culprits in increasing the lead time and reducing resource utilisation. Also the "rework" that occurs when containers are placed so that they have to be moved in order to reach other containers has a significant impact on the resource utilisation. This rework should as far as possible be avoided by ensuring that knowledge about shipment or pickup of the container is part of the planning. This would allow the operator to stack containers according to when each container is needed. It is not only the handling of the containers around the storage area that can be a problem. The execution of the processes also poses some pitfalls. The case terminal lists two main issues: making sure the nearest reach stacker picks up a container, and making sure the stackers move optimally in relation to container placement. However, it has been identified that these two issues are not the main problem at this stage. The question of how to call the nearest stacker is of course very relevant, but in order to do that, other things need to be in place in regard to coordination. With a plan for loading or unloading the ship it would be possible to assign stackers to each lift. Further, the problem with storage of containers, according to the case terminal, is that they do not have any information on when a container arrives or when it is picked up, so they do not do anything to optimise this, which again reflects a clear lack of information and planning. As a lack of information is the main reason for not making a good plan (and thereby reducing waiting time, storage, rework etc.), we will look more into the information flow in the terminal. Information flow The information flow is limited today. The terminal operator only communicates on a strategic and operational level with its partners, leaving the tactical information unavailable and unused. Because of this it is not possible to coordinate the operational information to create more optimal sequences of activities. The three planning horizons (strategic, tactical and operational) are based on the MPC model from Vollmann et al [START_REF] Vollmann | MANUFACTURING PLANNING AND CONTROL SYSTEMS FOR SUPPLY CHAIN MANAGEMENT: The Definitive Guide for Professionals[END_REF]. Instead, the information flow is directly related to the physical flow, except that execution of orders sometimes is initiated by the gate function. 
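Before examining how the missing information could be provided, the value classification above can be made operational with a small sketch that sums how much of a single container's dwell time falls into each category. All activity names, categories and durations are invented for the illustration and are not measurements from the case terminal.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    category: str      # "value adding", "non-value adding" or "necessary"
    minutes: float

# One container's assumed flow from gate arrival to ship loading.
flow = [
    Activity("waiting at the gate", "non-value adding", 15),
    Activity("gate registration", "necessary", 5),
    Activity("waiting for a reach stacker", "non-value adding", 20),
    Activity("move to the storage area", "necessary", 10),
    Activity("storage until the ship arrives", "non-value adding", 2 * 24 * 60),
    Activity("re-handling to reach other containers", "non-value adding", 12),
    Activity("move to the terminal tractor", "necessary", 8),
    Activity("crane lift onto the ship", "value adding", 4),
]

total = sum(a.minutes for a in flow)
shares = {}
for a in flow:
    shares[a.category] = shares.get(a.category, 0.0) + a.minutes

# Report how the dwell time splits across the three categories.
for category, minutes in shares.items():
    print(f"{category:>17}: {minutes:6.0f} min ({100 * minutes / total:4.1f} % of dwell time)")

Even with invented numbers, a summary of this kind makes it visible that storage and waiting, not the lifts themselves, dominate the container's time in the terminal.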
Containers are not known by the terminal prior to arrival, and all activities are dependent on the freight letter that follows the container. Neither destination nor time information is available before arrival. This adds further to the possibility of lead-time delays, as there is very little coordination. Furthermore, the arrival/departure time and destination of the container remain unknown until the ship arrives and the manifest calls the container. Another problem of not having a plan or schedule for the arrival of trucks from the hinterland is that these trucks risk waiting and that the resource utilisation of the terminal can be low. This can be amplified as most trucks tend to arrive around the latest deadline, creating queues and heavy peak hours. There is access to information about containers arriving via ship, but this information is not used, except for the ship's arrival date and the number of containers. Further, there is no information on containers going to or from the hinterland, except that the terminal knows that the day before ship departure the terminal is busy serving trucks. By establishing a coordinating gate function it would be possible to put the available information to use, as the gate could allocate slot times for trucks picking up or delivering containers depending on the availability of a container or the capacity of the terminal. This would also allow for planning of container placement in the storage area, reducing the amount of rework done to move containers to get to a specific one. This would be a huge improvement over the current setup. 
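A minimal sketch of how the proposed gate function could use this information is given below: trucks are booked into time slots against an assumed handling capacity, and announced containers are grouped into stacks by departure day so that containers leaving first are not buried under later ones. The slot capacity, stack height and all container and truck identifiers are assumptions for illustration, not a description of the case terminal's systems.

from collections import defaultdict
from datetime import date, time

SLOT_CAPACITY = 4        # trucks the terminal can serve per time slot (assumed)
STACK_HEIGHT = 4         # maximum containers per stack position (assumed)

slot_bookings = defaultdict(list)    # (day, slot start) -> booked trucks
stacks = defaultdict(list)           # departure day -> containers stored together

def book_slot(truck_id, day, preferred_slots):
    """Give the truck the first preferred slot that still has capacity."""
    for slot in preferred_slots:
        if len(slot_bookings[(day, slot)]) < SLOT_CAPACITY:
            slot_bookings[(day, slot)].append(truck_id)
            return slot
    return None                      # no capacity left; the truck must re-plan

def place_container(container_id, departure_day):
    """Stack containers with the same departure day together to avoid re-handling."""
    stacks[departure_day].append(container_id)
    tier = (len(stacks[departure_day]) - 1) % STACK_HEIGHT
    return departure_day, tier

# Example with invented data.
print("assigned slot:", book_slot("truck-17", date(2012, 5, 3), [time(8, 0), time(9, 0)]))
print("storage position:", place_container("container-0451", departure_day=date(2012, 5, 4)))

The point of the sketch is only that both decisions, when a truck is served and where a container is placed, become plannable as soon as arrival and departure information is registered before the container reaches the gate.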
With time slots, this would definitely become more level. Practical implications. This paper's practical implications include showing how available information can be used in terminal operations in order to reduce waste and improve the throughput of containers. Research limitations/implications. The study has limitations and leaves questions unanswered. The findings are based on one port, and not all of the findings can be generalized. The research implication is the offering of an empirical case on improving efficiency through utilization of information in small ports. Future research would include testing the hypothesis of this paper and implementing the three suggestions, as well as a deeper look into the operations at the terminal to see if visual management would contribute to improving the control and execution of tasks. Fig. 1. Illustration of the operations areas in a generic container terminal [6]. Fig. 2. A simplified container flow in a small container terminal.
18,209
[ "1001922" ]
[ "300821", "300821", "300821" ]
01470683
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470683/file/978-3-642-40361-3_85_Chapter.pdf
Cecilie M Damgaard Vivi T Nguyen Hans-Henrik Hvolby Kenn Steger-Jensen Perishable Inventory Challenges Keywords: Perishability, Inventory Control, Retail, Fresh Food Supply Chains Introduction Product perishability is of major concern in many industrial sectors. Fresh food, blood products, meat, chemicals, composite materials and pharmaceuticals are all examples of perishable products that can deteriorate and become unusable after some finite time [2; 8; 15]. Other industries supplying services also deal with a variety of perishable products, such as airfares, hotels and concerts [START_REF] Belobaba | The Global Airline Industry[END_REF]. This complex issue supports the rationale for investigation into perishable inventory control in several industries as well as most countries. Handling of perishable inventories occurs naturally in many practical situations. Perishable products are managed naturally in practice, as the length of the lifetime of the product defines the maximum interval between orders [START_REF] Nahmias | Perishable Inventory Theory: A Review[END_REF]. However, not many inventory control models take the perishability of the products into account, which is a weakness of these models. Most inventory control models assume that stock items can be stored indefinitely to meet future demands, as in the case of the EOQ model. However, when dealing with perishables, the product lifetime must be taken into account in inventory models. Perishable inventory is a challenge for companies both from a managerial and an operational point of view. The retail industry in Denmark deals with perishable inventory on a daily basis. The challenges arise when unsold perishable items approach their expiration date. Then management has to decide whether or not to sell the items at a lower price than expected or simply consider the remaining inventory as waste. To investigate these issues a single case study has been conducted in the Danish retail industry. When the inventory holds the same product variant but with different expiration dates, a challenge occurs since the items have different quality levels due to the different lengths of remaining shelf life. Decisions can be difficult, as the remaining items need to be taken into account when making replenishments. In order to ensure a higher profit when dealing with perishable items, the shelf life must be taken into account. According to Gürler [START_REF] Gürler | Analysis of the (s, S) policy for perishables with a random shelf life[END_REF] a significant reduction in the cost function can be obtained by explicitly taking into account the randomness of the shelf life, and the system costs differ drastically among various shelf life distributions, which implies that a precise estimation of the shelf life distribution is desirable. This exploratory research is based on the belief that better knowledge about the products' remaining shelf life, and use of this information in the inventory control, can help retail managers with better planning and less use of manual resources. Therefore, the effects of a product's lifecycle are important to identify for managers and researchers alike. 
In the following, an overview of existing inventory control for perishable items is presented, as well as a discussion of the ordering strategies for retailers. The paper ends with an identification of important parameters in perishable inventories, which need to be taken into account in perishable inventory control models. Perishable Inventory Models The literature on perishable inventory is generally divided into two streams: perishable inventory theory and the dynamic nature of the perishables. Together, the literature discusses the impact and consequences perishable items have on inventory control models. The interest in the research literature about perishable inventory has increased over the last two decades [START_REF] Gürler | Analysis of the (s, S) policy for perishables with a random shelf life[END_REF], and Nahmias [START_REF] Nahmias | Perishable Inventory Theory: A Review[END_REF] presented an extensive review of the relevant literature on the lot-sizing problem with deteriorating and perishable items. Later, several authors have contributed to the development of a number of inventory models for deteriorating items [START_REF] Hariga | A The Inventory Replenishment Problem with a Linear Trend In Demand[END_REF]. Nahmias [START_REF] Nahmias | Optimal Ordering Policies for Perishable Inventory-II[END_REF] defined characteristics of a perishable item with a limited shelf life and introduced ways of using the traditional models to account for items having a limited lifetime. One way is by defining the periods according to the length of the item's lifetime. An example is the use of the simple EOQ model, where the annual demand in the general EOQ is set to be the expected demand over a period equal to the item's lifetime. This ensures that no units expire [START_REF] Nahmias | Perishable Inventory Theory: A Review[END_REF]. Nahmias [START_REF] Nahmias | Optimal Ordering Policies for Perishable Inventory-II[END_REF] also contributed to the literature as one of the first to derive and evaluate optimal order policies for obsolescent products with a shelf life greater than two periods. This was followed by an in-depth understanding of the quality deterioration of perishable items [8; 5; 4]. Pierskalla and Roach [START_REF] Pierskalla | Optimal Issuing Policies for Perishable Inventory[END_REF] show that issuing the oldest item first, through the FIFO principle, is the optimal policy from the perspective of the retailer when the objective is to minimise total inventory holding cost (and waste). However, in practice this is not always controllable by retailers. Often, customers select the latest delivered products first (especially fresh food), thus the LIFO principle is a more realistic assumption [START_REF] Ferguson | How Should a Firm Manage Deteriorating Inventory?[END_REF]. Some authors dealing with perishable inventories suggest use of the newsvendor model; others suggest use of periodic reviews based on the item's lifetime [START_REF] Nahmias | Optimal Ordering Policies for Perishable Inventory-II[END_REF]. Most inventory models for perishable items are based on the traditional models, which are often driven by a cost reduction focus. However, few models have a profit optimisation focus, where the price of the perishable product is an explicit variable in the model [START_REF] Ferguson | How Should a Firm Manage Deteriorating Inventory?[END_REF]. 
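To make the shelf-life adaptation of the EOQ described above concrete, the sketch below computes the classic order quantity and then caps it at the demand expected within the product's lifetime, so that no units expire. It is a minimal illustration under stated assumptions; the function name and all input values are invented for the example and are not figures from the case study or the cited models.

import math

def eoq_with_shelf_life(annual_demand, ordering_cost, holding_cost_per_unit_year, shelf_life_days):
    # Classic economic order quantity.
    classic_eoq = math.sqrt(2 * annual_demand * ordering_cost / holding_cost_per_unit_year)
    # Demand expected during the product's lifetime; ordering more than this would create waste.
    demand_during_shelf_life = annual_demand * shelf_life_days / 365.0
    return min(classic_eoq, demand_during_shelf_life)

# Example with invented numbers: 10,000 units/year, 50 per order, 2 per unit per year, 7-day shelf life.
order_quantity = eoq_with_shelf_life(10_000, 50, 2, 7)
print(f"Order quantity capped by shelf life: {order_quantity:.0f} units")

With these assumed numbers the shelf-life cap, not the cost trade-off, determines the order quantity, which is exactly the situation the adaptation is meant to handle.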
The newsvendor model is another model with a profit optimisation focus, as it captures the possible sales opportunities and losses. The objective function of the inventory control model has an impact on the outcome, but also on the operational level in the company. Depending on the domain of industry and the ordering strategy, there is a need to consider the objective function. A segmented overview of traditional inventory control policies is illustrated in figure 1. Ordering Strategy Besides the elements of the traditional inventory models, some additional elements have to be acknowledged when dealing with perishables. As perishable inventory undergoes change in storage, it may in time become partially or entirely unfit for consumption. The lifetime of a product is often measured in days, which defines when the product becomes unacceptable for consumption [START_REF] Donselaar | Inventory Control of Perishables in Supermarkets[END_REF]. Dealing with perishable inventories has a large impact on the inventory control model. Most inventory models assume that stock items can be stored indefinitely to meet future demands, which is not the case with perishable items [START_REF] Nahmias | Optimal Ordering Policies for Perishable Inventory-II[END_REF]. Thus, the assumption of infinite shelf life for the inventoried items has generated a considerable amount of criticism of the EOQ model in the inventory lot-sizing literature [START_REF] Hariga | A The Inventory Replenishment Problem with a Linear Trend In Demand[END_REF]. A more diversified picture of strategies, which also affect the inventories, is seen when comparing high-end stores and discount stores. A general picture seems to be that high-end stores aim at customer service in terms of a broad assortment and high product availability at the expense of higher inventory investments, while discount stores aim for a limited assortment and low inventory at the expense of possible out-of-stock situations. This case study is based on both high-end stores and discount stores. From the case study it was found that the decision-making regarding the products' aftercare is based on the individual manager's perception of the characteristics of the perishable items and how the highest profit margin can be achieved. Another challenge found in the case study is that the retailers are not able to use point of sale (POS) data to identify products that are about to reach their expiration date. Instead, many manual resources are used to identify the products which have reached or almost reached the expiration date. This is due to the fact that the barcode used for POS identification at checkouts does not include the product's expiration date. Several major challenges exist for establishing the input for calculating the order quantity:  How to estimate the current inventory level when having products of varying lifetime stored?  How to set the sales price for products with more or less reduced lifetime?  How is the remaining demand affected if products with reduced lifetime are sold at a reduced price (also titled cannibalisation 1 )? 1 Cannibalization is the term used when old products sold at a reduced price lead to a reduced sale of new products [START_REF] Desai | Quality Segmentation in Spatial Markets. When Does Cannibalisation Affect Product Line Design[END_REF]. If the old products are sufficiently attractive, the customers may find it beneficial enough to buy the old products rather than the new products. 
The term originates from the literature of marketing strategy, which describe that competition among product types lead to a reduction of sales volume, sales revenue, or market share. Cannibalization can also occur when the producer or their competitors introduce new products. Demand-Price Relation Ferguson and Koenigsberg [START_REF] Ferguson | How Should a Firm Manage Deteriorating Inventory?[END_REF] present the mathematical function, that if the relation between demand-price is known for a given product, then the desired sales price for new and old products can be found through a linear function, based on the market potential, order quantity of new products, and inventory level of old products. If old products are sold at a discount price they might capture the entire demand share and the retailer will obtain a smaller profit margin of the product but avoid waste. Thus, the expected profit margin of a single product has to be revised by the retailer in order to recalculate the total profit. Desai [START_REF] Desai | Quality Segmentation in Spatial Markets. When Does Cannibalisation Affect Product Line Design[END_REF] has further investigated the consequences of the competition of the products caused by cannibalisation. It can be interpreted from Desai's findings, that by having the right price strategy in the product differentiation of old and new products, they become substitutes of each other instead of competing for the same demand. Quality Categorisation The contributions which have value for the literature of perishable inventory control, are the in-depth understandings of the quality deterioration of perishable items [4; 5]. The quality of the product is a function of product life time, however there is a difference in the perception of the quality from the customer view and the retail store view. The perception of the quality is a reflection of the price the customers are willing to pay where the retail stores are speculating on determining the price. The retail stores are always seeking to mark the price as high as possible in order to gain a higher profit margin. From a retailer's point of view this is done by speculating about the customers' behaviour and perception of the quality whereas the customers consider the quality of the products from the value for money perspective. The categorisation of the quality level is inspired by Ferguson and Koenigsberg [START_REF] Ferguson | How Should a Firm Manage Deteriorating Inventory?[END_REF], Talluri and Ryzin [19], and the retail case study. As illustrated below in figure 2 the quality level can be categorised into three main types based on the characteristics of the products, as it is believed to be representative for all product types. In the first type of quality, the product maintains the same quality level, but never reaches a value of zero over time. However, the rapid changes in the market, such as season and trends, have an influence on the customers' perception of quality level of the products. Therefore, the end of the product shelf life will often be determined by the customer's perception. Type one is not affected by the time dimension and can therefore be managed from the traditional inventory models. The second type of quality level is a product, where the quality level degrades continuously over time and at the end of the shelf life has reached the value of zero. 
Often, the product within this type of quality does not reach a value of zero before a replenishment of new products arrives [START_REF] Ferguson | How Should a Firm Manage Deteriorating Inventory?[END_REF]. Thus, a single product type in the retail stores can have different levels of quality on the shelves. Therefore, product quality is an important parameter in the inventory control for type two products. The third type of quality level is where the products maintain the same quality level over time but become unusable after the expiration date. The customers' perception of quality does not decrease over time, but has no value for the customers, when the product expires at the given date. As the product can only be used before a given date, customers prefer to purchase closer to the date in order to ensure that the product will be utilised. Thus, many firms providing these product services set multiple prices for the products depending on the demand, number of remaining product, and, remaining time before expiration date. The purpose of this is to enhance the demand for purchasing in advance, in order to ensure the highest possible profit margin. Furthermore, it is also an approach the companies take in order to reduce the risk of having unsold products left. [1; 5]. Parameters in Perishable Inventory Control Dealing with the nature of perishability in inventory control, the quality deterioration of the perishable products has to be taken into consideration together with the unsold inventory from previous periods. The reason is that the different quality levels have different values, and therefore cannot be considered the same, since the products with the lowest quality will have a lower attractiveness for some customers. Some companies meet the different quality levels by differentiating on sales prices, though this affects the profit margin. If the inventory control objective function is based in profit maximisation, then it might be beneficial to incorporate the different product qualities in the development of the inventory control model. A major challenge is the aftercare of the unsold perishable products, as this has a great impact on the ordering policy, when finding the optimal order quantity. Many retailers, including the case company, do not have a predefined strategic decision making process describing what to do with the unsold inventory before the perishable products have almost or reached the expiration date. The consequences are; the companies do not optimize the profit gain from the potential market. The "mark down" price decision with the purpose of minimizing the loss of investment often fails due to the timing. In the development of a perishable inventory control model some important parameters, has to be evaluated. The latter contribution is the identification of the elements: 1) demand, 2) periods, 3) quality deterioration of the perishable item, 4) the aftercare of the items including price of the new and the old perishable items and 5) total cost and profit margin function. Conclusion and further work The paper identifies relevant parameters which will serve as input to the further development within perishable inventory control models. They are 1) demand, 2) periods, 3) quality deterioration of the perishable item, 4) the aftercare of the items including price of the new and the old perishable items and 5) total cost and profit margin function. 
Furthermore, it provides a deeper understanding of the nature of perishables in the retail industry, together with an identification of the importance of how the aftercare of items is conducted. It is important to acknowledge the role of product quality, and this parameter must therefore also be considered in the inventory model. Furthermore, when dealing with perishable inventory control, the pricing of products has to be taken into account. Lastly, better aftercare of items close to their expiration date will help reduce investments in inventory and ensure a better profit margin. Further work will aim at developing a mixed model that is able to handle the time dimension of existing inventory. The case study showed that the existing models do not consider that companies hold the same products with different lifetimes left. Further, the options and outcomes of improved information sharing in supply chains will be studied [6; 16; 18]. Fig. 1. A segmented overview of traditional inventory control policies. Fig. 2. Three basic quality level categorisations inspired by [5; 19]. ACKNOWLEDGEMENT The authors of this paper graciously acknowledge the funding provided by NordForsk for the LogiNord "Fresh Food Supply Chain" project and Christopher Martin for his valuable comments in the paper writing process.
18,545
[ "991658" ]
[ "300821", "300821", "300821", "300821" ]
01470684
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470684/file/978-3-642-40361-3_86_Chapter.pdf
John Dilworth Ashok Kochhar email: [email protected] Assessing the Impact of Management Concerns in E-business Requirements Planning in Manufacturing Organizations Keywords: E-Business, Requirements Planning, Manufacturing, Management Concerns, Company characteristics, Supply Chain, Collaborative Working Introduction E-business as a concept emerged around the turn of the millennium. During the intervening period, e-business (in so far as it is applied to business-to-business interaction between manufacturing and distribution organizations) seems to have become almost indistinguishable from modern thinking in Supply Chain Management. The theory of optimization, collaboration and timely information availability seems central to supply chain management thinking [for example 1, 2 and 3]. The essential point is that strategic supply chain management demands collaboration among all participants in the value chain. The real business benefits only occur when the entire supply chain is optimized. The problem is that the whole concept stands or falls by collaboration, so that everyone will behave for the greater good. If it can be achieved, it may be worth doing; if it cannot, for whatever reason (politics, human behavior, relative power structures, insufficient information availability), then the whole thing can be a potentially expensive failure to achieve anything worthwhile. The e-business domain therefore deals with functionalities that not only are under the control of external organizations (for example customers and suppliers) but are only of any meaning when such organizations are working collaboratively. Any attempt to assess the relevance of e-business concepts must therefore find a way of dealing with the subjective concerns that might exist as to factors that may inhibit such collaborative working. For example, buyers and sellers could use some sort of collaboration tool to upload forecasts and actual plans, and then a given buyer and seller could go into a collaborative mode to come to a common understanding and consensus on what the seller is going to supply to the buyer. The question begged is whether this behavior is realistic, taking into account politics, human behavior and the realities of business. It would seem therefore that any attempt to assist organizations with assessing the relevance of possible e-business initiatives must find a way of reflecting the impact of these "softer" issues in determining the initiatives that are likely to be practical and hence beneficial. This paper describes the development of and experience with an e-business requirements model that attempts to deal with these more subjective concerns that might exist as factors inhibiting such collaborative working. An e-business requirements model to deal with concerns There are many reported examples of "models" [4, 5, 6, 7 and 8] that find ways in which e-business activity can be categorized so that its structure can be understood. While all of these can provide useful insights into the e-business concept, an approach that dealt with specific detailed functions was felt to be required. Dilworth and Kochhar [START_REF] Dilworth | Creation of an e-business requirements specification model[END_REF] proposed a systematic process that can identify, at a useful level of detail, the probable e-business requirements of an organization based on objective criteria. 
They went on to describe the creation, testing and validation of an e-business requirements specification model to provide such a systematic process. The model constructed contained all the functions relating to the process of buying, selling, or exchanging products, services, and information via computer networks. The functions supported by the model are organised into three broad categories • Demand Side Functions; • Supply Side functions relating primarily to "outside" partners; • Supply Side Functions relating primarily to "inside" the organization. The Demand Side Functional Domains were • Product Development and pre-production -the functions involved in communicating customer-related design and engineering information and change requests. • Demand Management -the communications relevant to the process of creating and recording customer demand within the organization's systems. • Supply Chain Planning -the functions involved in the process of responding to the customer's supply chain planning requirements of you as a supplier • Outbound Logistics -the communications relevant to the dispatch of goods, to and from the customer and other external partners. • Customer Accounting -the communications with customers or other financial organizations relating to the receiving of money for goods sold. • Service -For those organizations that provide post sales service the functions relating to the management of remote service activity The Supply Side "outside" functional domains were • Product Development and pre-production -the functions involved communicating supplier-related design and engineering information and change requests. • Supply Chain Planning -the functions involved in the process of planning what needs to be supplied by external suppliers and other partners. • Purchasing and Procurement -the interactions relevant to the management of the supplier base and the communication of demand to the supplier. • Inbound logistics -the communications to and from the supplier and other external partners that are relevant to the receiving of goods. • Manufacturing -in a manufacturing "network" situation, the communications necessary between partner plants and sub contractors. • Supplier Accounting -the communications with suppliers or other financial organizations relating to the payment of money for goods supplied • Maintenance -the functions related to communication with external partners involved in planning and executing maintenance activity The Supply Side "inside" functional domains were • General Finance -includes those financial management, planning, budgeting or treasury activities conducted amongst separated groups within an organization • Administration -administration systems managed on a centralized basis (e.g. personnel records, time recording) involving geographically dispersed groups. Model Structure A preliminary rationale (i.e. a set of reasons) was produced, based on expert knowledge and discussions with industrialists, in order to link these e-business functions to possible objective characteristics and subjective management concerns. • Characteristics were defined as facts about the business. In principle, company characteristics are intended to be as objective and factual as possible and should be capable of being measured or counted or at least estimated to a reasonable level of accuracy. The number of customers, or the number of items dispatched per year are examples of such facts. 
• Concerns are defined as the attitudes or opinions, concerning internal constraints and/or customer/supplier behavior, which can influence the relevance or practicality of e-business functions. Concerns could conceptually be thought of as being either a reason for doing something (for example the opinion of excessive current clerical activity could be a motivation for automating a clerical task), or a reason for lack of confidence in the success of an initiative (for example suppliers are not sufficiently technologically competent). A reasoning structure was developed to link the characteristics and concerns of the company through detailed reasoning to an overall verdict as to the overall relevance of a given function. The model thus produced was tested on a variety of case studies and was demonstrated as improving in reliability as case studies progressed. Methods were developed whereby conclusions from the model could be presented at a "management" level of detail, and whereby useful insights could be provided. In this model the concept of management concerns was added to objective factual characteristics. Concerns were intended to address the issue of how internal attitudes or customer/supplier behavior can make or break the relevance of certain functionalities irrespective of the objective relevance or otherwise of these said functions. In the model, concerns tended to have one of two effects: • They represent a problem that ought to be a motive for interest in an e-business function (for example excessive current clerical activity ought to be a motivation for automating a clerical task); • They represent a problem that would tend to prevent an e-business function from being useful, or a reason for lack of confidence in the success of an initiative (e.g. our suppliers cannot cope with our e-business oriented communication with them). Discussion of the impact of concerns uses a synthesis of the detailed results of the model that was described as an "e-business profile". This is a simple analysis of the proportion of functions triggered (in relation to the total of those possible) in each functional domain converted to a percentage score and presented visually. This provides a simplified presentation, of what are in fact very detailed results, in an accessible format. Testing of the computerized model in 13 manufacturing organizations has shown that it can generate E-business requirements with a high level of accuracy and requires typically one man day of effort compared to months required to carry out the same task using conventional systems analysis techniques. [START_REF] Dilworth | Creation of an e-business requirements specification model[END_REF] It was possible to use these profiles to discuss in general terms the potential impact of management concerns in the achievement of e-business possibilities. Running the model both with and without taking into account the management concerns makes it possible to gain additional insight. A version without concerns potentially provides a more objective analysis, while using the version with concerns brings in more subjective factors. By analyzing the difference between the two, the potential was recognized for achieving extra insights, for example: • What are the real needs of the organization as opposed to what can be reasonably expected to give benefits assuming current attitudes? • Which areas of e-business are most adversely affected by current attitudes and concerns? • How difficult might be the implementation of e-business functions? 
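To illustrate how such a profile could be computed, and how running the model with and without concerns could be compared domain by domain, the following sketch derives the percentage scores from counts of triggered functions. The domain names follow the paper, but all counts are invented for the illustration and do not come from the thirteen case studies.

# Hypothetical counts: possible functions per domain and how many were triggered.
possible_functions = {"Demand Management": 12, "Supply Chain Planning": 15, "Outbound Logistics": 9}
triggered_with_concerns = {"Demand Management": 4, "Supply Chain Planning": 5, "Outbound Logistics": 3}
triggered_without_concerns = {"Demand Management": 7, "Supply Chain Planning": 11, "Outbound Logistics": 4}

def profile(triggered, possible):
    # Percentage of possible functions triggered in each functional domain.
    return {domain: round(100 * triggered.get(domain, 0) / total) for domain, total in possible.items()}

with_concerns = profile(triggered_with_concerns, possible_functions)
without_concerns = profile(triggered_without_concerns, possible_functions)

for domain in possible_functions:
    suppressed = without_concerns[domain] - with_concerns[domain]
    print(f"{domain:<22} with concerns: {with_concerns[domain]:3d}%  "
          f"without: {without_concerns[domain]:3d}%  suppressed by concerns: {suppressed:3d}%")

The gap column is the kind of insight the comparison is meant to expose: domains where concerns, rather than the objective characteristics, are what hold back the e-business functions.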
The model run without concerns can therefore be regarded as an indication of the theoretical relevance of the functions to the organization, whereas the version with concerns represents the relevance of the functions in a practical world where theory cannot always be perfectly applied! The concept could also be used as an indication of the barriers likely to be encountered in an e-business implementation, and the consequent ease of implementation and probability of success. 4 The impact of the concerns across 13 case studies The model has currently been used in thirteen case studies to develop the e-business requirements of manufacturing organizations. These case studies provided some interesting insights both into the potential relevance of e-business in a selection of manufacturing industry and also into the varying impact of these concerns. Figure 1, below, shows the e-business profile (percentage of functions triggered per domain) of a notional company representing a composite of all the case studies. This is the full output from the model including the effect of management concerns. It shows the "average" (over all the case studies) relevance of e-business functions as predicted by the model. In the e-business profiles, the functions are organised into the same three broad categories indicated earlier: • Demand Side Functions; • Supply Side functions relating primarily to "outside" partners; • Supply Side Functions relating primarily to "inside" the organization. In effect this composite profile represents a "league table" of popularity of e-business functions, at least in so far as it affects the thirteen case studies. This is interesting primarily as a weighted assessment of e-business function relevance against the sample of British Industry that our case studies represent. The most relevant are: • The General finance and Administration side representing activities largely internal to the organizations involved; • The supply chain and purchasing activities oriented to the procurement of goods On the demand side, there is a more consistent level of coverage among the functional domains. The most popular (just) e-business functions relate to Product Development with customers. For the domains of Demand Management, Supply Chain Planning and Outbound Logistics, roughly one third of the possible functions were considered relevant for the case study sample. Service functions were less triggered, but this relates more to the fact that not all the case studies provided such functions to a significant degree. Of those that did, not all managed these using remote based resources or external organizations for which electronic communication would be relevant. On the supply side far more variability of functional coverage was encountered. The most popular functions were Supply Chain Planning, Purchasing and Procurement, General Finance and Administration. At first glance it might appear to be anomalous that the functions associated with Product Development with Suppliers were not more popular. This actually is explained by the fact that many of such functions depend on the capabilities of suppliers, and in most case studies the suppliers represented smaller, less capable organizations than the case study organizations themselves (the customers). Manufacturing functions were by contrast the least triggered. 
This is perhaps to be expected because most manufacturing activity is intra-organizational rather than inter-organizational, but it also reflects the decreasing importance of manufacturing to some of the case study organizations. Maintenance functions were also less triggered, but this relates more to the fact that not all case studies managed these using external organizations, for which electronic communication would be relevant. To illustrate the impact of concerns, a second profile is provided in figure 2. The area shaded in black illustrates the additional degree of functionality that would have been considered relevant if the business concerns were ignored. As can be seen, overall, removing the concerns tended to increase the number of e-business functions that were considered relevant, but within that it is difficult to detect a pattern. A more interesting pattern is provided by profiles that illustrate, for each case study, the e-business functions actually triggered as a percentage of all the possible e-business functions that could have been triggered. These can be visualized as an overview of the relevance of the e-business concept to each of the case studies. Conclusions It can be concluded that the arguments in favor of concerns (the fact that the results are more realistic and useful) prevail against the counter-argument of undue pessimism. The key point is that the effect of these concerns in individual detailed cases can be studied at a detailed level, and therefore overridden if the effect of the concerns is unreasonable in an individual case. Having established that concerns are useful, the question arises as to whether there is value in producing results both taking concerns into account and ignoring them. Most concerns proved to be inhibitor concerns, the inclusion of which in the model tended to suppress the potential relevance of e-business functions. Excluding them would suggest that a function was relevant; including them would suggest that it was not. For example, suppose an e-business supply chain initiative is considered relevant in the absence of concerns, but concerns involving lack of support from management or lack of adequate capability of suppliers render it irrelevant. Which of these judgments is the most useful? The answer is either, depending on circumstance. The judgment including concerns could be a correct reflection of the likelihood of an organization making a success of a particular function. However, the version without concerns could be an inspirational reflection of the potential for e-business in an organization, provided that there was determination to address the said concerns. Running the model both with and without concerns, and then presenting the reasons for major variances, can expose the issues and enable them to be studied in detail at the individual function/reason level. An e-business specification ignoring concerns is a useful indication of the objective relevance of the e-business functions to the organization in a reasonably concern-free world, whereas a version with concerns represents the relevance of the functions in a practical world where theory cannot always be perfectly applied! The original idea was that by exploring the difference between the two, an indication could be obtained of the significance of the difficulties of implementation of e-business functions that otherwise might have a theoretical application. 
Although not perfect or fully developed, it was concluded that the concept was useful in most cases in exposing some of the barriers that could inhibit an e-business implementation. It could therefore be regarded as a useful indicator of the consequent ease of implementation and probability of success. Figure 1 - Composite e-business profile. Figure 2 - Composite e-business profile with and without concerns.
17,928
[ "1002004" ]
[ "26904", "26904" ]
01470687
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470687/file/978-3-642-40361-3_89_Chapter.pdf
Pavan Kumar Sriram email: [email protected] Erlend Alfnes email: [email protected] Emrah Arica email: [email protected] A Concept for Project Manufacturing Planning and Control for Engineer-to-Order Companies Introduction ETO supply networks are dynamic and hard to define, and their planning and control functionalities are frequently affected by the actions of suppliers and customers, which typically may result in excessive inventories, long lead times, low customer satisfaction and poor resource allocation [START_REF] Hicks | Computer-aided production management issues in the engineer-to-order production of complex capital goods explored using a simulation approach[END_REF]. Due to the high complexity of the products, customers are closely involved from the design phase to the engineering phase of a product [START_REF] Braiden | Engineering design and product development and its relationship to manufacturing: A programme of case study research in British companies[END_REF]. ETO companies cannot forecast due to unknown sales and product specifications for future orders [START_REF] Hicks | Computer-aided production management issues in the engineer-to-order production of complex capital goods explored using a simulation approach[END_REF]. This type of manufacturing environment requires a modified manufacturing planning and control (MPC) method to suit these characteristics, and in this paper we highlight and discuss some of the classical MPC approaches such as Kanban, Manufacturing Resource Planning (MRP II) and Theory of Constraints (TOC), and also discuss techniques such as Workload Control (WLC), Constant Work In Process (CONWIP) and Paired cell Overlapping Loops of Cards with Authorization (POLCA), and their applicability for ETO [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to make -to-order industry[END_REF]. The paper starts with a description of the methodology, followed by an introduction to the empirical background of ETO. Next, MPC is defined, while the two subsequent sections outline and discuss issues concerning the current MPC approaches and their limitations, and Project manufacturing planning and control (PMPC), respectively. The conclusion outlines the paper's contributions and some suggestions for further research. Methodology This conceptual paper is a theoretical discussion of the MPC approaches applicable to an ETO environment. Research on MPC approaches for ETO situations is scarce, since the majority of research focuses on a mass production environment. The paper's theoretical base is within planning and control, and the discussion builds on improving the existing MPC system proposed by Vollmann [START_REF] Vollmann | Manufacturing Planning and Control for Supply Chain Management[END_REF] through recommendations that have been carefully identified through a review of the literature in the form of international journal publications, scientific textbooks, and white papers, in order to capture the main challenges, approaches and solutions from previous researchers on MPC in the ETO sector. The aim of the paper is therefore not to provide solutions to these highly complex issues, but rather to highlight, discuss and develop a PMPC framework based on existing literature. 
ETO Sector characteristics and requirements ETO companies are characterized by time-limited projects related to the supply of complex equipment to third parties, and this process often includes the phases: design, manufacturing, installation, and commissioning [START_REF] Stevenson | Aggregate load-oriented workload control: A review and a re-classification of a key approach[END_REF] and [START_REF] Braiden | Engineering design and product development and its relationship to manufacturing: A programme of case study research in British companies[END_REF] state that that the decoupling point is located at the design stage, and operate in project specific environments. According to Hicks [START_REF] Hicks | A typology of UK engineer-to-order companies[END_REF] ETO products are manufactured and assembled in low volumes to satisfy individual customers' specifications. Stevenson et al., [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to make -to-order industry[END_REF] agrees on this and described the production volume as batch of one to very low volume. The production volume is not mention by all authors but there is no disagreement found in literature that the production volume is low within the ETO environment. Bertrand and Muntslag [START_REF] Bertrand | Production control in engineer to order firms[END_REF] describe control characteristics of the ETO production situation by using the following three aspects: dynamics, uncertainty and complexity. The engineer-to order firms have to cope with strong fluctuations in mix and sales volume in the short and medium term. It is impossible to cope with these fluctuations by means of, for example, creating capacity stock because of the customer order driven production. This dynamic market situation asks for a lot of flexibility to cope with these fluctuations. Gosling and Naim [START_REF] Braiden | Engineering design and product development and its relationship to manufacturing: A programme of case study research in British companies[END_REF] and Little [START_REF] Little | Integrated planning and scheduling in the engineer-to-order sector[END_REF] describe that flexibility is a condition for ETO firm success, an ETO company needs to deal with strong fluctuations in mix and sales volume. The second characteristics uncertainty is the difference between the amount of information required to perform a task and the amount of information already available in the organization. And in an ETO environment uncertainty is high for both the process as for the products for example in terms of specification, demand, lead times and the duration of processes [START_REF] Hicks | Computer-aided production management issues in the engineer-to-order production of complex capital goods explored using a simulation approach[END_REF] The third characteristic mentioned by Bertrand and Muntslag [START_REF] Bertrand | Production control in engineer to order firms[END_REF] is complexity and it exists because information is unknown and changes are bound to occur over time. Little [START_REF] Little | Integrated planning and scheduling in the engineer-to-order sector[END_REF] states that a common feature of ETO manufacturing is for the customers to change their requirements over the time of the production. 
ETO MPC approaches and existing solutions Classical manufacturing planning and control methods such as Material Requirements Planning, Workload Control, Drum-Buffer-Rope, Kanban and CONWIP are briefly described on the next pages. At the end of this description the characteristics of the methods are summarized in Table 1. Material Requirements Planning (MRP) and Manufacturing Resource Planning (MRP II): MRP is a periodic push-based system designed for complex production planning environments [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to make -to-order industry[END_REF]. In a study by Sower and Abshire [START_REF] Spearman | Production rates, flow times and work in process levels in a generalized pull production system[END_REF] it was found that one-third of the manufacturing companies studied use packages such as MRP. MRP II often offers greater functionality than MRP because of the wider integration of a number of modules and company operations. According to [START_REF] Braiden | Engineering design and product development and its relationship to manufacturing: A programme of case study research in British companies[END_REF] the choice of an MRP II system is often based on its wide availability. They also state that many engineer-to-order firms have tried to implement MRP II without success. Workload Control (WLC): WLC is an MPC method designed for highly complex production environments like job shops and the MTO/ETO industry. Land and Gaalman [START_REF] Land | Workload control concepts in job shops -a critical assessment[END_REF] mention that WLC works particularly well in the job shop environment, reducing shop floor throughput time (SFTT) and WIP. According to Stevenson and Hendry [START_REF] Stevenson | Aggregate load-oriented workload control: A review and a re-classification of a key approach[END_REF] WLC originates from the concept of input and output control. The input of work to the shop floor is controlled in agreement with the capacity of work centres (the output rate) in order to regulate and maintain a stable level of WIP. The Workload Control method will not be simulated, not because it does not fit an engineer-to-order environment well, but because this method needs to be included in the demand planning at the medium-term planning horizon. Drum-Buffer-Rope (DBR): The Theory of Constraints (TOC) is a bottleneck-oriented concept which was developed from Optimized Production Technology (OPT) and is commonly attributed to the work of Goldratt [START_REF] Stevenson | Aggregate load-oriented workload control: A review and a re-classification of a key approach[END_REF]. The production planning and control method is now better known as the Drum-Buffer-Rope (DBR) approach. Under the TOC philosophy, the bottleneck should be scheduled at 100% utilisation because the bottleneck determines the performance of the whole production system. The bottleneck work centers are the drums and are used to control the workflow. The rope refers to "pull" scheduling at the non-bottleneck work centers. The purpose of the rope is to tie the production at each resource to the drum. The buffers are used to protect the throughput of the bottleneck work centers. The goal of the DBR method is to break a constraint, or bottleneck condition, and thereafter identify the next constraint. This continuous improvement process is an integral part of the TOC philosophy [START_REF] Vollmann | Manufacturing Planning and Control for Supply Chain Management[END_REF]. 
Wahlers and Cox [START_REF] Wahlers | Competitive factors and performance measurement: applying the theory of constraints to meet customer needs[END_REF] highlight its applicability to highly customized industries, where companies were able to reduce lead time and improve delivery reliability performance. KANBAN: Kanban is a card-based production system in which the start of one job is signalled by the completion of another. According to Stevenson et al. [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to make -to-order industry[END_REF] there are many variations of the Kanban system, but in the simplest form the cards are part-number specific. Kanban is not a suitable method for the engineer-to-order environment because of the routing variability, small batch sizes and lack of repetition. POLCA: POLCA is an abbreviation for Paired-cell Overlapping Loops of Cards with Authorization [START_REF] Suri | Quick Response Manufacturing. A Companywide Approach to Reducing Lead Times[END_REF]. It is an MPC method that regulates the authorization of order progress on the shop floor in a cellular manufacturing system and controls the flow of work between production cells. The method was introduced by Suri [START_REF] Suri | Quick Response Manufacturing. A Companywide Approach to Reducing Lead Times[END_REF] in his book on Quick Response Manufacturing (QRM), a management philosophy focused on the reduction of lead time. CONWIP: CONWIP stands for Constant Work In Progress and is a continuous shop floor release method. The CONWIP system was proposed in [START_REF] Spearman | Production rates, flow times and work in process levels in a generalized pull production system[END_REF] and further presented in [START_REF] Spearman | CONWIP: a pull alternative to kanban[END_REF]. CONWIP uses cards to control the amount of WIP: no part is allowed to enter the system without a card (authority), and after a finished part is completed at the last workstation, its card is transferred to the first workstation and a new part is pushed into the sequential process route. Spearman et al. [START_REF] Spearman | Production rates, flow times and work in process levels in a generalized pull production system[END_REF] note that CONWIP sets a limit on the total WIP in the entire system. CONWIP and Kanban are both card systems, but Kanban sets a limit on the number of jobs between every pair of adjacent stations rather than on the WIP level of the entire system. Hopp and Roof [START_REF] Hopp | Setting WIP levels with Statistical Throughput Control (STC) in CONWIP production lines[END_REF] mention that Statistical Throughput Control is used to change the number of cards and thereby regulate the level of WIP. This requires accurate feedback data, which is difficult to provide in a complex manufacturing environment like ETO. In Table 1 all the described manufacturing planning and control methods and their characteristics are summarized. It is important to be aware of these characteristics and to take them into consideration when selecting an MPC method for an engineer-to-order organization. Based on the descriptions and the summary, WLC and POLCA have some degree of relevance to planning and control activities in an ETO environment, while the other methods lack the capability to meet its requirements.
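To make the card-based release mechanisms concrete, the following sketch simulates a minimal CONWIP loop: a fixed number of cards caps the total WIP of the whole line, and a new order may only be released when a finished job returns its card. The routing, processing behaviour and card count are invented for illustration and are not taken from the cited sources.

```python
import random
from collections import deque

random.seed(1)

CARDS = 4            # CONWIP limit on total WIP across the whole line (assumed value)
STATIONS = 3         # number of work centres in the sequential routing
SIM_STEPS = 30       # simulated time buckets

backlog = deque(f"job{i}" for i in range(20))   # confirmed orders waiting for release
line = [deque() for _ in range(STATIONS)]       # queue in front of each station
free_cards = CARDS
finished = []

for t in range(SIM_STEPS):
    # Release rule: a job enters the line only if a card is available (total WIP < CARDS).
    while free_cards > 0 and backlog:
        line[0].append(backlog.popleft())
        free_cards -= 1

    # Each station completes at most one job per time bucket (a crude capacity model).
    for s in reversed(range(STATIONS)):
        if line[s] and random.random() < 0.7:          # 70 % chance the job finishes this bucket
            job = line[s].popleft()
            if s == STATIONS - 1:
                finished.append(job)
                free_cards += 1                        # the card travels back to the release point
            else:
                line[s + 1].append(job)

    wip = sum(len(q) for q in line)
    assert wip <= CARDS                                # the invariant CONWIP enforces

print(f"finished {len(finished)} jobs, WIP never exceeded {CARDS}")
```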
As of now there is only one MPC framework common to all types of manufacturing. Issues such as demand forecasting, short lead times and urgent deliveries, control of engineering changes, dynamic scheduling and the coordination of activities call for a modified MPC framework, since even approaches like WLC and POLCA might not be the most efficient [START_REF] Stevenson | A review of production planning and control: the applicability of key concepts to make -to-order industry[END_REF] at meeting the overall requirements arising from the ETO characteristics. Table 2 summarizes the manufacturing planning and control literature in ETO reviewed here. Based on this summary and the preceding discussion we propose PMPC, which is described in the next section. Literature and Conceptual Framework to establish design and manufacturing for ETO companies Project manufacturing planning and control (PMPC) is a modification of the well-known MPC framework developed by [START_REF] Vollmann | Manufacturing Planning and Control for Supply Chain Management[END_REF] to the project manufacturing environment. The framework illustrates the planning processes in project manufacturing and their interconnections (see Fig. 1). When implemented in an MPC environment, the proposed framework acts as a decision support system and can serve as a foundation for further work in this area. In project MPC, the demand management function covers the management of sales order lines and, based on the resemblance between orders, assigns them either to project groups ("planning groups") or to single projects. The demand management process is carried out in collaboration with engineering management tools, where preliminary design and configuration activities are performed. As the detailed engineering activities are accomplished over time, the project schedule and tasks are updated and customized parts/items are specified. Projects and tasks create demands for specific parts/items, and these demand items in turn request resources. Aggregate demand planning for these main parts is done in the sales and operations planning function. Since the project environment requires a network of subcontractors, suppliers, and outsourcers, network activity planning, namely subcontracting and operation rates, should also be identified at this overall planning level, and a resource capacity check should be made at this level as well. Project tasks are then pegged to the demand items/parts related to each task, and these demand parts/items are also pegged to the assigned capacities. Master production scheduling is performed on modules that are defined in the product structure. This planning process is useful in project manufacturing for the planning of long-lead-time parts and sub-assemblies that require preproduction or need to be ordered in advance. Material requirements planning is controlled by different reservation levels that are assigned to items or item groups: some items are project specific, some can be assigned to project groups, and some can be identified as common supply, meaning that they can be used by all projects and project groups. When the detailed capacity is checked against the plans, the production and purchasing orders are placed. Fig. 1. Project planning and control (PMPC), covering both the project management and operations management domains.
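A minimal data-structure sketch of the reservation-level idea follows. It is not taken from the proposed PMPC framework itself; it only illustrates, with invented item names and projects, how a material demand can be matched first against project-specific stock, then against project-group stock, and finally against common supply before a net requirement is released to purchasing or production.

```python
from dataclasses import dataclass, field

@dataclass
class StockPool:
    """On-hand quantity reserved at one of the three reservation levels."""
    level: str          # "project", "project_group" or "common"
    owner: str          # project id, group id, or "ALL"
    on_hand: dict = field(default_factory=dict)   # item -> quantity

def allocate(item, qty, project, group, pools):
    """Consume stock in reservation-level order; return the remaining net requirement."""
    # Order of precedence assumed here: project-specific, then project group, then common supply.
    precedence = [("project", project), ("project_group", group), ("common", "ALL")]
    remaining = qty
    for level, owner in precedence:
        for pool in pools:
            if pool.level == level and pool.owner == owner and remaining > 0:
                take = min(pool.on_hand.get(item, 0), remaining)
                pool.on_hand[item] = pool.on_hand.get(item, 0) - take
                remaining -= take
    return remaining     # what still has to be purchased or produced

# Invented example: 7 hydraulic cylinders needed by a task of project P1 (group "offshore").
pools = [
    StockPool("project", "P1", {"hydraulic-cylinder": 2}),
    StockPool("project_group", "offshore", {"hydraulic-cylinder": 3}),
    StockPool("common", "ALL", {"hydraulic-cylinder": 10}),
]
net = allocate("hydraulic-cylinder", 7, project="P1", group="offshore", pools=pools)
print("net requirement to order:", net)   # prints 0: 2 project + 3 group + 2 common units cover the demand
```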
When an engineering change request is received from the customer, the project MPC system can be used to aid the changes related to material management and the respective adjustments of the project schedule. This is enabled by its pegging logic: having linked the orders with the supplies, activities, and project schedule, the system facilitates propagation of the engineering change request in an integrated and automated manner. These changes can then automatically adjust the project schedule through the pegging functionality, which enables the linkage between materials and activities. Hence, the updated schedule can update the timing, budget, and cost, which can be used either to negotiate with the customer or for control purposes. Conclusion and future work ETO companies face many challenges in the effective and efficient management of MPC. With this paper we aim to provide a contribution to theory by proposing a conceptual framework to understand project manufacturing planning and control. The framework will be useful for ETO companies in their ERP development and tendering practices by defining the methods and functionality that are required for efficient planning and control in ETO environments. The research has limitations due to the relatively scarce literature and the lack of empirical study and evidence; however, it reaches its main objective by increasing the theoretical knowledge in this field and building a framework for further study and applications. The framework will be further developed in collaboration with the Norwegian offshore supplier industry and the research projects "The Norwegian Manufacturing Future" (SFI NORMAN) and "PowerUp". The main limitations are related to the study's conceptual nature, and further research is required to investigate the appropriateness of the suggested approaches and techniques in practice.
Table 1. MPC method characteristics
MPC method | Push or Pull | Product mix | Volume | Flexibility
MRP & MRP II | Push | Stable | High | Low
WLC | Pull | Variable | High | High
DBR | Hybrid | Stable | Low | Low
Kanban | Pull | Stable | High | Low
POLCA | Hybrid | Variable | Low | High
CONWIP | Hybrid | Stable | Low | High
Table 2. Summary of manufacturing planning and control literature in ETO reviewed [START_REF] Stevenson | Aggregate load-oriented workload control: A review and a re-classification of a key approach[END_REF]
Topic | Author(s) | Methodology | Approaches and Solutions
Framework, problems, and implementation (MRP, ERP and MRP II) | Jin et al. (1995) | Conceptual | Proposed a new MRP framework in an ETO environment
 | Bertrand and Muntslag (1993) | Literature review and conceptual | Contradiction of MRP II and the ETO environment, a new framework
Web-based SCM and MPC | Kehoe and Boughton (2001a and 2001b); Cagliano et al. (2003) | Simulation | eSCM for planning and control
Computer-aided production management issues | Hicks and Braiden (2000) | Conceptual and simulation | MPC issues in the capital goods industry, applicability of MRP II
 | Gosling and Naim (2009) | Literature review | Robust definition of SCM in ETO
Overview of MPC | Gelders (1991) | Literature review and conceptual | Review of new MPC concepts, planning approach to monitor capacity and lead times in ETO
Integrated planning and scheduling | Little et al. (2000) | Multiple case study | MPC challenges in ETO companies, ETO planning and scheduling reference model
 | Samaranayake and Toncich (2007) | Conceptual and simulation | Integrated ERP and SCM
 | Caron et al. (1995) | Conceptual | Developed a project management model, integrating manufacturing and innovative processes
 | Yeo et al. (2006) | Conceptual | Managing projects in synchronization
 | Rahman et al. | |
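The change-propagation behaviour described above can be sketched as a graph walk over the pegging links. The example below is a toy illustration with invented task and item names; it only shows how, once demand items are pegged to tasks and tasks to the schedule, an engineering change on one item can be traced to the activities and dates it affects.

```python
from datetime import date, timedelta

# Pegging links (invented data): demand item -> tasks that consume it,
# and task -> currently scheduled finish date.
item_to_tasks = {
    "gearbox-housing": ["machining-T3", "assembly-T7"],
    "cover-plate":     ["assembly-T7"],
}
task_finish = {
    "machining-T3": date(2012, 9, 10),
    "assembly-T7":  date(2012, 10, 1),
}
successors = {"machining-T3": ["assembly-T7"], "assembly-T7": []}

def propagate_ecr(changed_item, delay_days):
    """Push the finish dates of all tasks pegged to the changed item (and their successors)."""
    affected, stack = set(), list(item_to_tasks.get(changed_item, []))
    while stack:
        task = stack.pop()
        if task not in affected:
            affected.add(task)
            stack.extend(successors[task])
    for task in affected:
        task_finish[task] += timedelta(days=delay_days)
    return sorted(affected)

# Customer requests a design change on the gearbox housing, adding 5 days of rework.
changed = propagate_ecr("gearbox-housing", delay_days=5)
for task in changed:
    print(task, "new finish:", task_finish[task])
```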
21,046
[ "991455", "991453", "991878" ]
[ "50794", "50794", "50794" ]
01470688
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470688/file/978-3-642-40361-3_8_Chapter.pdf
Volker Stich email: [email protected] Niklas Hering email: [email protected] Stefan Kompa email: [email protected] Ulrich Brandenburg email: [email protected] Towards Changeable Production Systems -Integration of the Internal and External Flow of Information as an Enabler for Real-Time Production Planning and Controlling Keywords: Changeability, real-time capability, production planning and control, machinery and equipment industry In this paper, it will be shown how information and communication technologies (ICT) act as enablers to realize changeable production systems within the German machinery and equipment industry. A cybernetic structure is proposed to design and operate systems that have to cope with a high degree of complexity due to continuously changing environment conditions. The integration of IT-Systems along the order processing of small-and-medium-sized enterprises (SME) is shown to be one of the missing links of changeable production systems in practice. A demonstration case is presented in which standardized interfaces of IT-Systems enhance real-time data exchange between the relevant planning levels of producing companies, their suppliers and customers. Introduction Nowadays, the German machinery and equipment industry faces many challenges. The increasing variety of products in combination with increasing market dynamics (e.g. shorter product life cycles) results in a growing complexity of the order management processes [START_REF] Westkämper | Wandlungsfähige Produktionssysteme -Das Stuttgarter Unternehmensmodell[END_REF][START_REF] Westkämper | Ansätze zur Wandlungsfähigkeit von Produktionsunternehmen -Ein Bezugsrahmen für die Unternehmensentwicklung im turbulenten Umfeld[END_REF][START_REF] Schuh | Produktkomplexität managen: Strategien -Methoden -Tools[END_REF][START_REF] Stratton | The strategic integration of agile and lean supply[END_REF]. Additionally, the increasing and volatile demand of customers in combination with quantity and delivery time reduction has a direct impact on value creation processes of the order management [START_REF] Westkämper | Wandlungsfähige Produktionssysteme -Das Stuttgarter Unternehmensmodell[END_REF]. On-time delivery of products has to be realized despite the volatile demand of customers1 [START_REF] Schmidt | Effizient, schnell und erfolgreich -Strategien im Maschinenund Anlagenbau[END_REF]. Today manufacturing companies cope with these turbulent market conditions by keeping extensive stock levels. High stock levels are treacherous. They hide problems within the production systems, are very cost intensive and do not increase flexibility. Additionally, the rapid spread of new technologies, aggressive competition, closer integration of goods and capital flows as well as the fragmentation and dynamic reconfiguration of value chains pose unprecedented challenges for the German machinery and equipment industry [START_REF] Nyhuis | High Resolution Production Management[END_REF]. These uncertainties directly affect the company's internal planning and control processes. The variety of these processes confronts organizations and information systems with a significant coordination effort [START_REF] Scholz-Reiter | Einfluss der strukturellen Komplexität auf den Einsatz von selbststeuernden logistischen Prozessen[END_REF]. 
According to a survey by the German association of the machinery and equipment industry (VDMA) the majority of companies considers the capability of order fulfillment processes as a key success factor for the future [START_REF] Schmidt | Effizient, schnell und erfolgreich -Strategien im Maschinenund Anlagenbau[END_REF]. To this day, planning and execution of order processing -from offer processing to the final shipment of the product -is still a part of the production planning and control (PPC) [START_REF] Schuh | Produktionsplanung und -steuerung. Grundlagen, Gestaltung und Konzepte[END_REF]. Production planning and control is almost entirely integrated into information systems. In order to manage dynamic influences on processes within order processing, a deficiency in the processing of decision-relevant and real-time information can be observed [START_REF] Swafford | Achieving supply chain agility through IT integration and flexibility[END_REF], [START_REF] Zhou | RFID and item-level information visibility[END_REF]. The human organism acts as a role model for a changeable production system One solution to these turbulent market conditions discussed by academia in recent years are flexible production systems. Flexibility may be described as the ability of a system to adapt quickly and cost efficient within a pre-defined time horizon [START_REF] Nyhuis | Wandlungsfähige Produktionssysteme -Heute die Industrie von morgen gestalten[END_REF]. However, flexible production systems are not sufficient enough to establish a sustainable, competitive position for the German machinery and equipment industry because of their limited long-term potential to react to rapidly changing external influences [START_REF] Spath | Organisatorische Wandlungsfähigkeit produzierender Unternehmen[END_REF][START_REF] Wiendahl | Wandlungsfähigkeit -neues Zielfeld in der Fabrikplanung[END_REF][START_REF] Reinhart | Reaktionsfähigkeit -Eine Antwort auf turbulente Märkte[END_REF]. A systematic approach is quickly required for a suitable design and operation of changeable production systems. In this context changeability stands for the development of flexibility, which allows changing the system reactively or even proactively beyond known uncertainties [START_REF] Nyhuis | High Resolution Production Management[END_REF], [START_REF] Spath | Organisatorische Wandlungsfähigkeit produzierender Unternehmen[END_REF], [START_REF] Wiendahl | Wandlungsfähigkeit -neues Zielfeld in der Fabrikplanung[END_REF] (cf. Fig. 1). The human organism has proven itself over millions of years as one of the most reliable and most flexible systems. Thus, it can be considered as a role-model of a functional, complex organization in management cybernetics [START_REF] Strina | Zur Messbarkeit nicht-quantitativer Größen im Rahmen unternehmenskybernetischer Prozesse[END_REF][START_REF] Malik | Strategie des Managements komplexer Systeme -Ein Beitrag zur Management-Kybernetik evolutionärer Systeme[END_REF][START_REF] Espejo | Organizational Transforming and Learning. A Cybernetic Approach to Management[END_REF][START_REF] Beer | Kybernetische Führungslehre[END_REF]. A key success factor for the ideal control of the human organism is the existence of reflexive and conscious coordination mechanisms of the different organs, muscles and nerves. The reaction of human beings depends on the situation and the type of external influences 2 . 
Any relevant information on biomechanical or electrical pulses is available to the central nerve system for both types of responses to dynamic environmental conditions in real-time [START_REF] Malik | Strategie des Managements komplexer Systeme -Ein Beitrag zur Management-Kybernetik evolutionärer Systeme[END_REF], [START_REF] Beer | Kybernetische Führungslehre[END_REF]. Therefore human beings adapt to change or anticipate the need to adapt by having a varied repertoire of actions and activities at their disposal. Thus, the human is in the position to take the appropriate action or reaction consciously or unconsciously, based on real-time information. Transferred to a production system, it means that only the transparency of information in real-time and subsequent real-time processes make changeability possible. Furthermore, decision-making mechanisms have to exist in order to select and evaluate the sum of the possible, appropriate alternatives regarding the situation requirements (cost effectiveness vs. operational effectiveness). Transferred to the machinery and equipment industry and thus transferred to concrete practical problems, it means in order to establish the ability to coordinate in real-time value networks, it needs: 1. to increase the ability of integration of various companies and business units. The use of biunique information throughout the supply chain and also a consistent definition of standardized interfaces between different IT-systems, that are being used for planning and control processes, form the technological enablers for the realization of a changeable production system [START_REF] Nyhuis | Wandlungsfähige Produktionssysteme -Heute die Industrie von morgen gestalten[END_REF]. 2. to improve the ability to respond in planning and control processes significantly. This implies, that the static planning and control logic, which is based on the Manufacturing Resource Planning (MRP II) concept, must be replaced by a decentralized operating and real-time capable planning and control logic with closedloops. Integration of ICT as a key enabler towards changeable production systems In companies different IT-systems are used to plan, control and monitor production and logistic processes. These IT-systems may be assigned to one of the four planning levels, named shop floor, detailed, rough and intercompany planning level. In practice, an integrated information flow between these four planning levels is seldom (cf. Fig. 2). Standard interfaces exist just to functional and powerful ERP systems on the market like SAP, Infor ERP.LN or Microsoft Dynamics. Whereas standard interfaces are often not available to the great majority of small and medium-sized software engi-neering companies. The resulting lack of transparency in production systems is one of the biggest weaknesses of ERP, MES, PDM and Supply Chain Management (SCM) systems. Consequences of the problem are unrealistic delivery times to the customers and inefficiencies within the business processes [START_REF] Schuh | Liefertreue im Maschinen-und Anlagenbau: Stand -Potentiale -Trends. Studienergebnisse des FIR[END_REF]. Fig. 2. Internal and external planning level Therefore the research project "WInD" aims at improving the integration ability of supply chains fundamentally. This aspect is the technological and informational enabler for the realization of a changeable production system. 
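As an illustration of what such a standardized interface has to do, the sketch below maps a production order between two invented ERP and MES record layouts via a neutral exchange format. The field names and systems are fictitious, not those of any real product; the point is only that one agreed, biunique representation lets every participating system implement a single mapping instead of one per partner.

```python
import json

# Invented ERP export record (field names are fictitious).
erp_order = {"ORD_NO": "4711", "MAT_CODE": "PUMP-100", "QTY_TOT": 25, "DATE_FIN": "2012-11-05"}

def erp_to_neutral(rec):
    """Map the ERP-specific fields onto a neutral order message."""
    return {
        "order_id": rec["ORD_NO"],
        "material": rec["MAT_CODE"],
        "quantity": rec["QTY_TOT"],
        "due_date": rec["DATE_FIN"],
    }

def neutral_to_mes(msg):
    """Map the neutral message onto the (equally fictitious) MES import layout."""
    return {"wo_number": msg["order_id"], "item": msg["material"],
            "qty": msg["quantity"], "deadline": msg["due_date"]}

neutral = erp_to_neutral(erp_order)
print(json.dumps(neutral, indent=2))   # the standardized payload exchanged between planning levels
print(neutral_to_mes(neutral))         # what the detailed-planning (MES) side would ingest
```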
Thus, the crucial standardization gaps of the concrete use case for the machinery and equipment industry are being closed in the framework of the project. Standardized interfaces should not only connect the rough with the detailed planning level (ERP-to MES-systems), but also guarantee the accurate synchronization of master data between ERP-and PDMsystems. Additionally the transfer of the electronic product code (EPC) to the machinery and equipment industry should also improve the data quality by implementing an opportunity to identify the product data (standard and custom parts) biunique in the future. The necessary data quality and information allocation in the machinery and equipment industry is currently not available due to heterogeneously constructed ITenvironments (lack of standardized interfaces and subsystem diversity) [START_REF] Straube | BMBF-Voruntersuchung: Logistik im produzierenden Gewerbe[END_REF]. Nonbidirectional data results in inconsistent master data for planning and control activities of the entire production network. Parallel manipulated master data (parts lists or material master data) in Enterprise Resource Planning-(ERP) and Product Data Management-(PDM) systems are not sufficiently (logically and in regard of content) synchronized. The concept of Manufacturing Resource Planning (MRP II) still represents the central logic of production planning and control. However, the centralized and push-oriented MRP II planning logic is not able to plan and measure dynamic processes adequately due to diverse disturbances, which often occur in production environments [START_REF] Pfohl | Logistiksysteme: betriebswirtschaftliche Grundlagen[END_REF]. The traditional hierarchical planning method leads to an iterative planning process that dissects PPC-tasks into smaller work packages. Therefore, individu-al optimization is far away from a holistic approach and therefore from the achievement of an optimal solution [START_REF] Hellmich | Kundenorientierte Auftragsabwicklung -Engpassorientierte Planung und Steuerung des Ressourceneinsatzes[END_REF]. Consequences of these problems are unrealistic delivery times to the customers and inefficiencies within the business processes [START_REF] Schuh | Liefertreue im Maschinen-und Anlagenbau: Stand -Potentiale -Trends. Studienergebnisse des FIR[END_REF]. A new decentralized and real-time capable planning and control logic, developed in the framework of the project WInD, is supposed to enable the processing of obtained real-time data according to the requirements. For this the Aachener-PPC-model as a science-accepted reference model for tasks and processes of production planning and control and particularly the enhancements of the process view for "contract manufacturers" by Schmidt will be the basic reference for the machinery and equipment industry [START_REF] Schmidt | Effizient, schnell und erfolgreich -Strategien im Maschinenund Anlagenbau[END_REF] in this project. Based on the process model by Schmidt a demonstration case will be shown applying a cybernetic structure into an industry environment. Demonstration case of changeable production systems based on an integrated ICT-structure One of the main issues of ICT research is that its results as presented by educational institutions are not directly accessible to industry. Therefore it is difficult for industry, especially SMEs, to comprehend and to adapt to the technological advances in a direct way. 
To overcome this gap numerous works have been conducted in the field of manufacturing education and industrial learning [START_REF] Chryssolouris | Education in Manufacturing Technology & Science: A view on Future Challenges & Goals[END_REF][START_REF] Chryssolouris | Challenges For Manufacturing Education. Proceedings of CIMEC[END_REF][START_REF] Shen | Challenges Facing U.S. Manufacturing and Strategies[END_REF]. Demonstration is an established instructional method for manufacturing education. Within the research project WInD funded by the German Research Foundation DFG as part of the Cluster of Excellence "Integrative Production Technology for High-Wage Countries" the Institute for Industrial Management at RWTH Aachen University and participating SMEs develop a demonstration setting that visualizes the IT-integration as an enabler for changeability. Therefore, the academic solution to changeable production systems as described in chapters 2 and 3 is consequently transferred into a practical environment to be accessible for industry. Fig. 3 illustrates the process and information flow model of the demonstration case. The process and information flow model visualizes the integration of five IT solutions: Enterprise Resource Planning (ERP), Product Data Management (PDM), Manufacturing Execution Systems (MES), an electronic market place (VDMA eMarket 3 ), an Electronic Product Code Information Services (EPCIS) 4framework and the myOpenFactory (myOF) platform 5 . The main goal of the demon-stration case is to show how the order processing with such a heterogeneous ITstructure can be automated through the integration of IT-Systems. The demonstration case features the following highlights:  Automation of the order processing of a customized product in a heterogeneous ITlandscape.  Full integration and bidirectional information flows between ERP and MES (ERP-MES interface).  Automated Engineering Change Requests (ECR) in PDM based on customer changes in ERP-system.  Full integration and bidirectional information flows between ERP and PDM (ERP-PDM interface).  Integration of a web shop for special demands and automated inquiry to potential suppliers.  EPCIS communication framework to facilitate real-time information on changes of delivery dates. Conclusion and outlook In this paper an approach for changeable production systems enabled by ICTintegration has been presented. It was shown how cybernetic principles have been transferred to a production network representative for the German machinery and equipment industry. A framework for the ICT-integration was introduced to facilitate changeable production systems in practice. To overcome the barriers between the research conducted by educational institutions and industry a demonstration case has been presented reflecting a typical setting within the SME-dominated industry. The main features of the demonstration case have been presented. The demonstration case will be implemented at the Campus Cluster Logistic (CCL) which is currently under construction at RWTH Aachen University. Within the CCL a demonstration factory will be erected demonstrating the principles of changeability and IT-integration in the manufacturing industry. Further development stages of this demonstration case at the CCL will be the integration of a generator of transaction data. Therewith, it will be possible to simulate the effects of varying different setting within ERP-Systems in the whole supply chain. 
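A minimal sketch of the event-based information flow is given below. It does not use a real EPCIS library; it only mimics the What/Where/When/Why structure of an EPCIS event (see the footnote in the original text) with plain Python objects, and shows how a subscriber such as a planner could be notified in near real time when a supplier reports a changed delivery date. All identifiers are invented.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TrackingEvent:
    """EPCIS-style event: What, Where, When, Why (simplified, not the real schema)."""
    what: str                     # e.g. an EPC-like identifier of the shipped item
    where: str                    # read point, e.g. "supplier-outgoing-goods"
    when: datetime
    why: str                      # business step, e.g. "shipping_delayed"
    new_delivery: Optional[str] = None

subscribers = []

def subscribe(callback):
    subscribers.append(callback)

def publish(event):
    for cb in subscribers:
        cb(event)

# Planner-side reaction: replan whenever a delay event for a tracked EPC arrives.
def planner(event):
    if event.why == "shipping_delayed":
        print(f"[planner] {event.what}: replan around new delivery {event.new_delivery}")

subscribe(planner)
publish(TrackingEvent(
    what="urn:epc:id:sgtin:0614141.107346.2017",   # invented EPC
    where="supplier-outgoing-goods",
    when=datetime(2012, 10, 4, 15, 5),
    why="shipping_delayed",
    new_delivery="2012-10-12",
))
```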
This research is part of the Cluster of Excellence "Integrative Production Technology for High-Wage Countries" at RWTH Aachen university were ontologies and design methodologies for cognition enhanced, self-optimizing Production Networks are being developed. Fig. 1 . 1 Fig.1. Flexibility and Changeability[START_REF] Scholz-Reiter | Einfluss der strukturellen Komplexität auf den Einsatz von selbststeuernden logistischen Prozessen[END_REF] Fig. 3 . 3 Fig. 3. Process and information flow model of the WInD demonstration case Incoming orders may show an average monthly variation of [START_REF] Pfohl | Logistiksysteme: betriebswirtschaftliche Grundlagen[END_REF]-40 percent in the German machinery and equipment industry[START_REF] Schmidt | Effizient, schnell und erfolgreich -Strategien im Maschinenund Anlagenbau[END_REF] So e.g. they might react to heat on a reflex, but to cold they might react by making a conscious decision The VDMA-E-Market (http://www.vdma-e-market.de/en/) is a platform for product search established the VDMA (Verband Deutscher Maschinen-und Anlagenbau -German Engineering Federation) [START_REF] Stratton | The strategic integration of agile and lean supply[END_REF] EPCIS is a standard to capture EPCIS-events containing the information What (e.g. a certain product), Where (e.g. outgoing goods), When (e.g. 3.05 pm) and Why (e.g. product shipped )[START_REF] Schmidt | Effizient, schnell und erfolgreich -Strategien im Maschinenund Anlagenbau[END_REF] The myOpenFactory platform acts as a standardized interface between the different ERPsystems of participating SMEs. Their ERP-systems only have to be mapped once to the platform allowing automated exchange of order processing relevant data within the whole production network. Acknowledgement The research project "WInD -Changeable production systems through integrated ITstructures and decentralized production planning and control" (02PR2160), sponsored by the German Federal Ministry of Education and Research (BMBF) and supported by the Project Holder Karlsruhe at the Karlsruhe Institute of Technology (KIT) for Production and Manufacturing Technologies (PTKA-PFT). The authors wish to acknowledge the Federal Ministry and the Project Holder for their support. We also wish to acknowledge our gratitude and appreciation to all the WInD project partners for their contribution during the development of various ideas and concepts presented in this paper.
19,701
[ "991574", "996237", "1002007", "991575" ]
[ "303510", "303510", "303510", "303510" ]
01470689
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470689/file/978-3-642-40361-3_90_Chapter.pdf
Giovanni Davoli email: [email protected] Peter Nielsen Gabriele Pattarozzi Riccardo Melloni Practical considerations about error analysis for discrete event simulation model Keywords: discrete event, simulation model, error analysis, stochastic model The purpose of making efficient and flexible manufacturing systems is often related to the possibility to analyze the system considering at the same time a wide number of parameters and their interactions. Simulation models are proved to be useful to support and drive company management in improving the performances of production and logistic systems. However, to achieve the expected results, a detailed model of the production and logistic system is needed as well as a structured error analysis to guarantee results reliability. The aim of this paper is to give some practical guide lines in order to drive the error analysis for discrete event stochastic simulation model that are widely used to study production and logistic system. Introduction Stochastic, discrete events, simulation models are widely used to study production and logistic system. Apart from the development, one of the main problem of this approach is to perform the error analysis on the outputs of the simulation model. Simulation experiments are classified as either terminating or non-terminating as far as the goal of the simulation is concerned (Law and Kelton,2000), (Fishman,2000). If we limit our interests on non-terminating simulation, the error analysis can be split into two different parts. The first part consists of individuating the initial transient period and the confidence interval of the outputs. The second part consists of estimating how the transient period and the outputs confidence interval varies when the initial model scenario is changed. The first part of the problem is widely studied, [START_REF] Kelton | A new approach for dealing with the startup problem in discrete event simulation[END_REF][START_REF] Kelton | Random initialization methods in simulation[END_REF], [START_REF] Schruben | Detecting initialization bias in simulation output[END_REF][START_REF] Schruben | Optimal test for initialization bias in simulation output[END_REF], [START_REF] Welch | A graphical approach to the initial transient problem in steady-state simulations[END_REF], Vassilacopoulos (1989) [START_REF] White | An effective truncation heuristic for bias reduction in simulation output[END_REF], and many methods are provided to determinate the transient period often related to output stability, that can be quantified in different ways. Between the proposed techniques Mean Squared Pure Error method, [START_REF] Mosca | Teoria degli esperimenti e simulazione. Quaderni di gestione degli impianti industriali[END_REF][START_REF] Mosca | Integrated management of a bishuttle FMS using discrete/stochastic simulator[END_REF], should be reminded as a practical method useful to determinate both transient period and confidence interval. On the other hand the second part of error analysis problem is not commonly addressed directly as reported in the recent work of Sandikc (2006) that tries to fill the gap for the initial transient period for simulation model addressing production lines. The variance of outputs confidence interval between different scenario is often faced with the hypothesis that it is normally distributed around a central value used in the reference scenario according with the basic theory of statistics (Box et al. 2013). 
But in many practical cases there is no evidence that this hypothesis is correct and, moreover, the significance of the central value for the reference scenario is lost. In fact, some recent simulation handbooks [START_REF] Chung | Simulation Modeling Handbook -A Pratical Approach[END_REF] advise quantifying the confidence interval for every simulated scenario. Purpose The aim of this paper is to give some practical guidelines to drive the error analysis for discrete event stochastic simulation models. The paper focuses on how the confidence interval varies when the simulated scenario is varied. Nowadays, in many practical applications, the computational power is large enough to perform "long" simulation runs that are sure to exceed the initial transient period. Much more important is to determine the confidence interval of the outputs in the different simulated scenarios, because overestimating or underestimating these confidence intervals can drive analysts towards a wrong interpretation of the results. Methodology To address the aim of the paper a quite simple discrete event simulation model is considered and the MSPE (1) is used to estimate the outputs' confidence interval. The simulations are then performed according to different scenarios and the variance of the confidence interval is studied for the different outputs. For n replications of a scenario, with y_i the output of replication i and \bar{y} the mean over the replications, $MSPE = \sum_{i=1}^{n} (y_i - \bar{y})^2 / (n-1)$ (1). This paper is grounded on a discrete event simulation model reproducing a re-order point logistic system, in particular a single-item fixed order quantity system also known as the Economic Order Quantity (EOQ) model. The EOQ model, first introduced by [START_REF] Harris | Howmany parts to make at once. Factory[END_REF] and developed with stochastic demand by [START_REF] Brown | Smoothing, Forecasting and Prediction of Discrete Time Series[END_REF] and [START_REF] Bather | A continuous time inventory model[END_REF], is a well-known and commonly used inventory control technique reported in a great variety of handbooks, for example Tersine (1988) and [START_REF] Ghiani | Introduction to logistics systems planning and control[END_REF]. The notation used in this paper is illustrated in Table 1. Findings The presented experiments are evaluated in terms of stability of the results and confidence interval width for all considered KPI. The simulations are conducted for a length of 1,000 days, which guarantees the stability of the outputs for all KPI. The initial transient period length varies with the parameter set, and the variance is more significant for certain KPI, as shown in Figure 1. To evaluate the significance of the confidence intervals, the results are presented for each KPI as the ratio between the half interval and the mean of that KPI. Confidence half intervals are calculated for a 95% level of significance according to (2), $h = t_{n-1,\,0.975} \sqrt{MSPE / n}$ (2). The ANOVA test reveals that the considered factors have a different impact on the confidence interval. Demand and lead time distribution have very strong effects in comparison with the other parameters, and even their interaction is important, as shown in Figure 2 for SL1. Conclusions The case study presented here can be used to make some practical considerations to support error analysis for discrete event simulation models. First, a "long" simulation period that passes the initial transient period is relatively easy to set, even if different behaviour has been observed for different KPI.
Second, the initial transient period and the related confidence interval depend in very different ways on the considered parameters. In particular, for numeric parameters, the hypothesis that the confidence interval is normally distributed around a central value calculated in the reference scenario is almost verified. On the other hand, when the studied parameters are not numerical, for example the distribution type as in the considered case study, the confidence interval must be re-calculated in each scenario, because the variance can be high and the interactions are almost unpredictable. So, in practice, the effort to check the confidence interval of a discrete event simulation should be made when the modified parameters are not simply numeric. This kind of analysis, thanks to current computational resources, is not prohibitive in terms of time for a rather simple model.
Figure 1. MSPE for KPI SL1 and SL3
Figure 2. Confidence half interval (min-max), in terms of %, for SL1 and SL4
Table 1. Symbols and definitions
Symbol | Unit | Definition
N | Day | Number of days of simulation
D_i | Unit/day | Mean demand per day in units
Lt | Day | Mean lead time in days
C_o | Euro/order | Single order cost in euro
C_s | Euro/(unit*year) | Stock cost in euro per unit per year
SS | Unit | Safety stocks in units
3.1 Simulation model The simulation model was developed according to the standard single-item EOQ model. A set of stochastic functions, developed in the SciLab environment, is used to generate the demand that drives the model. The simulation model was tested by running the standard EOQ model with normally distributed demand (where σ_d is the demand standard deviation) and normally distributed lead time (where σ_t is the lead time standard deviation). The parameter set used in the reference scenario is illustrated in Table 2.
Table 2. Used parameter set
Parameter | Set value
D_i | 1.000,00
σ_d | 300,00
Lt | 7,00
σ_t | 2,00
C_o | 1.000,00
C_s | 1,00
Imposed SL | 0,95
To evaluate model performance in terms of achieved service level, a set of 4 Key Performance Indicators (KPI) is defined. The used KPI are illustrated in Table 3.
Table 3. Used KPI
KPI | Unit | Definition
SL1 | % | 1 - number of stock-out days per day
SL2 | % | 1 - number of stock-out units per day
SL3 | % | 1 - number of stock-out units per day during lead time
SL4 | % | 1 - number of stock-out events during the lead time period
A full factorial experiment with three levels is used in this paper. Four factors and three levels give 3^4 = 81 combinations. For each combination 5 replications were conducted, for a total of 405 simulations. The three settings for the four factors are shown in Table 4.
Table 4. Factors setting (levels Low (-1), Mean (0), ...; the factor values were not recovered from the source)
Table 5. ANOVA test P-value results, significance codes: 0 '***' 0.001 '**' 0.01 '*' 0.05
Factors | SL1 | SL2 | SL3 | SL4
Demand distribution | 3,15E-14 *** | 4,99E-14 *** | 4,85E-15 *** | 0,0002638 ***
Lead time distribution | < 2.2e-16 *** | < 2.2e-16 *** | < 2.2e-16 *** | 0,3792613
Safety Stocks | 0,004364 ** | 0,013121 * | 0,0001933 *** | 8,62E-09 ***
Ratio Co/Cs | 0,132156 | 0,126539 | < 2.2e-16 *** | 3,49E-07 ***
Demand dist.: Lead time dist. | < 2.2e-16 *** | < 2.2e-16 *** | 4,03E-16 *** | 0,1436708
Demand dist.: Safety Stocks | 0,639589 | 0,952472 | 0,8427638 | 0,9926262
Demand dist.: Ratio Co/Cs | 0,016213 * | 0,031195 * | 0,0091386 ** | 0,0912483
Lead time dist.: Safety Stocks | 0,570304 | 0,801933 | 0,0424879 * | 0,2119648
Lead time dist.: Ratio Co/Cs | 0,025063 * | 0,009775 ** | < 2.2e-16 *** | 0,1682513
Safety Stocks: Ratio Co/Cs | 0,92423 | 0,918936 | 0,9432251 | 0,8817591
Limitation and further work The number of replications for each scenario provided in the DOE is fixed; a deeper study of this aspect should be carried out.
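To make the replication-based error analysis reproducible, a small sketch is given below. It is not the authors' SciLab code; it is an illustrative Python version, driven by an invented output series rather than the paper's EOQ model, of the two quantities used here: the MSPE across replications and the 95% confidence half-interval of a KPI using the standard t-based formula.

```python
import math
import random
from statistics import mean

random.seed(42)

def run_replication(days=1000):
    """Stand-in for one simulation replication: returns a daily service-level trace.
    The real model is an EOQ re-order point simulation; random values are used here
    purely to demonstrate the error analysis, not to reproduce the paper's results."""
    return [min(1.0, max(0.0, random.gauss(0.95, 0.02))) for _ in range(days)]

def mspe(values):
    """Mean squared pure error of the replication outputs around their mean."""
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

n = 5                                            # replications per scenario, as in the DOE
kpi_per_replication = [mean(run_replication()) for _ in range(n)]

s2 = mspe(kpi_per_replication)
# 95 % half-interval; a small lookup of two-sided t quantiles keeps the sketch dependency-free
# (t with n-1 = 4 degrees of freedom is about 2.776).
t_quantile = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}[n - 1]
half_interval = t_quantile * math.sqrt(s2 / n)

print(f"KPI mean = {mean(kpi_per_replication):.4f}")
print(f"95% half-interval = {half_interval:.4f} "
      f"({100 * half_interval / mean(kpi_per_replication):.2f}% of the mean)")
```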
10,822
[ "1002000" ]
[ "203954", "300821", "203954", "203954" ]
01470690
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470690/file/978-3-642-40361-3_9_Chapter.pdf
Vesna Mandic email: [email protected] Radomir Radisa email: [email protected] Vladan Lukovic email: [email protected] Milan Curcic Integrated model-based manufacturing for rapid product and process development Keywords: model-based manufacturing, virtual engineering, rapid prototyping, virtual manufacturing The paper presents integrative model-based approach in application of virtual engineering technologies in rapid product and process design and manufacturing. This has resulted in integration of so called CA-technologies and Virtual Reality in product design and FE numerical simulations and optimization of production processes, as digital prototyping of product and processes, from one side, and rapid prototyping techniques as physical prototyping, on the other side. Reverse engineering and coordinate metrology have been also applied in re-engineering of sheet metal forming process of existing product, with aim at generation of initial digital information about product and final quality control on multi-sensor coordinate measurement machine. Introduction Model-based manufacturing implies technological integration of CA technologies (CAD/CAM/CAE) in product development with VM technologies (Virtual Manufacturing) for modelling of manufacturing processes and application of rapid technologies (RP/RT/RM) for testing and validation purposes. It results in 3D digital model of a product/tool, but also in the virtual model of manufacturing processes in computer environment and physical prototypes of components and assemblies. VM is based on nonlinear finite element analysis and it enables optimisation of key factors of production for validation of different concepts of manufacturing processes and optimization of e relevant parameters for shop within the whole set of "what if" scenarios. In a word, a capability to "manufacture in the computer" is so powerful tool which reduces the errors, cuts the costs and shortens the time of design, because all modifications are made before the actual manufacturing process. Besides CAD modelling, 3D model of a product/tool can be also rapidly generated in digital form using reverse engineering, remodelled and exported to one of the systems for rapid prototyping (RP), rapid tooling (RT) or rapid manufacturing (RM). Virtual and rapid prototypes obtained in this way can be used for testing the functionality of product or assembly and different concepts in the early stage of design without expensive and long-term trial-and-error attempts in traditional design and production. Virtual manufacturing also uses virtual reality (VR) as advanced technology for 3D presentation of model's structure, composition and behaviour as if it were physically manufactured. Large number of papers presents the most recent investigations and achievements in the area of virtual and rapid product and processes development, realized in the integrated model-based system, for modelling, simulation, optimization, control and verification of the real production systems and designed products [START_REF] Mandic | Virtual Engineering[END_REF][START_REF] Ding | An integrated manufacturing system for rapid tooling based on rapid prototyping[END_REF][START_REF] Yan | Rapid Prototyping and Manufacturing Technology: Principle, Representative Technics, Applications, and Development Trends[END_REF]. This paper starts with description of the model-based manufacturing system components, i.e. technologies of virtual engineering, which are being applied in it. 
Through the case study is presented the proposed model-based manufacturing approach in the reengineering of product and sheet metal multi-stage forming technology for its manufacturing. The proposed integrated system represents feasible and useful tool in engineering design, not only for researchers but for industrial engineers, too. Components of model-based manufacturing and its integration Model-based manufacturing technologies are integrating engineering and manufacturing activities, using virtual models and simulations instead of real objects and operations. That is some kind of "digital tool" for simulation and optimization of production, through models of products and processes developed in the virtual environment, with advanced possibilities for rapid prototyping and rapid manufacturing, presentation in 3D environment, collaborative functions for efficient communication of teams, even the remote ones, with reliable storage of all the electronic data, which describe the product and processes for its manufacturing, servicing and sale. In Fig. 1 are presented virtual engineering components and its interactions applied in modelbased manufacturing approach, where the central position belongs to virtual model of product and manufacturing processes, namely their complete description and all the generated 3D digital data within the product life span, the so-called Digital mock up. Virtual prototypes are the inevitable part of the new product development, which enable visualization of the product, investigation of its functionality and exploitation characteristics before the manufacturing itself, estimate of process parameters influence on the product characteristics in its conceptual design. Contemporary CAD/CAM/CAE systems are powerful tools that can simulate the complete life cycle of a product, from the conceptual to the parametric design, testing, assembling, maintenance and even sale. Possibilities of the automatic generating of the NC code and simulation of the tool motion, selection of strategies and tolerances checking, are especially important in the tool and parts manufacturing on the CNC machines within CAM technologies. Also, in the modern CA tools, the modules are available for automatic design of the tools' engraving based on the product model, in processes of the injecting moulding of plastics, forging, sheet metal forming and others. Reverse engineering (RE) is a process of digitalization of the existing part, assembly or the whole product, by precise measuring or scanning. Application of this technology is especially useful when the electronic models of technical documentation are not available. The two phases are distinctive within the RE process: the first one which consists of the data digitalizing and the second one, within which the 3D modelling of the object is done, based on the acquired data. Output from the first phase of the RE process represents the digital description of the object in the three-dimensional space, which is called the point cloud. Fig. 1. Virtual engineering system components and their interconnections [4] The Rapid prototyping (RP) technologies, through the physical model of a product/tool, enable an analysis of the product functionality within the assembly, checking of design, ergonomic analysis and other functional testing. The RP appeared as a key enabling technology, whose application exhibited reduction of the lead time for about 60 % with respect to the traditional way. 
The trend of reducing the product development time in RP caused appearance of the Rapid Tooling (RT). All together, they make the integrated rapid approach RPM (Rapid Prototyping/Manufacturing). Natural continuation of the 3D computer graphics are the new Virtual Reality (VR) technologies with advanced input-output devices. Through the VR technology one generates synthetic, namely virtual environment in which is enabled the threedimensional presentation of the product, tool, process in the real time, in the real conditions, with interaction with the user. Its application is especially significant in the product detailed design phase, virtual mounting of assemblies, or in checking characteristics of the complex products in the automobile and aerospace industries. Application of Virtual manufacturing (VM), based on non-linear FE simulations, is a well verified and extremely useful tool for prediction of problems in manufacturing. Since the virtual models of processes are very flexible they enable investigation of design changes influences, both the tool layouts and the process parameters, on the product quality and manufacturing costs. Optimal choice of relevant production parameters has positive consequences on reducing the time-to-market, costs of manufacturing, material and tools, as well as increase of the final product quality. It is known that metrology is the integral part of the production processes, and with development of the systems for digitalization of geometry and objects, which are also used in the RE technologies, it has a significant place in the early phases of the product design and verification of the design solutions alternatives. Possibilities of modern metrological systems are to the greatest extent supported by the powerful software which control data acquisition, its processing up to automatized estimate of the measurement uncertainty. Additionally, CAD on-line and CAD off-line functions enable preparation of programs for measurement based on its virtual model. One of the basic problems in manufacturing is how to integrate engineering and production activities, considering that integration has to be based on interaction between designers, constructors, technologists, suppliers and buyers, throughout the product's life cycle. The integrated solution provides for unified environment for modelling, analysis and simulation of products and manufacturing processes and also prevents loss of information and electronic data, which often happens in their transfer. Moreover, virtual environment offers designers and researchers visualization of products and their better understanding, leading to improving of quality, reducing the lead time, securing the design solution which is the right one, without the need for later expensive redesign. Case study The main objective of the presented case study is to present, on the arbitrary chosen product, i.e. product component, the integrated model-based approach in the reengineering of manufacturing processes in sheet metal multi-stage forming and verification of the proposed tool design by application of the virtual and physical prototypes [START_REF] Mandić | Integrated virtual engineering approach for product and process development[END_REF]. The handle made of the sheet, which is used in manufacturing different type of kitchenware, is obtained by processes of blanking, punching, deep drawing and bending. 
The last operation of bending and closing the handle could be unstable, depending on the shape of the blank and previous operation of deep drawing/bending and additionally caused by thin sheet anisotropy. The applied re-engineering approach (Fig. 2) comprises the following technologies: • Reverse Engineering (CMM-optical&laser sensors) -for scanning of blank shape and free surfaces of handle The finished part and the blank are scanned at the multi-sensor coordinate measurement machine WERTH VideoCheck IP 250, which is equipped with three sensors: optical, laser and fiber sensor. Since the blank is a planar figure the optical scanning of closed contour "2D" was done and as the output was obtained the ASCII file The option that was chosen there was backlighting when light that illuminates the workpiece comes from below, thus the contour edges are visible on the video screen as a shadow. In Fig. 3 is shown the blank on the CMM table and corresponding display of the scanning results on the screen. The point cloud in the ASCII format was imported into the Digitzed shape editor. The contour line is used in the Part-design for obtaining the 3D model of a blank with defined sheet thickness. Scanning of the finished part was done by use of the optical and laser sensors in the 3D scanning option. By optical scanning the contour shape was registered with the autofocus option, as presented in Fig. 4, while the top surface of the handle was scanned by the laser sensor. On the portion of the handle with the variable crosssection and complex surface, the laser scan lines were registered at a distance of 0.75 mm from each other, while at the flat part of the handle 3D scanning of the object was done by lines separated from each other for 20 mm. As in the previous case, the results of both scans was exported as the ASCII file, later imported into CATIA through the Digitized shape editor. In the Generative shape design are imported scan lines used for modelling the cross sectional surfaces, by what was generated the whole contact surface of the top part of the tool for the second operation. The generated surface was used for modelling the upper surface of the mandrel (Fig. 5). The tool for the second forming operation consists of the upper die, mandrel and the supporting plate. Finite element simulations of both operations were performed by using commercial software Simufact.forming, as a special purpose process simulation solution based on MSC.Marc technology. Non-linear finite element approach was used with 3D solid elements (HEX), optimized for sheet metal forming using a "2½ D sheet mesher -Sheetmesh". In Fig. 6 is presented a blank on which was initially formed the FE mesh (element size 0.7 mm), virtual assembly for the first operation, the formed workpiece after the first operation with the FE mesh, virtual assembly for the second forming operation and the virtual model of a handle. The flow stress curve was determined by tensile test, defined by equation In the integrated environment the user can analyze processes, systems, products on relation virtual-physical-virtual, where the virtual model of the product is imported into the VR system for the 3D display and interaction with the user. The virtual model of a handle can be analyzed in more details in the VR environment. For those needs a VR application was developed by use of the following software and hardware com-ponents: 1)Wizard VR program, 2) 5DT Data Glove, 3) Wintracker, magnetic 6DOF tracking device. 
The screens from the VR application are shown in Figure 11. In such a prepared application it is possible to import other 3D objects modelled in the CAD system or exported from the various VM systems in the form of VRML files. Conclusion In this paper the components of the model-based integrated system are presented, which generates and/or uses the virtual/rapid prototypes of products and processes, whose analysis and verification are possible both in the physical and in the virtual sense. Each component of the system has its advantages and disadvantages; the integrated approach, which assumes their complementary application, therefore becomes a powerful tool for designers and researchers. Through the presented case study, on the example of the process re-engineering of making the handle from sheet metal, the advantages and possibilities of VE technology integration were demonstrated through the application of the CAD/CAM/CAE, VM, RP/RM and VR techniques. It was shown that, owing to the development of IT technologies and of software and hardware components, engineering design and development, as well as the other phases of the product life cycle, can be very successfully realized, with respect to quality, costs and time, by application of the virtual/rapid prototyping/manufacturing technologies of virtual engineering. Fig. 2. Model-based engineering design and prototyping. Production processes for manufacturing the handle contain the following operations: 1) Blanking and punching 2) Two-angle bending and deep drawing 3) Bending - the final operation in which the final closure of the top surface of the handle is obtained. Fig. 5. Transforming scanned lines into CAD surfaces and 3D models (upper die and mandrel) Fig. 6. Fig. 7. Effective stress - 1st stage Fig. 8.
No matter how complete the numerical models of processes and products obtained by virtual manufacturing are, such models still need to be transformed into RP models in order to perform the final verification of dimensions and fit. The virtual model of the handle obtained by the FE simulation was exported as an STL file (Fig. 8a) and used to build a plastic prototype with PolyJet technology. Fig. 9 presents the RP model of the handle, which, besides visual control of the surfaces, was used for precise measurement of the model on the CMM. The measuring strategy was identical to that used for the real part. The positions of the cross-sections used to compare the form and dimensions of the real part with the RP model, and indirectly with the FE model, are shown in Fig. 10; the graph in the same figure compares the scanned lines for cross-section 4.
Fig. 9. Rapid prototyping from the FE simulation result and control measurement on the CMM
Fig. 11. Virtual reality application
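The cross-section comparison described above can be scripted outside the CMM software as a simple deviation check between two measured section polylines. The sketch below is illustrative: the file names and the "x y" ASCII layout are assumptions, and it presumes the two sections are already aligned in a common coordinate system, as they would be after CMM alignment.

# Illustrative sketch (not the CMM software's own comparison): resample two
# measured cross-section polylines by arc length and report their deviation.
import numpy as np

def resample_by_arclength(poly, n=200):
    seg = np.linalg.norm(np.diff(poly, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    x = np.interp(t, s, poly[:, 0])
    y = np.interp(t, s, poly[:, 1])
    return np.column_stack([x, y])

real = resample_by_arclength(np.loadtxt("section4_real.asc"))
proto = resample_by_arclength(np.loadtxt("section4_rp.asc"))

dev = np.linalg.norm(real - proto, axis=1)
print(f"mean deviation = {dev.mean():.3f} mm, max deviation = {dev.max():.3f} mm")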
18,551
[ "1002008", "1002009", "1002010", "1002011" ]
[ "468182", "485916", "468182", "468182" ]
01470699
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470699/file/978-3-642-40361-3_30_Chapter.pdf
Victor Emmanuel De Oliveira Gomes email: [email protected] Durval J. De Barba email: [email protected] Jefferson De Oliveira Gomes email: [email protected] Karl-Heinrich Grote email: [email protected] Christiane Beyer
Sustainable Layout Planning Requirements by Integration of Discrete Event Simulation Analysis (DES) with Life Cycle Assessment (LCA)
Keywords: Discrete Event Simulation, Life Cycle Assessment, layout planning
Sustainable Layout Planning Requirements by Integration of Discrete Event Simulation Analysis (DES) with Life Cycle Assessment (LCA)
1 Introduction
A proper evaluation of manufacturing is a fundamental step in decision making for layout planning. In such cases, mathematical tools such as Discrete Event Simulation (DES) have been used to identify waste on the shop floor and to support cost analyses for manufacturing optimization [START_REF] Standridge | Why Lean Needs Simulation[END_REF]. One of the advantages of applying DES in a corporate environment is its capability to include the impact of randomness in a system. The dynamics and non-deterministic nature of the parameters rule out static tools such as spreadsheets for solving many line design problems. Furthermore, all commercial simulation software provides detailed animation capabilities. Animating the manufacturing process and flow helps engineers visually detect problems or bottlenecks and test alternative line designs. For these reasons DES may be applied to generate requirements and specifications for sustainable manufacturing systems. However, the analysis results obtained with DES alone are not sufficient for a joint assessment of impacts on the three dimensions of sustainability [START_REF] Joschko | Combination of Job Oriented Simulation with Ecological Material Flow Analysis as Integrated Analysis Tool for Business Production Processes[END_REF]. There are distinct tools and techniques to analyze and design environmentally sustainable manufacturing systems; in most cases they consist of cost analyses integrated with pollutant emission and energy efficiency analyses [START_REF] Helu | Evaluating Trade-Offs Between Sustainability, Performance, and Cost of Green Machining Technologies[END_REF]. One of the biggest challenges in new production system projects is obtaining the data embedded in the typical production analyses (production capacity, material flow, transport, workstation occupation rate, etc.) for the environmental impact analysis. Life Cycle Assessment (LCA) is a tool widely used in academia and by corporations to calculate pollutant emission rates. Through an LCA study it is possible to develop a systematic analysis of the environmental consequences associated with products over their life cycle, which improves decision making in areas such as innovation, regulation (industrial, environmental), strategies and policies. This work discusses the combined use of DES and LCA to analyze production resource utilization in manufacturing systems. The combination of DES with LCA produces a powerful tool for analyzing cause and effect across scenarios in which time, resources, place and the randomness of input variables affect the outcome of sustainable manufacturing design.
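To make the DES side of this combination concrete, here is a minimal discrete-event sketch in SimPy. It is not the Plant Simulation (Tecnomatix) model used in the paper; the cycle times, number of cells, shift length and forklift counts are placeholder assumptions chosen only to show the mechanics of such a scenario comparison.

# Minimal discrete-event sketch of a sequencing supermarket feeding dispatch
# via shared forklifts. All numeric parameters are illustrative placeholders.
import random
import simpy

SHIFT_MIN = 8 * 60          # one shift, in minutes
SEQUENCE_TIME = (4.0, 6.0)  # min per container at a sequencing cell (uniform)
TRANSPORT_TIME = 2.5        # min per forklift round trip to dispatch

def sequencing_cell(env, name, forklifts, dispatched):
    while True:
        # Operators sequence one container of door panels.
        yield env.timeout(random.uniform(*SEQUENCE_TIME))
        # A forklift moves the sequenced container to the dispatch area.
        with forklifts.request() as req:
            yield req
            yield env.timeout(TRANSPORT_TIME)
        dispatched.append((name, env.now))

def run_scenario(n_cells=6, n_forklifts=2, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    forklifts = simpy.Resource(env, capacity=n_forklifts)
    dispatched = []
    for i in range(n_cells):
        env.process(sequencing_cell(env, f"cell{i+1}", forklifts, dispatched))
    env.run(until=SHIFT_MIN)
    return len(dispatched)

if __name__ == "__main__":
    for forklifts in (1, 2, 3):
        total = run_scenario(n_forklifts=forklifts)
        print(f"{forklifts} forklift(s): {total} containers dispatched per shift")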
This joint application establishes a dynamic environment for sustainable production system assessment and remains a largely unexplored area addressed by few research publications. As a contribution to this discussion, a case study was conducted to analyze this joint use in decision making for purchasing forklifts under sustainability premises.
Materials and Methods - Case Study
The chosen problem was a sequencing process layout for automotive door panels, and it involved two important criteria of an industrial layout plan: transport performance and space. Reducing transport cost can improve the efficiency of the product flow; it concerns the design of the layout and the accommodation of people, machines and activities of a system or enterprise within a physical spatial environment. Inefficient space utilization may increase costs and lead to a competitive disadvantage in the market. With a lower space demand in the planning phase, the company can either save rent or use the saved space for further development. The system was a supermarket where automotive door panels, sequenced by operators according to customer orders, were transported by forklift trucks to the dispatch sector. Each sequencing cell had twelve stacked containers holding panels out of sequence from the assembly line, arranged in six pairs, and two stacked containers with already sequenced panels. The production system goal was to deliver a minimum of 380 containers daily to the dispatch area, with the minimum number of operators and forklifts and reduced space utilization. From these premises, models with a pre-established configuration were simulated with the DES software Plant Simulation (Tecnomatix™). One such model is illustrated in Figure 1. For each proposed scenario, the cost of forklift acquisition and personnel (labor/salary cost plus incidental wage costs, e.g. sickness, insurance fund, pension insurance fund) was considered. It is known that the usage phase of a forklift is among the greatest environmental impact factors of its life cycle [START_REF] Suzuki | LCA Activities of Toyota Industries Corporation: Case Study[END_REF], and this impact is directly related to fuel consumption. Consequently, the main factors influencing the choice of forklift for a given layout must be the type of fuel used and the forklift's performance. For this reason an LCA study was conducted in parallel with the simulation analyses to assess the use of forklifts with different energy sources. Fuel utilization data were acquired from different research sources. This is a problem in which several objectives must be achieved simultaneously. Among the methods developed for evaluation in a multi-criteria decision environment, PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) was employed to support the decision-making process. It is appropriate in situations where decision makers have previously identified the criteria and alternatives, bypassing the structuring phase and emphasizing the evaluation phase.
Results and Discussion
In the DES analysis, scenario S9 was chosen because it best fulfilled the production requirements and offered the best relation between cost and production with a smaller shop-floor area (Figure 2).
However, the scenario choice depends on the decision-making approach, which can consider the area, the fuel consumption analysis, the forecast of future demand and other considerations that together form a decision-making matrix. Based on [START_REF] Gaines | Full Fuel-Cycle Comparison of Forklift Propulsion Systems[END_REF][START_REF] Antes | Propane's Greenhouse Gas Emissions: A Comparative Analysis[END_REF][START_REF] Sullivan | A Review of Battery Life-Cycle Analysis: State of Knowledge and Critical Needs[END_REF][START_REF] Wang | Fuel Cycle Analysis of Conventional and Alternative Fuel Vehicles[END_REF], a life cycle assessment comparison of five fuel types used by forklifts was performed, listing their environmental impacts in terms of GWP (Global Warming Potential), displayed in metric tons of CO2 equivalent per unit per year (ETC - Electricity Trickle Charge; EFC - Electricity Fast Charge; LPG - Liquefied Propane Gas; CNG - Compressed Natural Gas; Diesel; and Gasoline) (Figure 3). End-use emissions are those generated during the use of the fuel, while upstream emissions are those generated during fuel production. The term Battery denotes the emissions generated in the manufacturing, recycling and disposal phases of forklift batteries. A forklift needs 3 batteries daily; a battery life of 50 months was estimated, as was the influence of charging speed on overall efficiency. A trickle (normal) charge has an efficiency of 95% against 72% for a quick charge [START_REF] Australia | GHG life cycle assessment of LPG in the Australian stationary energy market[END_REF]. Compared with CNG, Electricity Trickle Charge had the lowest impact (-26.4%), followed by Electricity Fast Charge (-12.5%). Gasoline had the most unsustainable impact (+22.2%). The difference between LPG and Diesel was very small, respectively -1.4% and +1.4%. Beyond this energy resource analysis, Diesel must also be considered a source of particulate matter (PM) harmful to human health, which affects forklift use in shop-floor areas and leads to additional investment in ventilation systems. Figure 4 presents the acquisition costs of the forklifts. Compared to CNG (normally used on shop floors), Diesel propulsion presented the lowest value (-6%), followed by electric (-5.6%). LPG and gasoline had much larger values, respectively +29.6% and +30.4%. The electric forklift analysis considered the cost of three batteries, including acquisition and maintenance over 50 months of use. The presented results rest on the authors' premise that environmental impact and cost analysis are the tasks to be considered in a decision-making process for sustainable shop-floor planning. However, the two tasks are not strongly dependent on each other, which characterizes the need for a multi-criteria decision analysis (MCDA) method. In the application of the PROMETHEE method, an MCDA tool, the authors weighted the environmental impact and cost analyses at 35% and 65% respectively. The method produced a ranking of alternatives from best to worst. Table 1 shows the resulting ordering in accordance with the standards proposed by the decision maker. According to the results, both electric battery options were the first choice; there was no significant difference between LPG, CNG and Diesel, and gasoline was the worst alternative.
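A PROMETHEE II net-flow ranking of this kind is short to implement. The sketch below is illustrative only: the decision matrix, criterion directions and weights are placeholders rather than the study's data, and the simple "usual" preference function is assumed.

# Sketch of PROMETHEE II net-flow ranking with numpy. All values below are
# illustrative placeholders, not the values used in the study.
import numpy as np

alternatives = ["ETC", "EFC", "LPG", "CNG", "Diesel", "Gasoline"]
# Columns: [GWP (t CO2-eq/yr), acquisition cost (arbitrary units)] -- both to minimize.
X = np.array([
    [3.0, 95.0],
    [3.6, 95.0],
    [4.0, 130.0],
    [4.1, 100.0],
    [4.2, 94.0],
    [5.0, 131.0],
])
weights = np.array([0.35, 0.65])
minimize = np.array([True, True])

def promethee_ii(X, weights, minimize):
    n = X.shape[0]
    pi = np.zeros((n, n))                      # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = X[a] - X[b]
            d = np.where(minimize, -d, d)      # flip sign for minimized criteria
            pref = (d > 0).astype(float)       # "usual" preference function
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)        # positive outranking flow
    phi_minus = pi.sum(axis=0) / (n - 1)       # negative outranking flow
    return phi_plus - phi_minus                # net flow

phi = promethee_ii(X, weights, minimize)
for name, score in sorted(zip(alternatives, phi), key=lambda t: -t[1]):
    print(f"{name:8s} net flow = {score:+.3f}")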
This analysis was made for countries with a high share of hydraulic power generation, such as Brazil and Norway. However, for a multinational company with global projects (where product demand, the areas occupied by production resources and the kind of energy generation all differ), the main idea is to apply local decisions. If the MCDA method is applied to a scenario in a country with a high share of thermal generation, such as the USA, the UK or Germany, the final results will differ from those for countries with a high share of hydraulic generation (Figure 5, based on [START_REF] Gaines | Full Fuel-Cycle Comparison of Forklift Propulsion Systems[END_REF][START_REF] Antes | Propane's Greenhouse Gas Emissions: A Comparative Analysis[END_REF][START_REF] Sullivan | A Review of Battery Life-Cycle Analysis: State of Knowledge and Critical Needs[END_REF][START_REF] Wang | Fuel Cycle Analysis of Conventional and Alternative Fuel Vehicles[END_REF]). For example, had the analysis been conducted in the USA (7.4% hydraulic generation), the results would differ from those in Brazil (83.2% hydraulic generation). Fuel costs also differ considerably between these countries, being higher in Brazil; electricity in Brazil costs approximately three times more than in the USA [10]. For an MCDA analysis with the same alternatives, criteria and weights, a different ordering can therefore be obtained: in the USA the preferred forklift type would be LPG, followed by CNG (Table 2). In this case there is a large difference between the battery options, as Fast Charge would be the worst choice (Figure 6).
Conclusions
The combined use of DES and LCA provided a dynamic evaluation process for analyzing production resources in a sustainable manufacturing system environment. First, simulation was used in a case study to define a production scenario combining a small area, a minimum amount of resources and efficient fuel consumption. Supported by a multi-criteria decision analysis tool (PROMETHEE II), the choice of forklifts was rationalized for distinct applications on shop floors located in countries with high shares of hydraulic and of thermal energy generation. The results confirm the importance of global projects with local solutions, even for layout planning.
Fig. 1 - Computational model and proposed scenarios
Fig. 2 - Proposed scenario analyses
Fig. 3 - Comparison of fuels used by forklifts according to GWP (Global Warming Potential): ETC - Electricity Trickle Charge; EFC - Electricity Fast Charge; LPG - Liquefied Propane Gas; CNG - Compressed Natural Gas; Diesel and Gasoline
Fig. 4 - Acquisition costs of forklifts
Fig. 5 - Comparison of fuels used by forklifts according to GWP in countries with a high percentage of thermal generation
Fig. 6 - Comparison of fuels used by forklifts according to dollars over 50 months in the USA [10]
Table 1 - PROMETHEE II Ranking
Alternatives            ETC      EFC      LPG      CNG      Diesel   Gasoline
PROMETHEE II Ranking    0.0724   0.0304   -0.0060  -0.0079  -0.0115  -0.0774
Score                   100      91.92    85.47    85.13    84.52    74.06
Table 2 - PROMETHEE II Ranking for a new analysis considering countries with a high percentage of thermal energy generation
Alternatives            LPG      CNG      Diesel   ETC      Gasoline EFC
PROMETHEE II Ranking    0.0318   0.0301   0.0236   0.0113   -0.0408  -0.0559
Score                   100      99.67    98.37    95.98    86.48    83.90
14,010
[ "1002012", "1002013", "992743", "1002014", "1002015" ]
[ "485920", "485921", "485920", "485920", "485921", "466782" ]
01470702
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01470702/file/978-3-642-40361-3_64_Chapter.pdf
M K Lim email: [email protected] H K Chan email: [email protected]
Optimize Resource Utilization at Multi-site Facilities with Agent Technology
Keywords: Multi-agent system, multi-site manufacturing, genetic algorithm
Introduction
Due to rapid market expansion, vigorous acquisitions and new facility development have taken place among manufacturing enterprises. Manufacturing has evolved from localized, single-site facilities to more globalised, multi-site facilities [START_REF] Chan | Solving distributed FMS scheduling problems subject to maintenance: genetic algorithms approach[END_REF]. Process planning and production scheduling are two manufacturing functions traditionally treated as separate operations; the majority of work focuses predominantly on single-site facilities, and the methodologies are not designed for multi-site optimization. In the literature, multi-site research specifically on integrated process planning and production scheduling is rather limited, hence the focus of this paper. Furthermore, the multi-agent system (MAS) is a popular and promising tool for solving complex problems such as those arising in multi-site research, yet its application in this area, particularly for integrated process planning and scheduling, is rare. Therefore, this paper investigates the performance and effectiveness of employing a MAS to optimize process planning and production scheduling within a multi-site manufacturing environment. The paper is organized as follows. Section 2 reviews the literature on process planning and production scheduling and the use of MAS in this domain. Section 3 defines the case study used, and Section 4 describes the agent model and the currency-based iterative agent bidding mechanism for multi-site resource optimization. Section 5 explains the genetic algorithm for currency tuning that facilitates agent bidding, followed by the simulation analysis in Section 6. Finally, conclusions are given in Section 7.
2 Literature Review
Process Planning and Production Scheduling
For efficient process planning and scheduling, it is necessary to assess process planning and scheduling decisions simultaneously [START_REF] Chan | Solving distributed FMS scheduling problems subject to maintenance: genetic algorithms approach[END_REF]. A number of approaches to integrated process planning and production scheduling can be found in the literature. They can be classified into non-linear process planning (NLPP), closed-loop process planning (CLPP) and distributed process planning (DTPP) [START_REF] Larsen | Simultaneous engineering within process and production planning[END_REF]. NLPP generates possible alternative plans for each part prior to actual shop floor production, and all the possible plans are ranked according to the process planning criteria. For efficient planning and scheduling it is vital to have feedback from the shop floor, and CLPP provides such feedback by taking the current shop floor status into account [START_REF] Khoshnevis | Integration of process planning and scheduling functions[END_REF]. DTPP works in parallel and in two phases. The first phase is pre-planning, in which the process planner analyses the operations to be carried out based on the product data.
The second phase is final planning, in which the operations are matched against the capability of the available resources. All of these research works apply predominantly to single-site facilities; very limited attention has been paid to optimizing process planning and production scheduling within a multi-site manufacturing environment. Most multi-site research has focused on more strategic issues, e.g. integrating production planning with distribution systems [START_REF] Alvarez | Multi-plant production scheduling in SMEs[END_REF]. Chung et al. [START_REF] Chung | Application of genetic approach for advanced planning in multi-factory environment[END_REF] applied a modified genetic algorithm for process planning and scheduling in a multi-factory environment. The aforementioned works pay limited attention to ways of optimizing resource utilization across multiple sites while taking the complexity of the multi-site environment into account. To assist decision making for multi-site optimization, MAS has been suggested by Wang and Chan [START_REF] Wang | Virtual organization for supply chain integration: Two cases in the textile and fashion retailing industry[END_REF] as a promising tool and as their future research direction.
Multi-Agent System (MAS)
A multi-agent system consists of a group of intelligent autonomous agents interacting with each other to achieve a global goal while also having their own objectives to fulfill [START_REF] Ferber | Multi-agent systems: an introduction to distributed artificial intelligence[END_REF]. The agents' intelligence and autonomous decision-making have attracted a large number of researchers to use them for solving complex problems in manufacturing domains [START_REF] Zhang | Agent-based workflow management for RFID-enabled real-time reconfigurable manufacturing[END_REF]. However, these works mainly address research domains related to single-site manufacturing facilities, and there is only a small number of research works using the agent concept in multi-site manufacturing facilities [START_REF] Wong | Dynamic shopfloor scheduling in multi-agent manufacturing systems[END_REF]. Based on the literature, most agent-based research focuses on strategic issues, e.g. improving communication and information sharing between plants, and less on operational issues such as integrating operational functions (e.g. process planning and production scheduling) to optimize the resources of multi-site facilities. This paper aims to address this gap.
Case Study
The make-to-order enterprise has two manufacturing facilities and recently acquired a new facility at a nearby location. All these facilities operate cellular manufacturing systems in which machines are grouped into cells based on the type of manufacturing processes offered. Each cell in each facility has different manufacturing attributes, such as machine capability (e.g. productivity, tolerance precision, quality, reliability) and availability, machine setup, production cost, shop floor layout, etc. When a customer places an order, the challenge is how to take advantage of owning these facilities by sharing the available resources, so as to secure the most efficient and cost-effective process plan and production schedule to fulfil the order. This process and scheduling plan should optimize the overall utilization of resources in the multi-site environment at the lowest possible (transportation and production) cost.
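A minimal data model for the entities of this case study might look like the sketch below. It is illustrative only (the paper's own system was implemented on the Java platform), and the class names and attribute choices are assumptions rather than the authors' design.

# Illustrative data model for the multi-site case study; names and attributes
# are assumptions, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Machine:
    machine_id: str
    capabilities: List[str]          # operation types the machine can perform
    hourly_cost: float               # production cost rate
    reliability: float               # e.g. 0.0-1.0
    booked_until: float = 0.0        # simple availability marker (time units)

@dataclass
class Cell:
    cell_id: str
    process_type: str
    machines: List[Machine] = field(default_factory=list)

@dataclass
class Facility:
    facility_id: str
    cells: List[Cell] = field(default_factory=list)
    transport_cost_to: Dict[str, float] = field(default_factory=dict)

@dataclass
class Operation:
    name: str
    required_capability: str
    duration: float

@dataclass
class Job:
    job_id: str
    operations: List[Operation]      # to be performed in sequence
    due_date: float

def capable_machines(facility: Facility, op: Operation) -> List[Machine]:
    """Machines in one facility that can technically perform an operation."""
    return [m for c in facility.cells for m in c.machines
            if op.required_capability in m.capabilities]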
In this study, we predominantly consider operations between the multi-site manufacturing facilities (mainly transportation) and within each facility (production-related), subject to certain constraints.
4 An Agent Model for Multi-Site Manufacturing Facilities
Agent Model
In this study, the key entities in the multi-site manufacturing environment are represented by agents (Fig. 1). An order agent represents an order placed by a customer. A job agent represents a job (i.e. a series of operations to produce a batch of ordered components) to be performed; its responsibility is to identify the most appropriate manufacturing resources to fulfill the order, and each facility is assigned one job agent. In each facility, each machine in the cells is represented by a machine agent, denoted MA(x, y) for the y-th machine in cell x. These machine agents interact with each other to find a group of machines within the same facility to produce the components. To explore the possibility of obtaining better machines, the agents extend their search to alternative machines in other facilities. A transportation agent represents the transportation available between the facilities; when WIP needs to be moved between facilities, the transportation agent provides the necessary information and determines whether the requested service is available.
Currency-Based Iterative Agent-Bidding Mechanism
A currency-based iterative agent bidding mechanism is proposed to perform dynamic integration of process planning and production scheduling in the multi-site environment. The bidding process begins when the order agent informs the job agents of a new order, and each job agent announces the job to all machine agents in its facility for bidding. The announcement includes the information on the machining operations required for the job and the virtual currency value assigned to each operation. Machine agents that have the technical capability to perform the first operation come forward to become 'leaders', whose responsibility is to search for other machines to perform the remaining operations. The leaders then announce the second operation to all machine agents within the same facility, including the leaders themselves. To offer better bids, the machine agents may reschedule and optimize their machine buffers by shifting jobs, provided the due dates of other operations are not violated; this aims to produce alternative (and hopefully better) bids. A machine agent decides whether to forward a bid for an operation based on whether the virtual profit earned exceeds a set threshold value, and by shifting jobs in its buffer it may put forward more than one bid as long as the virtual profits stay above the threshold. When the bids are received, the leader selects the winning bid that provides the shortest lead time. This process continues until the whole set of operations has been covered. The job agent then evaluates the resulting job plan for due date adherence; its total lead time T and total production cost C are obtained from the winning bids:
T = ∑_{i=1}^{n} T_i^{win},   C = ∑_{i=1}^{n} C_i^{win}   (3)
The job agent evaluates the bids with the aim of fulfilling the due date D while achieving the minimum total production cost C:
Min C = ∑_{i=1}^{n} C_i   subject to   T = ∑_{i=1}^{n} T_i ≤ D   (4)
If the due date is not fulfilled (i.e. T > D), or the cost is not considered minimal, the virtual currency allocated to the operations is tuned in the next iteration to look for a better plan. If the due date cannot be fulfilled after a predefined number of iterations, the leader searches across the different facilities to find an optimal plan, which is then forwarded to the order agent to decide. Based on Eq. 3, the order agent awards the job to the outstanding machine group, and this is conveyed to the machine agents through the respective job agent. The machine agents in the group then update their loading schedules.
Genetic Algorithm (GA) for Currency Value Tuning
In this study, a GA is used by the job agents to tune the currency values iteratively in order to search for better and better process plans and schedules. The GA process is as follows:
1. Gene coding: a population of chromosomes (POP_SIZE) and a number of generations (GEN) are determined. The genes in each chromosome represent the currency values allocated to the features of a component.
2. Evaluation of the fitness function (announcement to machine agents): the job agent evaluates the bids from the machine agents for the best solution at this iteration.
3. Selection of chromosomes ("select-all" strategy): all chromosomes have an equal opportunity to be selected for crossover and mutation.
4. Crossover, followed by mutation.
5. Re-announcement to machine agents: the offspring chromosomes in the new population (obtained through the steps above) are announced to the machine agents, and the chromosome whose bid carries the least production cost while satisfying the product due date is recorded as the best solution found at this iteration.
Steps 3-5 are repeated until the GEN number of generations is reached.
Simulation and Discussions
The MAS proposed in this study was implemented on the Java platform. Two orders, with details of the features to be produced in sequence, were placed at different times to produce a batch of parts each, namely PA and PB. The currency values were estimated from historical data. The simulation of the iterative mechanism commences with the order agent analyzing the process requirements and then announcing the job of producing PA to all job agents, letting them coordinate with the machine agents to find the best machines to perform the job; the process is repeated for PB. The best bid for part PA received by the order agent (considered near-optimum) has a production cost of 1465 units and a lead time of 942 units, put forward by the job agent of Facility B (Fig. 2). When the leader extended its search to the other facility, no better overall bid was obtained. The same GA parameters were used for the next order, placed to produce a batch of 80 units of part PB. The overall best bid received for part PB has a production cost of 2308 units and a lead time of 1448 units (Fig. 3). With a larger generation size the best bids received by the job agents were the same as those presented in the figures above, while the simulation time increased to approximately triple that of the 100-iteration runs; the bids received at each iteration were spread out as in the runs shown above.
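The job agent's evaluation of Eqs. (3)-(4) and the iterative currency tuning can be sketched as follows. This is not the authors' Java implementation: bid generation is a simplified placeholder, and a random perturbation stands in for the GA's crossover and mutation.

# Sketch of the job agent's evaluation loop (Eqs. 3-4) with a random currency
# perturbation standing in for the GA. All numbers are illustrative.
import random

def evaluate_plan(winning_bids, due_date):
    """Eq. (3): total lead time and cost of the winning bids; Eq. (4): feasibility."""
    T = sum(b["lead_time"] for b in winning_bids)
    C = sum(b["cost"] for b in winning_bids)
    return T, C, T <= due_date

def collect_bids(currencies, rng):
    """Placeholder for machine-agent bidding: one winning bid per operation.
    Higher currency loosely buys a faster (but costlier) machine slot."""
    bids = []
    for cur in currencies:
        speedup = min(0.5, cur / 200.0)
        bids.append({"lead_time": rng.uniform(80, 120) * (1.0 - speedup),
                     "cost": rng.uniform(100, 140) + 0.3 * cur})
    return bids

def tune_currencies(n_ops=8, due_date=900.0, iterations=100, seed=42):
    rng = random.Random(seed)
    currencies = [100.0] * n_ops
    best = None
    for _ in range(iterations):
        bids = collect_bids(currencies, rng)
        T, C, feasible = evaluate_plan(bids, due_date)
        if feasible and (best is None or C < best[1]):
            best = (T, C, list(currencies))
        # Perturb currency values (GA crossover/mutation stand-in).
        currencies = [max(0.0, c + rng.gauss(0.0, 10.0)) for c in currencies]
    return best

if __name__ == "__main__":
    result = tune_currencies()
    if result:
        print(f"best feasible plan: lead time {result[0]:.0f}, cost {result[1]:.0f}")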
Conclusions
This paper proposed a multi-agent system (MAS) to optimize the resources within a multi-site manufacturing environment, in particular through the integration of process planning and production scheduling. Each agent has individual objectives and a global goal to achieve. The global goal of the MAS is to find an optimized process plan and schedule (within a facility and across different facilities) that gives the lowest production cost while satisfying all requirements such as the due date and product quality; the machine agents' objective is to win the operation jobs and optimize their machine utilization, and the job agent is responsible for assigning the operations to the outstanding group of machines. The simulation results show that, as the currency is tuned at each iteration and the bidding process with it, different bids are constructed. This is aimed at increasing the opportunity to explore wider, non-elite solution spaces, so as to find better and better bids that optimize resource utilization.
Fig. 1. Agent model
Fig. 2. Bids received for part PA by the job agent at each GA iteration in Facility B
14,689
[ "879917" ]
[ "485923", "336687" ]
01470706
en
[ "info" ]
2024/03/04 23:41:46
2017
https://inria.hal.science/hal-01470706/file/cgf17_sketchsoup.pdf
R Arora I Darolia V P Namboodiri K Singh A Bousseau Spencer Nugent SketchSoup: Exploratory Ideation using Design Sketches Keywords: Descriptors (according to ACM CCS): I, 3, 3 [Computer Graphics]: Picture/Image Generation-Display Algorithms Input sketch images (b) Registration (top) and interpolation space (bottom) (c) Warping (top) and nonuniform blending (bottom) (d) Oversketching (e) Augmenting the interpolation space Figure 1: SketchSoup takes an unstructured set of sketches as input, along with a small number of correspondences (shown as red dots) (a), registers the sketches using an iterative match-warp algorithm harnessing matching consistency across images ((b), top), and embeds the sketches into a 2D interpolation space based on their shape differences ((b), bottom). Users can explore the interpolation space to generate novel sketches, which are generated by warping existing sketches into alignment((c), top), followed by spatially non-uniform blending ((c), bottom). These interpolated sketches can serve as underlay to inspire new concepts (d), which can in turn be integrated into the interpolation space to iteratively generate more designs (e). Introduction The early ideation stage of conceptual design is dominated by rapidly drawn freehand sketches whereby designers externalize their imagination into an evolving design (Figure 2). Such sketches are sparse and focus on capturing the essence of the shape being designed. They allow designers to quickly create and communicate a mental design space to peers and clients, by expressing model variations in the drawings and by showing different viewpoints; often leaving regions of the drawing ambiguous and subject to interpretation [START_REF] Eissen | Sketching: The Basics[END_REF]. Automatically turning this mental design space into a computationally explicit one is the important but largely unaddressed problem for which we present our solution -SketchSoup (Figure 1). Design exploration at the ideation stage has great significance as it can catch and avoid design problems that are costly later in the design pipeline. Yet, while post-ideation conceptual modeling has been well studied [NISA07, BBS08, SBSS12, XCS * 14], there is little research, barring tools for architectural drawing [DXS * 07, PKM * 11], that supports the earlier ideation stage of the design process. Conceptual modeling tools require the designer to follow drawing principles and create simplified vector sketches, slowing down a designer and distracting from the central ideation goal of exploring a design space quickly. Our sketches are dominated by many imperfect and often incomplete strokes -an artifact of the efficiency with which they are executed -and further convey the early and unfinished nature of the design. Vectorizing and lifting these sketches into 3D for the purpose of constructing a 2D design space is thus both difficult [OK11, BC13] and unnecessary. While crystallized product design concepts can be represented by a cleaned-up network of sketched strokes with coherent geometric properties [XCS * 14], this is not an assumption we can make of arbitrary drawings at the ideation stage. Moreover, ideation and early concept drawings are often executed on paper, as and when inspiration strikes, necessitating the need for handling raster inputs. However, image-based modeling and rendering techniques designed for natural [CDSHD13] or even cartoon-like images [RID10] are not directly applicable. 
Unlike natural images, ideation sketches are sparse, with only a small minority of pixels providing information for image matching. Such sketches are also noisy, and may represent inaccurate or inconsistent projections of a depicted shape, confounding view calibration methods such as structure-from-motion [SSS06]. Model variations depicted in sketches further requires estimating non-rigid transformations between the sketches. Our contribution is the first approach to enable the interactive exploration of the continuous design space induced by a collection of rough ideation sketches. Smooth view and shape transitions help understand the relationships between different design alternatives [START_REF] Heer | Animated transitions in statistical data graphics[END_REF], and the interpolated drawings can serve either as design alternatives themselves or as inspirational underlays for the creation of new concepts. From a technical standpoint, SketchSoup is enabled by the careful design of an end-to-end solution that accounts for properties of design sketches to successfully perform sketch filtering, registration, warping and blending. We employ a multi-image matching algorithm to identify the common features present in several sketches while neglecting the strokes that are specific to each sketch. Designers can optionally refine these matches. We then exploit this information to guide both image warping and image blending. In particular, we propose a novel image blending scheme that adjusts the contribution of the strokes according to the number of images where they appear: strokes that are present in many sketches are persistent throughout interpolation, while strokes that are only present in one sketch disappear quickly. This scheme reduces ghosting artifacts when creating novel sketches from existing ones. Finally, we embed the sketches in a 2D interpolation space using multi-dimensional scaling, where the relative distance between the sketches reflects their similarity as measured by their motion field magnitude. When dealing with multi-view sketches, motion due to shape variation is typically smaller in magnitude than that due to changes of viewpoints. The embedding thus provides a plausible estimate of relative camera positions, which gives designers the illusion of performing small rotations of the object in 3D. Related Work Our approach brings together the areas of sketch-based modeling and image-based rendering. We briefly discuss these two domains in the context of SketchSoup. Sketch-based modeling and interpolation. Sketch based modeling systems aim at creating consistent 3D models from drawings [OSSJ09]. Multi-view approaches adopt an iterative workflow where users progressively sketch strokes over existing or transient surfaces [NISA07, BBS08]. Single-view algorithms strive to estimate a complete 3D model from a clean vector drawing [XCS * 14]. We see our approach as a preliminary step for sketch-based modeling by allowing designers to explore a range of design concepts using rough unstructured preparatory drawings in 2D, well before attempting to model anything in 3D. Sketch-based 3D modelers also assume and construct a unique and definitive 3D shape representation for any sketch, while our method is meant to iteratively explore a collective of variations on a design concept, from which to select one or more to take forward along the design pipeline. Our approach is closer in spirit to drawing interpolation methods. In particular, Rivers et al. 
[RID10] describe a system to produce view interpolations from cartoon drawings that resemble ours. However, their method relies on vector drawings that are deformed by the user in each keyframe, which implicitly provides perfect correspondences. We also share the motivation of Baxter and Anjyo [BA06], who introduce a doodle space to create new drawings by interpolating a few examples. However, their system takes, as input, vector drawings executed using the same drawing sequence, which makes their stroke-matching algorithm inapplicable to the rough bitmap sketches we target. Similarly, the later system by Baxter et al. [START_REF] Baxter | N-way morphing for 2D Animation[END_REF] only registers the outer boundary of 2D shapes, while we also register inner strokes. Our goal is also related to the one of Shao et al. [SLZ * 13] who interpret sketches of objects composed of articulated parts. Here again, user intervention is necessary to model the object with simple geometric shapes. We instead strive to interpolate between 2D sketches without requiring any explicit 3D reconstruction. Our approach thus shares natural similarities with the problem of 2D cartoon registration and inbetweening [SDC09, WNS * 10, XWSY15], such as the concept of alternating between feature matching and shape-preserving regularization to warp one image onto another [SDC09]. However, while successive keyframes of cartoon animations have many features in common, we aim at registering design sketches that have similar content overall but different details at the stroke level. Finally, one of the applications enabled by SketchSoup is to generate morphed sketches that designers can use as underlays to draw new concept. This use of underlays as guidance is similar in spirit to the Shadow-Draw system [LZC11], although we aim at registering and morphing a small number of user-provided sketches rather than blending a large number of roughly aligned photographs. Image-based rendering and morphing. Image-based rendering methods vary wildly in terms of their target input complexity [LH96, CW93, CDSHD13]. Our method is most comparable to approaches which use implicit geometric information, that is, 2D correspondences between images [SD96]. Our work is also greatly inspired by the PhotoTourism system [SSS06] that provides interactive view interpolation as a way to explore unstructured photocollections of touristic sites. Similarly, our system offers a novel way to experience the design space captured by an unstructured set of sketches. However, while PhotoTourism exploits structure-frommotion to register photographs of a static scene, we face the challenge of registering line drawings that contain variations of viewpoints, shapes, and even styles. We thus share the motivation of Xu et al. [XWL * 08] and Averbuch-Elor et al. [START_REF] Averbuch-Elor H | Smooth image sequences for data-driven morphing[END_REF] who register and order multiple photographs of similar objects to create animations of animal motion and view and shape transitions respectively. However, while Xu et al. and Averbuch-Elor et al. select a subset of images to form a smooth 1D animation path, we seek to order all the provided drawings over a 2D interpolation space. In addition, they only register the outer boundaries of the objects, while our algorithm also builds correspondences between interior parts. 
Recent methods have improved on the classical morphing technique by computing dense correspondences using optical flow [MHM * 09], or by computing a half-way image for better alignment [LLN * 14]. However, these techniques rely on pixel neighborhoods to compute dense motion fields over natural images, which is poorly suited to sparse line drawings where many neighborhoods look alike. Our approach instead builds upon a recent multi-image matching algorithm [ZJLYE15], which we adapt to work well with sketch input. Registering Concept Sketches High quality morphing requires high quality correspondences between images. However, design sketches contain many sources of variations that make this requirement a challenge: they are drawn from different viewpoints, they represent slightly different shapes, and they are often polluted with decorative lines and hatching. Fortunately, all the sketches in a collection represent a similar object. The key to the success of our system is to leverage the redundancy offered by all the sketches to be robust to the variations specific to each sketch. In particular, we build on the concept of cycle consistency, as introduced in related work on shape and image matching [NBCW * 11, ZKP10, ZJLYE15]. This concept can be summarized as follow: if point p in sketch S i matches point q in sketch S j , and if point q in sketch S j matches point r in sketch S k , then by transitivity point p should also match point r. Cycle consistency offers a way to detect high quality matches, as well as to improve low quality matches by replacing them with better candidates through transitivity. Our approach follows the general methodology of the FlowWeb algorithm [ZJLYE15], which optimizes an initial set of matches by iterating three steps: 1. Cycle-consistency score. For each pair of images, for each match, count how many other images confirm the match as part of a consistent cycle. 2. Cycle-consistency improvement. For each pair of images, for each match, replace it with an alternative if it increases its cycleconsistency score. 3. Spatial propagation. For each pair of images, update the matches by propagating information from high quality matches to their low-quality spatial neighbors. We refer the interested reader to [ZJLYE15] for a detailed description of the FlowWeb algorithm. We now describe how we tailored this algorithm to the specifics of design sketches. Sketch pre-processing Hatching removal. Our goal is to build correspondences between the sketch lines that represent contours and surface discontinuities. However, these lines are often polluted by repetitive hatching lines that convey shading and shadows. As a pre-process, we blur out hatching lines by applying the rolling guidance filter [ZSXJ14], a filter that has been specifically designed to remove repetitive textures in images while preserving other details like contours. Figure 3 shows the effect of this filter on a sketch. Contour thinning and sampling. The original FlowWeb algorithm is designed to compute dense correspondence fields between natural images. However, concept sketches are predominantly composed of sparse contours rather than dense shading and texture areas. We thus designed our algorithm to build sparse correspondences between point samples distributed along the contours of the drawing. Since contours can have varying thickness and sketchyness, we first locate their centerline by applying a small blur followed by non-maximum suppression and hysteresis thresholding. 
This filtering is similar in spirit to the Canny edge detector, except that we process the image intensity rather than its gradient since the contours in the drawing already represent the edges we want to thin. We then distribute candidate point matches over the sketch by sampling the thinned contours. Our implementation generates point samples using a uniform grid of cell size 10 × 10 pixels. For each cell, we first try to find contour junctions by detecting Harris corners [START_REF] Harris | A combined corner and edge detector[END_REF]. For cells with no such junctions, a sample is positioned on the edge pixel closest to the center (if any). On the left, we show the resulting sampling on a typical sketch. This subsampling increases processing speed by allowing the later use of a warping mesh that is coarser than the image resolution. Nevertheless, alternative warping approaches based on dense regularization [SMW06, NNRS15] could alleviate the need for sub-sampling. Initializing the matches Given two sketches represented as sets of points distributed along the contours, our goal is now to find, for each point in one sketch, the most similar point in the other sketch. Similar to prior work [XWL * 08, CCT * 09], we use shape context [BMP02] as a local de-scriptor to measure similarity between points in different sketches. This descriptor encodes a log-polar histogram of the points around the point of interest. Shape contexts are intrinsically invariant to translations. They are easily made scale invariant by normalizing all radial distances by the mean distance between the n 2 point pairs in the shape. Moreover, the coarse histogram representation makes shape contexts robust against small geometric perturbations, occlusions, and outliers typical of concept sketches. Since shape contexts are histogram-based distributions, the cost of matching two points is computed with a χ 2 test. In order to take local appearance into account, we augment the shape context cost with a patch appearance cost computed as the Gaussian weighted intensity difference between small (9 × 9 pixels) patches around the two points. Following Belongie et al. [BMP02], we linearly combine the two costs, and then use a bipartite matching algorithm known as the Hungarian algorithm [Mun57] to compute the best matching between the two point sets. Further, to avoid strong distortions, we only keep spatially local matches by pruning out matches where the source and target points are farther than min(h, w)/10 pixels from each other, where h and w are the height and width of the sketch respectively. While we also experimented with other descriptors (SIFT, SSD) and found them less reliable than shape context, many alternatives exist such as recent deep-learned features. Computing and improving cycle consistency Given an initial set of matches between all pairs of sketches, we build on the FlowWeb algorithm to improve the matches by encouraging cycle consistency. Cycle-consistency score. We denote T pq i j the motion vector between point p in sketch S i and its matching point q in sketch S j . The cycle-consistency score C(T pq i j ) denotes the number of other sketches S k that confirm this match by transitivity, up to a small error tolerance ε C(T pq i j ) = card{S k ∈ {S i , S j } T pq i j -(T pr ik + T rq k j ) < ε, r ∈ S k }. (1) We fix ε to 0.02 × max(h, w) pixels in our implementation. Note that we only consider cycles formed by triplets of images. 
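Before the improvement and propagation steps, the 3-cycle score of Eq. (1) is compact enough to write out directly. The sketch below is illustrative and assumes a particular data layout (dictionaries of per-pair matches plus sample positions); it is not the paper's implementation.

# Minimal sketch of the 3-cycle consistency score of Eq. (1). Assumed layout:
# match[i][j] maps a sample index p in sketch i to its matched sample index q
# in sketch j, and pos[i][p] is the 2D position of that sample.
import numpy as np

def motion(pos, match, i, j, p):
    """Motion vector T_ij^{pq} from sample p of sketch i to its match in sketch j."""
    q = match[i][j][p]
    return pos[j][q] - pos[i][p], q

def cycle_consistency(pos, match, i, j, p, eps):
    """Number of other sketches k whose 3-cycle i -> k -> j agrees with the
    direct match i -> j at sample p, up to tolerance eps (Eq. 1)."""
    t_direct, _ = motion(pos, match, i, j, p)
    score = 0
    for k in match[i]:
        if k in (i, j):
            continue
        t_ik, r = motion(pos, match, i, k, p)          # i -> k
        if r not in match[k][j]:
            continue
        t_kj, _ = motion(pos, match, k, j, r)          # k -> j
        if np.linalg.norm(t_direct - (t_ik + t_kj)) < eps:
            score += 1
    return score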
While higher degree cycles could be considered, 3-cycles have been shown sufficient while remaining tractable [ZJLYE15]. Cycle-consistency improvement. Once each match has been assigned a consistency score, the second step of the FlowWeb algorithm is to improve the matches by replacing them with alternatives that have a higher score. These alternatives are found by transitivity. Given a match T pq i j , the algorithm considers all other sketches S k ∈ {S i , S j } and any alternative matching point q with motion vector T Propagation with shape-preserving warp The last step of the algorithm is to propagate information from the high-quality matches to their low-quality neighbors, where we identify high-quality matches as the ones with a consistency score greater than 2. The original FlowWeb algorithm works on dense, pixel-wise correspondences and relies on a consistency-weighted Gaussian filter to propagate motion vectors in image space. However, such a filtering approach does not impose any regularization on the resulting motion field, which in our experience can result in strong distortions. We thus propose to perform spatial propagation using an iterative matching/warping scheme similar to the as-rigidas-possible registration algorithm of Sýkora et al. [SDC09]. Consider a pair of sketches, we first embed the point samples of the first sketch into a triangular warping mesh. We then warp the mesh using the high quality matches as guidance, subject to a regularization term that seeks to preserve the shape of the mesh triangles. This warp aligns the first sketch to be closer to the second one. We finally use the warped sketch to update the Shape Context descriptor of each sample and to update the low-quality matches by running the Hungarian algorithm with these new descriptors, as described in Section 3.2. We repeat this process for 3 iterations. Various strategies for shape preserving warps have been described in image morphing and image-based rendering [ZCHM09, LGJA09, SDC09]. We follow the formulation of [LGJA09], which minimizes an energy functional composed of two terms: E = wcEc + wsEs. (2) The first term tries to satisfy sparse correspondence constraints, while the second term penalizes strong distortions. Note that while we rely on a triangle mesh to compute these two terms, alternative meshless methods [SMW06, NNRS15] could also be used to generate a dense regularized propagation. Correspondence constraints. In our context, we set the correspondence constraints as the set of matches P that have a consistency score greater or equal to 2. For each such match T pq i j ∈ P, we have a point p in sketch S i and a corresponding point q in sketch S j . The warp should, therefore, satisfy W i j (p) = q † . Let the triangle in which p is contained be formed of the vertices (u, v, w) and let α(p), β(p), γ(p) be the barycentric coordinates of p w.r.t the triangle. The least-squares energy term for the correspondence constraint is therefore Ec(p) = C(T pq i j ) (α(p)u, β(p)v, γ(p)w) -q 2 , (3) where we weight each match by its consistency score C(T pq i j ). Points with cyclic consistency less than 2 are weighted by their combined shape context and intensity score normalized to [0, 1], ensuring that such matches have a lower level of influence. We sum this energy term over all matches T pq i j ∈ P i . 
† W i j (p) and T pq i j both represent the motion vector at point p -while W i j (p) encodes the motion of sketch S i towards S j for all pixels, T pq i j encodes the motion of the sparse point set. Triangle shape constraints. Consider a mesh triangle t = (u, v, w) and attach a local orthogonal frame to it: {vu, R 90 (vu)}, where R 90 is a counterclockwise rotation by 90 degrees. Assume that u is the origin of the frame. Now, in the frame, v is simply (1, 0) and let w = (a, b). To preserve the shape of this triangle, we need to ensure that the transformation it goes through is as close as possible to a similarity transformation. Thus, we try to ensure that the local frame remains orthogonal and the coordinates of the vertices remain the same. The energy to express this constraint, with (u , v , w ) denoting the warped coordinates of the triangle, is Es(t) = w -(u + a(v -u ) + b(R 90 (v -u ))) 2 . ( 4 ) We sum this energy term over all triangles of the mesh. Energy minimization. All the energy terms are quadratic, and the system of equations is overdetermined. This results in a standard least-squares problem which we solve with QR decomposition. In our experiments, we set wc = 5 and ws = 6. User guidance The algorithm described above converges to high-quality matches for simple, similar sketches. However, real-world sketches contain many variations of shape, viewpoint and line distribution which can disturb automatic matching. We improve accuracy by injecting a few user-provided correspondences in the algorithm, typically 3 to 5 annotations per image. The user only needs to provide correspondences between the most representative sketch and other sketches. We use these to automatically assign cycle-consistent correspondences to all other sketch pairs using transitivity. We keep the user-provided correspondences fixed during all steps of the algorithm, and assign them the highest possible consistency score of N -2 (N is the number of input sketches) to ensure that they impact other matches during cycle-consistency improvement and spatial propagation. Blending Concept Sketches Once all sketches are registered, we interpolate them using warping and blending. While applications involving images with similar appearance and topology, such as image-based rendering [CD-SHD13] and natural image morphing [LLN * 14] are served reasonably well by uniform alpha-blending, blending concept sketches from disparate sources proves more challenging as misaligned strokes produce severe ghosting. We first describe how we compute a generalized warp function from one sketch to a combination of other sketches. We follow up with a description of our blending method, and the user interface to control it. All these computations are performed in real-time. Generalized warping function Consider the N sketches input to the system, S 1 , S 2 , . . . , S N . The registration algorithm generates a family of pairwise warp functions W = {W i j }, 1 ≤ i, j ≤ N, such that applying the function W i j on a pixel p ∈ S i moves it towards its matched position in S j . To move sketch S i in-between multiple sketches S j=1..N , we follow [START_REF] Wolberg | Polymorph: morphing among multiple images[END_REF] that computes a generalized warp function as a linear combination of the pairwise warps W i (p) = N ∑ j=1 c j W i j (p), where the contribution weights c 1 , c 2 , . . . , c N satisfy c i ∈ [0, 1], and ∑ c i = 1. The interpolated sketch is given by the application of W i on S i , denoted as W i • S i . 
We compute the generalized warp function for all the sketches in real time so that users can interactively explore the interpolation space and create arbitrary combinations of sketches.
Consistency weighted non-linear alpha blending
Analogous to the warping weights c_i which determine the shape of the interpolated sketch, one can define blending weights α_1, α_2, ..., α_N to combine the color information of the warped sketches. The appearance of the resulting interpolated image S is determined as S = ∑_{i=1}^{N} α_i (W_i • S_i). Existing work on image blending often chooses α_i to be the same as c_i. In order to reduce the ghosting caused by this simple function, we modify it to allow for non-uniform blending across the image space, and to vary α_i as a non-linear function of c_i. The idea is to follow linear alpha-blending for sketch contours that have good matches, but to quickly suppress contours that have poor matches as the contribution weight of their parent sketch decreases. We utilize the combination of two sigmoid-like functions to achieve this. We first define the matching confidence of the pixels sampled in Section 3.1 using their cyclic consistency,
conf(p ∈ S_i) = (1 / (N - 2)) ∑_{j=1}^{N} c_j × C(T_ij^{pq}),
where p is matched to q in S_j, and 1/(N - 2) is the normalization factor. This confidence score is then propagated to all other pixels via linear interpolation (see Figure 4a). Our blending function for a pixel p in S_i is defined as
α_i(p, c_i, k) = m_1 / (1 + exp(-a(p, k) × (c_i - 2/3))) + n_1   if i = argmax_j(c_j)
α_i(p, c_i, k) = m_2 / (1 + exp(-a(p, k) × (c_i - 2/3))) + n_2   otherwise,
where a(p, k) = (conf(p))^{-k}. The values m_1, n_1, m_2, and n_2 are fixed such that α_i(p, 0, k) = 0, α_i(p, 1, k) = 1, and both cases of the function evaluate to 2/3 at c_i = 2/3. The non-linearity of the blending function results in images with the highest contribution c_i = max_j(c_j) contributing more strongly to the final image appearance as compared to standard alpha blending. In addition, the non-uniformity of the function in image space ensures that well-matched regions smoothly transition into other images, while regions with poor matching "pop" in and out of existence based on which image has a high contribution. The parameter k ∈ [0, 5] is controlled by using a slider in our interface. This allows the user to choose between mimicking alpha blending at one extreme (k = 0), and drawing contours predominantly from the image contributing the most to the interpolated shape at the other extreme (k = 5). The resulting difference in the interpolated image can be observed in Figure 4b, where the poorly matched regions, such as those with texture but no contours, disappear first. Finally, we normalize the contrast of the interpolated image to avoid contrast oscillation during interactive navigation.
User interface
We present the user with a planar embedding of the sketches, which represents a 2D interpolation space. The embedding is computed using metric multi-dimensional scaling [CC00] on the average spatial distance between pairwise sketch correspondences. The average distance between any two sketches S_i and S_j is given by
d_ij = (1/2) [ ∑_{p∈S_i} ||W_ij(p) - p||_2 / |S_i| + ∑_{q∈S_j} ||W_ji(q) - q||_2 / |S_j| ].
The embedding places similar sketches close to each other on the plane. Users can also invert the embedding about either axis, or switch the two axes with each other, if they feel those variants represent a more natural orientation.
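The step from pairwise warp magnitudes to the 2D layout is a standard metric MDS computation. The sketch below is illustrative (not the authors' code) and assumes `warps[i][j]` holds an array of motion vectors W_ij(p) - p for the samples of sketch i; it uses scikit-learn's MDS on the precomputed distance matrix.

# Illustrative sketch: build the pairwise distance matrix d_ij from warp fields
# and embed the sketches in 2D with metric MDS.
import numpy as np
from sklearn.manifold import MDS

def distance_matrix(warps, n_sketches):
    D = np.zeros((n_sketches, n_sketches))
    for i in range(n_sketches):
        for j in range(i + 1, n_sketches):
            dij = np.linalg.norm(warps[i][j], axis=1).mean()
            dji = np.linalg.norm(warps[j][i], axis=1).mean()
            D[i, j] = D[j, i] = 0.5 * (dij + dji)
    return D

def embed_sketches(warps, n_sketches, seed=0):
    D = distance_matrix(warps, n_sketches)
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(D)   # (n_sketches, 2) layout of the sketches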
When the sketches are very similar in shape but differ in viewpoint, the arrangement gives an approximation of the 3D camera positions of the sketch viewpoints, as illustrated in Figure 11 (cars, planes). Exploration of the design space then gives the impression of 3D-like rotations around the concept, as shown in the accompanying video. For rendering, we compute a Delaunay triangulation of the embedding. The contribution of each sketch is determined by the triangle under the user's mouse pointer. The sketches corresponding to the three vertices of this triangle contribute according to the barycentric coordinates of the point, while all other sketches have zero contribution. We also provide an alternate interface in which a user can simply choose the contributions of the sketches by manipulating sliders, one for each sketch. This interface also allows the user to turn off the blending equation, and use color information from a single sketch of her choice. While the interpolation space helps understand the relationship between various sketches, the slider-based interface gives users full control on which sketches they want to combine. Using either interface, a user can save the current interpolation at any time and oversketch it using traditional or digital methods to initiate the next design iteration with an augmented design space. Evaluation We now compare our algorithmic components with alternative solutions designed for other types of images than sketches. We provide animated comparisons of morphing sequences as supplemental material. Figure 5 shows the matches produced by the original FlowWeb algorithm and by our adaptation. Since the original FlowWeb was designed to build dense correspondences between natural images, it also attempts to match points in the empty areas of the drawings. For the sake of visualization, we only show the matches where both points lie on a stroke, while we remove all other matches, which contain one or more point in an empty area. This visualization reveals that our algorithm obtains more stroke-to-stroke matches, and of higher quality than the matches found by the original algorithm. Source sketch Target sketch [ZJLYE15] Ours We also show the effect of applying a dense registration algorithm [GKT * 08] after aligning the sketches. This post-process was used with success by Sýkora et al. to refine the registration of cartoon images that mostly differed in pose but shared similar details. In contrast, the design sketches we target often contain very different pen strokes and as such cannot be aligned with sub-pixel accuracy. The resulting local distortions are especially noticeable on animated sequences since this post-process is applied on a perframe basis, which produces temporal artifacts (please refer to the accompanying video). In cases where sketches have minor shape differences, this post-process can be helpful (Figure 7), but still leads to loss of interactivity and temporal artifacts (see accompanying video). While their method does not explicitly support user guidance, we augmented it to take advantage of user specified correspondences, if available. Similar to our own algorithm, we enforce these correspondences across all iterations. Figure 6 also provides a comparison of our method with the recent halfway-domain morphing algorithm [LLN * 14] using the same user-provided correspondences. 
Similarly to the original FlowWeb algorithm, the halfway-domain morphing builds dense correspondences between the images, which often yields erroneous matches between stroke pixels and empty areas, as also illustrated in the accompanying video. In contrast, our method ensures that stroke samples match to other stroke samples, up to the shapepreserving regularization. Finally, all the above algorithms have been developed for matching and morphing pairs of images, and it is unclear how to generalize them to multiple images. Results We have applied our approach on a number of real-world design sketches. Figure 11 shows the planar embeddings we obtain for several sets of sketches, as well as some of the interpolations generated by our algorithm during interactive exploration. The accompanying video shows how the shape and view transitions produced by our morphing provide a vivid sense of continuous navigation within the design space. In the field of data visualization, such animated transitions have been shown to improve graphical perception of interrelated data as they convey transformations of objects, cause-andeffect relationships and are emotionally engaging [START_REF] Heer | Animated transitions in statistical data graphics[END_REF]. Figures 8 and9 illustrate two application scenarios where our interpolated sketches support the creation of new concepts. In Figure 8, a designer aligns two sketches of airplanes using our tool before selectively erasing parts of each sketch to create a new mixand-match airplane with more propellers. In Figure 9, a designer uses interpolated sketches as inspirational underlays to draw new shapes. Once created, these new drawings can be injected back in our algorithm to expand the design space for iterative design exploration. The accompanying video shows how a design student used this feature to design an iron. While informal, this user test confirmed that blended sketches form an effective guidance for design exploration. At the core, our algorithm relies on good quality shape context matches for sketch alignment. In cases where the difference in shapes of the drawn concepts is too high or the viewpoints are very far apart, our algorithm fails to produce acceptable results, as shown in Figure 10. Providing manual correspondences, or adding more drawings to the dataset, can alleviate this limitation by relating very different sketches via more similar ones. Conclusion We have adapted and reformulated a number of 2D image processing techniques to enable matching, warping and blending design sketch input. These ideas can impact other problems in the space of sketch-based interfaces and modeling. In the future, we also plan to explore the use of non-rigid 3D reconstruction algorithms [START_REF] Carreira | Reconstructing pascal voc[END_REF] to recover the 3D object depicted by multiple concept sketches along with the variations specific to each sketch. While there is a rich body of work in exploring collections of well-defined 3D shapes, there are few tools to support the unstructured and messy space of design ideation. SketchSoup thus fills a void, empowering designers with the ability to continuously explore and refine sparsely sampled design sketch spaces in 2D itself, at the same level of detail and sophistication as the input sketches. 
By registering and morphing unstructured sketch collections, automatically or with minimal user interaction, SketchSoup further allows designers to present their design spaces better to others via animated transitions between concepts.

Figure 2: Designers explore the shape of a concept by drawing many variations from different viewpoints. Drawing by Spencer Nugent on sketch-a-day.com.

Figure 3: The rolling guidance filter prevents sketch details like hatching (left) from contributing to the sketch's inferred shape by selectively blurring them out (right).

The score of an alternative match is given by the number of sketches S_l ∉ {S_i, S_j, S_k} that confirm both segments T_ik^pr and T_kj^rq. If the alternative match obtains a higher score, it replaces T_ij^pq.

Figure 4: Visualizing the spatial distribution of matching confidence over a sketch (a), with brighter regions depicting higher matching confidence; and an example showing the impact on blending as the parameter k increases from left to right (b). Notice that the poorly matched regions, such as those with texture but no contours, disappear first.

Figure 5: Matches obtained by the original FlowWeb algorithm, compared to our adaptation for sketches. Notice the improvement in quality as well as quantity of stroke-to-stroke matches.

Figure 6 compares our approach with our implementation of the registration algorithm of Sýkora et al. [SDC09]. While similar in spirit, our algorithm is customized for design sketches by using the ShapeContext descriptor, by sampling feature points along strokes rather than over the entire image, and by using FlowWeb for cycle consistency between multiple images. These different ingredients yield a better alignment of the design sketches overall.

Figure 6: Comparison of our approach for sketch alignment (f) with an existing natural image alignment algorithm [LLN*14] (e), and with a cartoon image alignment method [SDC09], the latter both with (d) and without (c) a dense alignment post-process; (a) is the source image and (b) the target image. Red dots on the input images show user correspondences. Fine-level distortions and misalignments are indicated with orange and blue arrows, respectively. Please refer to the accompanying video for animated versions of these comparisons.

Figure 7: Applying a dense registration method [GKT*08] as a per-frame post-process on our method can improve alignment when the shape difference between sketch pairs is small.

Figure 9: Interpolations generated by our method (top) guide the creation of polished novel sketches (bottom, with faded blue guidance).

Figure 10: Limitation. Our algorithm fails to align sketches properly when the shape or view difference is too large.

Figure 11: 2D interpolation spaces and selected interpolated sketches for different concepts. Some sketches by Spencer Nugent.

Acknowledgements. The authors thank Candice Lin for narrating the video. We also thank Mike Serafin and Spencer Nugent for allowing us to use their inspiring design sketches, and Chris de Pauli for testing our system to create new sketches (shown in Figure 9). This work was partially supported by research and software donations from Adobe.
39,039
[ "945809", "912884" ]
[ "103564", "54450", "54450", "103564", "411861" ]
01470880
en
[ "info" ]
2024/03/04 23:41:46
2016
https://ens.hal.science/hal-01470880/file/main.pdf
Afonso Arriaga Manuel Barbosa Pooya Farshim Private Functional Encryption: Indistinguishability-Based Definitions and Constructions from Obfuscation Keywords: Function privacy, functional encryption, obfuscation, keyword search, inner-product encryption Private functional encryption guarantees that not only the information in ciphertexts is hidden but also the circuits in decryption tokens are protected. A notable use case of this notion is query privacy in searchable encryption. Prior privacy models in the literature were fine-tuned for specific functionalities (namely, identity-based encryption and inner-product encryption), did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. We develop a new indistinguishability-based privacy notion that overcomes these limitations and give constructions supporting different circuit classes and meeting varying degrees of security. Obfuscation is a common building block that these constructions share, albeit the obfuscators necessary for each construction are based on different assumptions. Our feasibility results go beyond previous constructions in a number of ways. In particular, a keyword search scheme that we base on point obfuscators tolerates arbitrary and possibly low-entropy correlations between encrypted data and queried keywords, even under (mildly restricted) adaptive token-extraction queries. Our more elaborate keyword search scheme achieves the strongest notion of privacy that we put forth (with no restrictions), but relies on stronger forms of obfuscation. We also develop a composable and distributionally secure hyperplane membership obfuscator and use it to build an inner-product encryption scheme that achieves an unprecedented level of privacy. This, in particular, positively answers a question left open by Boneh, Raghunathan and Segev (ASIACRYPT 2013) concerning the extension and realization of enhanced security for schemes supporting this functionality. Table of Contents 1 Introduction Standard notions of security for public-key functional encryption [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] O'neill | Definitional issues in functional encryption[END_REF] do not cover important use cases where, not only encrypted data, but also the circuits (functions) associated with decryption tokens contain sensitive information. The typical example is that of a cloud provider that stores an encrypted data set created by Alice, over which Bob wishes to make advanced data mining queries. Functional encryption provides a solution to this use case: Bob can send a decryption token to the cloud provider that allows it to recover the result of computing a query over the encrypted data set. Standard security notions guarantee that nothing about the plaintexts beyond query results are revealed to the server. However, they do not guarantee that the performed query, which may for example contain a keyword sensitive to Bob, is hidden from the server. Function privacy Function privacy is an emerging new notion that aims to address this problem. The formal study of this notion begins in the work of Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF], where the authors focused on identity-based encryption (IBE) and presented the first constructions offering various degrees of privacy. 
From the onset, it became clear that formalizing such a notion is challenging, even for simple functionalities such as IBE, as the holder of a token for circuit C may encrypt arbitrary messages using the public key, and obtain a large number of evaluations of C via the decryption algorithm. Boneh et al. therefore considered privacy for identities with high min-entropy. In general, however, the previous observation implies that (non-trivial) function privacy can only be achieved as long as the token holder is unable to learn C through such an attack, immediately suggesting a strong connection between private functional encryption and obfuscation. Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF] give indistinguishability-based definitions of function privacy for IBE and subspace membership (a generalization of inner-product encryption). Roughly speaking, the IBE model imposes that whenever the token queries of the adversary have high min-entropy (or form a block source), decryption tokens will be indistinguishable from those corresponding to identities sampled from the uniform distribution. For subspace membership, the definition requires the random variables associated with vector components to be a block source. Tokens for high-entropy identities, however, rarely exist in isolation and are often available in conjunction with ciphertexts encrypted for the very same identities. To address this requirement, the same authors [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] proposed an enhanced model for IBE in which the adversary also gets access to ciphertexts encrypted for identities associated with the challenge tokens. This model was subsequently shown in [START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF] to be infeasible under the formalism of Boneh et al., as correlations with encrypted identities can lead to distinguishing attacks, e.g. via repetition patterns. (We will discuss this later in the paper.) Although the model can be salvaged by further restricting the class of admissible distributions, it becomes primitive-specific and formulating a definition for other functionalities is not obvious (and indeed a similar extension was not formalized for subspace membership in [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF]). Additionally, this model also falls short of capturing arbitrary correlations between encrypted messages and tokens, as it does not allow an adversary to see ciphertexts for identities which, although correlated with those extracted in the challenge tokens, do not match any of them. Very recently, Agrawal et al. [START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] have put forth a model for functional encryption that aims to address this problem with a very general UC-style definition (called "wishful security"). The core of the definition is an ideal security notion for functional encryption, which makes it explicit that both data privacy and function privacy should be simultaneously enforced. 
However, not only is this general simulation-based definition difficult to work with, but also aiming for it would amount to constructing virtual black-box obfuscation, for which strong impossibility results are known [START_REF] Barak | On the (im)possibility of obfuscating programs[END_REF][START_REF] Goldwasser | On the impossibility of obfuscation with auxiliary input[END_REF]. Indeed, the positive results of [START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] are obtained in idealized models of computation. Contributions The above discussion highlights the need for a general and convenient definition of privacy that incorporates arbitrary correlations between decryption tokens and encrypted messages, and yet can be shown to be feasible without relying on idealized models of computation. The first contribution of our work is an indistinguishability-based definition that precisely models arbitrary correlations for general circuits. Our definition builds on a framework for unpredictable samplers and unifies within a single definition all previous indistinguishability-based notions. The second contribution of the paper is four constructions of private functional encryption supporting different classes of circuits and meeting varying degrees of security: (1) a simple and functionality-agnostic construction shown to be secure in the absence of correlated messages, (2) a more evolved and still functionality-agnostic construction (taking advantage of recent techniques from [START_REF] Ananth | From selective to adaptive security in functional encryption[END_REF][START_REF] Caro | On the achievability of simulation-based security for functional encryption[END_REF]) that achieves function privacy with respect to a general class of samplers that we call concentrated; (3) a conceptually simpler construction specific for point functions achieving privacy in the presence of correlated messages beyond all previously proposed indistinguishability-based security definitions; (4) a construction specific for point functions that achieves our strongest notion of privacy (but relies on a more expressive form of obfuscation than the previous construction). We also develop an obfuscator for hyperplane membership that, when plugged into the second construction above gives rise to a private inner-product encryption scheme, answering a question left open by Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF] on how to define and realize enhanced security (i.e., privacy in the presence of correlated messages) for schemes supporting this functionality. The unpredictability framework. At the core of our definitional work lies a precise definition characterizing which distributions over circuits and what correlated side information can be tolerated by a private FE scheme. 
We build on ideas from obfuscation [START_REF] Bellare | Poly-many hardcore bits for any one-way function and a framework for differing-inputs obfuscation[END_REF][START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF][START_REF] Barak | Obfuscation for evasive functions[END_REF], functional encryption [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] O'neill | Definitional issues in functional encryption[END_REF] and prior work in function privacy [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] to define a game-based notion of unpredictability for general functions. Our definition allows a sampler S to output a pair of circuit vectors (C 0 , C 1 ) and a pair of message vectors (m 0 , m 1 ) with arbitrary correlations between them, along with some side information z. Unpredictability then imposes that no predictor P interacting with oracles computing evaluations on these circuits and messages can find a point x such that C 0 (x) = C 1 (x). (We do not impose indistinguishability, which is stronger, results in a smaller class of unpredictable samplers, and hence leads to weaker security.) The predictor P sees z and the outputs of the sampled circuits on the sampled messages. It can run in bounded or unbounded time, but it can only make polynomially many oracle queries to obtain additional information about the sampled circuits and messages. To avoid attacks that arise in the presence of computationally unpredictable auxiliary information [START_REF] Brzuska | Indistinguishability obfuscation versus multi-bit point obfuscation with auxiliary input[END_REF][START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] we adopt unbounded prediction later in our analyses. This formalism fixes the unpredictability notion throughout the paper. We can then capture specific types of samplers by imposing extra structural requirements on them. For instance, we may require the sampler to output a bounded number of circuits and messages, or include specific data in the auxiliary information, or do not include any auxiliary information at all. Imposing that the sampler outputs single-circuit vectors, no messages, and includes the circuits as auxiliary information leads to the notion of differing-inputs obfuscation [START_REF] Ananth | Differing-inputs obfuscation and applications[END_REF][START_REF] Bellare | Poly-many hardcore bits for any one-way function and a framework for differing-inputs obfuscation[END_REF]. Further imposing that the sampler also includes in the auxiliary information its random coins or allowing the predictor to run in unbounded time leads to public-coin differing-inputs obfuscation [START_REF] Ishai | Public-coin differing-inputs obfuscation and its applications[END_REF] and indistinguishability obfuscation [START_REF] Goldwasser | On best-possible obfuscation[END_REF][START_REF] Brzuska | Indistinguishability obfuscation versus multi-bit point obfuscation with auxiliary input[END_REF][START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF], respectively. 
A sampler outputting circuits and messages comes to hand to model the privacy for functional encryption. We emphasize that our definition intentionally does not require the messages to be unpredictable. Further discussion on this choice can be found in Section 3. The PRIV model. Building on unpredictability, we put forth a new indistinguishability-based notion of function privacy. Our notion, which we call PRIV, bears close resemblance to the standard IND-CPA model for functional encryption: it comes with a left-or-right LR oracle, a tokenextraction TGen oracle and the goal of the adversary is to guess a bit. The power of the model lies in that we endow LR with the ability to generate arbitrary messages and circuits via an unpredictable sampler. Trivial attacks are excluded by the joint action of unpredictability and the usual FE legitimacy condition, imposing equality of images on left and right. The enhanced model of Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] falls in as a special case where the sampler is structurally restricted to be a block source. But our definition goes well beyond this and considers arbitrary and possibly low-entropy correlations. Furthermore, since unpredictability is not imposed on messages, PRIV implies IND-CPA security, and consequently it also guarantees anonymity for primitives such as IBE and ABE [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF]. Correlated circuits may be "low entropy" as long as they are identical on left and right, and since previous definitions adopted a real-or-random definition, they had to exclude this possibility. By giving the sampler the option to omit, manipulate and repeat the messages, our security notion implies previous indistinguishability-based notions in the literature, including those in [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF]. The implications of our new definition become clearer when we focus on (public-key encryption with) keyword search (KS) [START_REF] Boneh | Public Key Encryption with Keyword Search In EUROCRYPT[END_REF]. Consider a scenario where a client searches for a keyword, slightly modifies it by editing it, and then uploads an encryption of the keyword to the server. In this setting, the server sees ciphertexts encrypting unknown keywords that are closely related to keywords which the server holds tokens for. Our model ensures that if searched keywords are unpredictable from the perspective of the server, this uncertainty is preserved by the KS scheme after the searches are carried out. This does not imply that the server will be unable to distinguish a sequence of successful queries over the same high-entropy keyword, from a sequence of successful queries over different high-entropy keywords (this is impossible to achieve [START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF]). However, when keyword searches do not match any of the correlated ciphertexts, then search patterns are guaranteed to remain hidden, even in the presence of low-entropy correlated encrypted keywords. 
We note that this captures a strong notion of unlinkability and untraceability between unmatched queries. Constructions. We start by formalizing the intuition that obfuscating circuits before extraction should provide some level of privacy in FE. Using unpredictable samplers, we first generalize distributionally-indistinguishable (DI) obfuscators [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] from point functions to general circuits. Our obfuscate-then-extract OX transform shows that PRIV security in the absence of correlated messages can be achieved using DI obfuscators. In the reverse direction, we also established that some weak form of DI obfuscation (for samplers outputting single-circuit vectors) is also necessary. We also show that (self-)composable VGB obfuscation implies full-fledged DI obfuscation. So, emerging positive results on composable VGB obfuscation [START_REF] Bitansky | On virtual grey box obfuscation for general circuits[END_REF][START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] already lead to PRIV-secure functional encryption schemes (supporting the same class of circuits as the obfuscator) in the absence of correlated messages. To move beyond the above token-only model, we need to "decouple" the correlations between encrypted messages and challenge circuits so we can take advantage of FE security (that protects ciphertexts) and obfuscation (that protects the circuits) in a cumulative way. Building on ideas from [START_REF] Ananth | From selective to adaptive security in functional encryption[END_REF] and [START_REF] Bitansky | On virtual grey box obfuscation for general circuits[END_REF] we identify a class of concentrated samplers that can be used in conjunction with the so-called "trojan" method-a technique to boost selective security to adaptive security in FE-to achieve function privacy. This construction improves on the security guarantees of OX considerably, but comes with the caveat that a mild restriction on second-stage token queries must be imposed: they must reveal (via circuit outputs) no more information about encrypted correlated messages than those revealed by first-stage queries. We give non-trivial examples of concentrated samplers and derive constructions for classes of circuits that encompass, among other functionalities, IBE, KS and inner-product encryption. By giving a construction of a DI obfuscator for hyperplane membership, we resolve a question left open by Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF] on the extension and realization of enhanced security for inner-product encryption. Our third construction is specific to point functions, and besides being simpler and more efficient, can tolerate arbitrary correlations between challenge keywords and encrypted messages. Put differently this construction removes the concentration restriction on samplers. For this construction we require a functional encryption scheme that supports the OR composition of two DI-secure point obfuscations. The composable VGB point obfuscator of Bitansky and Canetti [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] implies that the required DI point obfuscator exists. 
Furthermore, we also rely on a standard functional encryption scheme that supports the evaluations of four group operations in a DDH group (corresponding to the disjunction of two point function obfuscations), which is a relatively modest computation. We are, however, unable to lift the mild second-stage restriction. Our last construction lifts the second-stage restriction at the cost of relying on more expressive forms of obfuscators. The novelty in this construction resides in the observation that, in order to offer the keyword search functionality, it suffices to encrypt information that enables equality checks between words and messages to be carried out. In our fourth construction we encode a message m as an obfuscation of the point function C[m]. Concretely, we obfuscate words before extraction and messages before encryption. Equality with w can be checked using a circuit D[w] that on input an obfuscated point function Obf(C[m]) returns Obf(C[m])(w). We emphasize that D[w] is not a point function. We also need to ensure that an attacker cannot exploit the D[w] circuits by, say, encrypting obfuscations of malicious circuits of its choice. We do this using NIZK proofs to ensure the outputs of the point obfuscator are verifiable: one can publicly verify that an obfuscation indeed corresponds to some point function. To summarize, our construction relies on a DI obfuscator supporting point functions C[m](w) := (m = w) and circuits D[w](C) := C(w) and a general-purpose FE. The circuits C[m] and D[w] were used negatively by Barak et al. [9] to launch generic attacks against VBB and VGB obfuscators. Here, the restrictions imposed on legitimate PRIV samplers ensure that these attacks cannot be carried out in our setting, and obfuscators supporting them can be used positively to build private FE schemes. Preliminaries Notation. We denote the security parameter by λ ∈ N and assume it is implicitly given to all algorithms in unary representation 1 λ . We denote the set of all bit strings of length by {0, 1} and the length of a string x by |x|. The bit complement of a string x is denoted by x. We use the symbol ε to denote the empty string. A vector of strings x is written in boldface, and x[i] denotes its ith entry. The number of entries of x is denoted by |x|. For a finite set X, we denote its cardinality by |X| and the action of sampling a uniformly random element x from X by x ←$ X. For a random variable X we denote its support by [X]. For a circuit C we denote its size by |C|. We call a real-valued function µ(λ) negligible if µ(λ) ∈ O(λ -ω (1) ) and denote the set of all negligible functions by Negl. Throughput the paper ⊥ denotes a special failure symbol outside the spaces underlying a cryptographic primitive. We adopt the code-based game-playing framework. As usual "ppt" stands for probabilistic polynomial time. Circuit families. Let MSp := {MSp λ } λ∈N and OSp := {OSp λ } λ∈N be two families of finite sets parametrized by a security parameter λ ∈ N. A circuit family CSp := {CSp λ } λ∈N is a sequence of circuit sets indexed by the security parameter. We assume that for all λ ∈ N, all circuits in CSp λ share a common input domain MSp λ and output space OSp λ . We also assume that membership in sets can be efficiently decided. For a vector of circuits C = [C 1 , . . . , C n ] and a vector of messages m = [m 1 , . . . , m m ] we define C(m) to be an n × m matrix whose ijth entry is C i (m j ). When OSp λ = {0, 1} for all values of λ we call the circuit family Boolean. 
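As a concrete reading of this evaluation convention, a small illustration in Python; the helper name and the toy circuits are ours.

def eval_matrix(circuits, messages):
    """C(m): the n x m matrix whose ij-th entry is C_i(m_j)."""
    return [[C(m) for m in messages] for C in circuits]

# Example: two Boolean circuits over bit strings, evaluated on three messages.
circuits = [lambda m: int(m.startswith("0")), lambda m: int(m == "101")]
messages = ["001", "101", "111"]
assert eval_matrix(circuits, messages) == [[1, 0, 0], [0, 1, 0]]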
where game PRP A PRP (1 λ ) is defined in Fig. 1. For our purposes, we rely on the non-strong security notion where inverse queries are not allowed. Furthermore, we do not necessarily require the inverse map D to be efficiently computable. PRP A PRP (1 λ ): b ←$ {0, 1} k ←$ K(1 λ ) b ←$ A Fn (1 λ ) return (b = b ) Fn(x): if T [x] = ⊥ then T [x] ←$ MSp λ \ T if b = 1 return T [x] else return E(k, x) Functional encryption Syntax. A functional encryption scheme FE associated with a circuit family CSp is specified by four ppt algorithms as follows. (1) FE.Gen(1 λ ) is the setup algorithm and on input a security parameter 1 λ it outputs a master secret key msk and a master public key mpk; (2) FE.TGen(msk, C) is the token-generation algorithm and on input a master secret key msk and a circuit C ∈ CSp λ outputs a token tk for C; (3) FE.Enc(mpk, m) is the encryption algorithm and on input a master public key mpk and a message m ∈ MSp λ outputs a ciphertext c; (4) FE.Eval(c, tk) is the deterministic evaluation (or decryption) algorithm and on input a ciphertext c and a token tk outputs a value y ∈ OSp λ or failure symbol ⊥. CC A FE (1 λ ): (msk, mpk) ←$ FE.Gen(1 λ ) (m, C) ←$ A TGen (mpk) c ←$ FE.Enc(mpk, m) tk ←$ FE.TGen(msk, C) y ←$ FE.Eval(c, tk) return (y = C(m)) TGen(C): tk ←$ FE.TGen(msk, C) return tk IND-CPA A FE (1 λ ): (msk, mpk) ←$ FE.Gen(1 λ ) b ←$ {0, 1} b ←$ A LR,TGen (mpk) return (b = b ) LR(m 0 , m 1 ): c ←$ FE.Enc(mpk, m b ) MList ← (m 0 , m 1 ) : MList return c TGen(C): tk ←$ FE.TGen(msk, C) TList ← C : TList return tk We adopt a computational notion of correctness for FE schemes and require that no ppt adversary is able to produce a message m and a circuit C that violates the standard correctness property of the FE scheme (that is, FE.Eval(FE.Enc(mpk, m), FE.TGen(msk, C)) = C(m)), even with the help of an (unrestricted) token-generation oracle. We also adopt the standard notion of IND-CPA security [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] O'neill | Definitional issues in functional encryption[END_REF] where an adversary with access to a token-generation oracle cannot distinguish encryptions of messages m 0 , m 1 under the standard restriction that it cannot obtain a decryption token for a circuit C for which C(m 0 ) = C(m 1 ). Correctness. We will adopt a game-based definition of computational correctness for FE schemes which has been widely adopted in the literature [START_REF] Abdalla | Searchable encryption revisited: Consistency properties, relation to anonymous IBE, and extensions[END_REF][START_REF] Goldreich | The foundations of cryptography[END_REF] and suffices for the overwhelming majority of use cases. Roughly speaking, this property requires that no efficient adversary is able to come up with a message and a circuit which violates the correctness property of the FE scheme, even with the help of an (unrestricted) token-generation oracle. Formally, scheme FE is computationally correct if for all ppt adversaries A Adv cc FE,A (λ) := Pr CC A FE (1 λ ) ∈ Negl , where game CC A FE (1 λ ) is shown in Fig. 2 on the left. Perfect correctness corresponds to the setting where the above advantage is required to be zero. Security. 
A functional encryption scheme FE is IND-CPA secure [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] O'neill | Definitional issues in functional encryption[END_REF] if for any legitimate ppt adversary A Adv ind-cpa FE,A (λ) := 2 • Pr IND-CPA A FE (1 λ ) -1 ∈ Negl , where game IND-CPA A FE (1 λ ) is defined in Fig. 2 on the right. We say A is legitimate if for all messages pairs queried to the left-or-right oracle, i.e., for all (m 0 , m 1 ) ∈ MList, and all extracted circuits C ∈ TList we have that C(m 0 ) = C(m 1 ). The IND-CPA notion self-composes in the sense that security against adversaries that place one LR query is equivalent to the setting where an arbitrary number of queries is allowed. It is also well known that IND-CPA security is weaker than generalizations of semantic security for functional encryption [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] O'neill | Definitional issues in functional encryption[END_REF][START_REF] Barbosa | On the semantic security of functional encryption schemes[END_REF], and strong impossibility results for the latter have been established [START_REF] Boneh | Functional encryption: Definitions and challenges[END_REF][START_REF] Gorbunov | Functional encryption with bounded collusions via multi-party computation[END_REF][START_REF] Agrawal | Functional encryption: New perspectives and lower bounds[END_REF]. On the other hand, IND-CPA-secure FE schemes for all polynomial-size circuit families have been recently constructed [START_REF] Gorbunov | Functional encryption with bounded collusions via multi-party computation[END_REF][START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF][START_REF] Goldwasser | Reusable garbled circuits and succinct functional encryption[END_REF]. Other recent feasibility results have been established in weaker forms of the IND-CPA model such as the selective model [START_REF] Gorbunov | Functional encryption with bounded collusions via multi-party computation[END_REF][START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF][START_REF] Goldwasser | Reusable garbled circuits and succinct functional encryption[END_REF] where the adversary commits to its challenge messages at the onset; or the weak model for Boolean circuits, where the adversary is restricted to extract tokens that evaluate to 0 on the challenge messages [START_REF] Gorbunov | Predicate encryption for circuits from LWE[END_REF]. Keyword search Set circuits. In this work we are interested in Boolean circuits that assume the value 1 on only a polynomially large subset of their domains. We call these set circuits. We define the canonical representation of a set circuit C with its corresponding set S as the circuit C[S] that has the set S explicitly hardwired in it: C[S](m) := 1 if m ∈ S; 0 otherwise. Formally, a family of Boolean circuits CSp is a set circuit family if there is a polynomial poly such that for all λ ∈ N and all C ∈ CSp λ we have that |S(C)| ≤ poly(λ) where S(C) := {m ∈ MSp λ : C(m) = 1}. Point circuits/functions correspond to the case where poly(λ) = 1. We use C[m] to denote the point circuit that on input m returns 1 and 0 otherwise. Throughout the paper, we assume that non-obfuscated set circuits are canonically represented. Syntax. 
A public-key encryption with keyword search scheme (or simply a keyword search scheme) KS is a functional encryption scheme for a point circuit family over the message space: CSp λ := {C[m] : m ∈ MSp λ }. We often identify circuit C[m] with its message m, but in order to distinguish circuits from messages we use the term keyword to refer to the former. We write the algorithms associated to a KS scheme as KS.Gen(1 λ ), KS.Enc(pk, m), KS.TGen(sk, w) and KS.Test(c, tk), where the latter outputs either 0 or 1. The computational correctness of a KS scheme is defined identically to that of an FE scheme. We say the scheme has no false negatives if correctness advantage is negligible and, whenever A outputs (w, C[w]), it is 0. IND-CPA security is also defined identically to FE schemes for point function families. Note that weak and standard IND-CPA notions are equivalent for KS schemes. NIZK proof systems Syntax. A non-interactive zero-knowledge proof system for an NP language L with an efficiently computable binary relation R consists of three ppt algorithms as follows. (1) NIZK.Setup(1 λ ) is the setup algorithm and on input a security parameter 1 λ it outputs a common reference string crs; (2) NIZK.Prove(crs, x, ω) is the proving algorithm and on input a common reference string crs, a statement x and a witness ω it outputs a proof π or a failure symbol ⊥; (3) NIZK.Verify(crs, x, π) is the verification algorithm and on input a common reference string crs, a statement x and a proof π it outputs either true or false. Perfect completeness. Completeness imposes that an honest prover can always convince an honest verifier that a statement belongs to L, provided that it holds a witness testifying to this Complete A NIZK (1 λ ): crs ←$ NIZK.Setup(1 λ ) (x, ω) ←$ A(1 λ , crs) if (x, ω) / ∈ R return 0 π ←$ NIZK.Prove(crs, x, ω) return ¬(NIZK.Verify(crs, x, π)) Sound A NIZK (1 λ ): crs ←$ NIZK.Setup(1 λ ) (x, π) ←$ A(1 λ , crs) return (x / ∈ L ∧ NIZK.Verify(crs, x, π)) ZK-Real A NIZK (1 λ ): crs ←$ NIZK.Setup(1 λ ) b ←$ A Prove (1 λ , crs) Prove(x, ω): if (x, ω) / ∈ R return ⊥ π ←$ NIZK.Prove(crs, x, ω) return π ZK-Ideal A,Sim NIZK (1 λ ): (crs, tp) ←$ Sim 1 (1 λ ) b ←$ A Prove (1 λ , crs) Prove(x, ω): if (x, ω) / ∈ R return ⊥ π ←$ Sim 2 (crs, tp, x) return π Fig. 3. Games defining the completeness, soundness and zero-knowledge properties of a NIZK proof system. fact. We say a NIZK proof is perfectly complete if for every (possibly unbounded) adversary A Adv complete NIZK,A (λ) := Pr Complete A NIZK (1 λ ) = 0 , where game Complete A NIZK (1 λ ) is shown in Fig. 3 on the left. Perfect soundness. Soundness imposes that a malicious prover cannot convince an honest verifier of a false statement. We say a NIZK proof is perfectly sound if for every (possibly unbounded) adversary A we have that Adv sound NIZK,A (λ) := Pr Sound A NIZK (1 λ ) = 0 , where game Sound A NIZK (1 λ ) is shown in Fig. 3. Computational zero knowledge. The zero-knowledge property guarantees that proofs do not leak information about the witnesses that originated them. Technically, this is formalized by requiring the existence of a ppt simulator Sim = (Sim 1 , Sim 2 ) where Sim 1 takes the security parameter 1 λ as input and outputs a simulated common reference string crs together with a trapdoor tp, and Sim 2 takes the trapdoor as input tp together with a statement x ∈ L for which it must forge a proof π. 
We say a proof system is computationally zero knowledge if, for every ppt adversary A, there exists a simulator Sim such that Adv zk NIZK,A,Sim (λ) := Pr ZK-Real A NIZK (1 λ ) -ZK-Ideal A,Sim NIZK (1 λ ) ∈ Negl , where games ZK-Real A NIZK (1 λ ) and ZK-Ideal A,Sim NIZK (1 λ ) are shown in Fig. 3 on the right. Unpredictable Samplers The privacy notions that we will be developing in the coming sections rely on multistage adversaries that must adhere to certain high-entropy requirements on the sampled circuits. Rather than speaking about specific distributions for specific circuit classes, we introduce a uniform treatment for any circuit class via an unpredictability game. Our framework allows one to introduce restricted classes of samplers by imposing structural restrictions on their internal operation without changes to the reference unpredictability game. Our framework extends that of Bellare, Stepanov and Tessaro [START_REF] Bellare | Poly-many hardcore bits for any one-way function and a framework for differing-inputs obfuscation[END_REF] for obfuscators and also models the challenge-generation phase in private functional encryption in prior works [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF]. mPred P S (1 λ ): (i, m) ←$ P Sam,Func,Sp (1 λ ) (C 0 , C 1 , m 0 , m 1 ) ← list[i] return (C 0 (m) = C 1 (m)) Sam(st): (C 0 , C 1 , m 0 , m 1 , z) ←$ S(st) list ← list : (C 0 , C 1 , m 0 , m 1 ) return z Func(i, m, C): (C 0 , C 1 , m 0 , m 1 ) ← list[i] return (C 0 (m), C(m 0 )) Sp(i, j): (C 0 , C 1 , m 0 , m 1 ) ← list[i] (C 0 , C 1 , m 0 , m 1 ) ← list[j] return C 0 (m 0 ) Pred P S (1 λ ): (st, st ) ←$ P 1 (1 λ ) (C 0 , C 1 , m 0 , m 1 , z) ←$ S(st) m ←$ P Func 2 (1 λ , C 0 (m 0 ), z, st ) return (C 0 (m) = C 1 (m)) Func(m, C): return (C 0 (m), C(m 0 )) Fig. 4. Left: Game defining the multi-instance unpredictability of a sampler S. Right: Game defining single-instance unpredictability of a sampler S against P = (P1, P2). Definitions Syntax. A sampler for a circuit family CSp is an algorithm S that on input the security parameter 1 λ and possibly some state information st outputs a pair of vectors of CSp λ circuits (C 0 , C 1 ) of equal dimension, a pair of vectors of MSp λ messages (m 0 , m 1 ) of equal dimension, and some auxiliary information z. We require the components of the two circuit (resp., message) vectors to be encoded as bit strings of equal length. Input st may encode information about the environment where the sampler is run (e.g., the public parameters of a higher-level protocol) and z models side information available on the sampled circuits or messages. In the security games we will be considering later on, the goal of adversary will be to distinguish which of two circuit distributions produced by an unpredictable sampler was used to form some cryptographic data (e.g., an obfuscated circuit or an FE token). 
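A minimal sketch of the single-instance game Pred of Fig. 4 (right), with circuits modelled as Python callables. The legitimacy checks and the bookkeeping of oracle query counts are omitted, and the winning condition is finding an input on which some left/right circuit pair (at the same index) differs, as described in the text; all names are illustrative.

def pred_game(sampler, P1, P2, sec_par):
    """Single-instance unpredictability game Pred, in outline.
    sampler(st) -> (C0, C1, m0, m1, z): lists of circuits and messages plus aux info."""
    st, st2 = P1(sec_par)
    C0, C1, m0, m1, z = sampler(st)

    def func(m, C):
        # Oracle Func(m, C): challenge circuits evaluated on a chosen message,
        # and a chosen circuit evaluated on the challenge messages.
        return ([c(m) for c in C0], [C(x) for x in m0])

    images = [[c(x) for x in m0] for c in C0]        # C0(m0), handed to P2
    m = P2(sec_par, images, z, st2, func)
    # The predictor wins by finding a differing input: some index i with
    # C0[i](m) != C1[i](m).
    return any(c0(m) != c1(m) for c0, c1 in zip(C0, C1))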
Our unpredictability definition formalizes the intuition that by examining the input/output behavior of the sampled circuits on messages of choice, the evaluation of legitimate circuits of choice on sampled messages, and the evaluation of sampled circuits on sampled messages, a point leading to differing outputs on some pair of sampled circuits cannot be found. Drawing a parallel to the functional encryption setting, once decryption tokens or encrypted messages become available, the tokens can be used by a legitimate adversary to compute the circuits underneath on arbitrary values, including some special messages that are possibly correlated with the circuits. Unpredictability. A legitimate sampler S is statistically (multi-instance) unpredictable if for any unbounded legitimate predictor P that places polynomially many queries Adv (m)pred S,P (λ) := Pr (m)Pred P S (1 λ ) ∈ Negl , where games mPred P S (1 λ ) and Pred P S (1 λ ) are shown in Fig. 4. Sampler S is called legitimate if C 0 (m 0 ) = C 1 (m 1 ) for all queries made to the Sp oracle in game mPred P S (1 λ ), or simply required that C 0 (m 0 ) = C 1 (m 1 ) in game Pred P S (1 λ ). Predictor P is legitimate if C(m 0 ) = C(m 1 ) for all queries made to the Func oracle (in both multi-instance and single-instance games). 4The mPred game is multi-instance and the predictor can place polynomially many queries to Sam and set st arbitrarily. The latter essentially ensures that S generates fresh entropy on any input st. We emphasize that the winning condition demands component-wise inequality of circuit outputs. In particular the predictor is not considered successful if it outputs a message which leads to different outputs across different Sam queries or within the same Sam query but on different circuit indices. A number of technical choices have been made in devising these definitions. By the legitimacy of the sampler C 0 (m 0 ) = C 1 (m 1 ) and hence only one of these values is provided to the predictor. Furthermore, since the goal of the predictor is to find a differing input, modifying the experiment so that Func returns C 1 (m) (or both values) would result in an equivalent definition. Our definition intentionally does not consider unpredictability of messages. Instead, one could ask the predictor to output either a message that results in differing evaluations on challenge circuits or a circuit that evaluates differently on challenge messages. This would, however, lead to an excessively restrictive unpredictability notion and excludes many circuit samplers of practical relevance. Composition. A standard guessing argument shows that any stateless sampler (one that keeps no internal state and uses independent random coins on each invocation, but might still receive st explicitly passed as input) is multi-instance unpredictable (where P can place q queries to Sam) if and only if it is single-instance unpredictable (where P can only place a single Sam query). The reduction in one direction is trivial. In the other direction we guess the index i * that the multiinstance predictor P will output and simulate Sam queries 1, . . . , (i * -1) and (i * + 1), . . . , q by running the sampler S in the reduction-this is where we need the stateless property-and answer the i * th one using the Sam oracle in the single-instance game. 
Queries to Func with index i * are answered analogously using the single-instance counterpart whereas those with index different from i * will use the explicit knowledge of the circuits and messages generated by the reduction. Queries Sp with index (i, j) are answered as follows. If both i and j are different from i * , use the explicit knowledge of circuits and messages. If i = i * but j = i * , use the explicit knowledge of the messages and the single-instance Func oracle on i * . If i = i * but j = i * , use the knowledge of the circuit and single-instance Func. For (i * , i * ) queries, use the Sp oracle in the single-instance game. Note that the legitimacy of the constructed single-instance predictor follows from the legitimacy of the multi-instance predictor and the sampler. Proposition 1 (Unpredictability composition). A stateless sampler is multi-instance unpredictable (Fig. 4 on the left) if and only if it is single-instance unpredictable (Fig. 4 on the right). The samplers that we study in this work are stateless and therefore we use the definition in Fig. 4 on the right for simplicity. Nevertheless, our framework can be used to analyze stateful samplers as well. We leave the study of such samplers as an important (and practically relevant) direction for future work. In Appendix A we define a number of special classes of samplers by imposing structural restrictions on their internal operation. This serves to illustrate how various samplers that previously appeared in the literature can be modeled within our framework. In particular, definitions of high-entropy and block source samplers for keywords [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF], block sources for inner products [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF], and circuit sampler distributions used in various obfuscation definitions can be seen as particular cases within this framework. The case of point functions We take a moment to discuss the relation between our notion of unpredictability and previous approaches in the literature for point circuits. Previous notions of unpredictability consider distributions on vectors of points whose components have high min-entropy [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF]. Such distributions, when viewed as (possibly inefficient) samplers can be shown to be unpredictable according to the above definition. 5 The converse implication, however, does not hold in general. Consider a sampler that returns vectors which contain low-entropy circuits that are identical on left and right. The outputs of such circuits cannot be used to directly distinguish the challenge bit, but the circuits themselves might leak information about other correlated high-entropy challenge components, which might lead to a breach in privacy. 
It is unclear how previous approaches [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF], which adopt a real-or-random formalization, can be modified to model this class of attackers. A similar argument can be made for low-entropy messages. Consider a scenario where a sampler outputs two low-entropy messages that are equal on the left and right. 6 Then, such messages could provide side information that permits distinguishing the high-entropy queries, which is a different type of privacy breach not previously addressed by existing security models. Agrawal et al. [START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] propose an admissibility definition where only a single challenge circuit is present, there are no correlated messages, no auxiliary information is present, and no state is passed to the sampler. The predictor there is also restricted to run in polynomial time. Although our framework permits modeling such samplers, this definition becomes unsatisfiable for large classes of circuits. Indeed, a function-private FE for a circuit class under this definition immediately leads to a virtual black-box (VBB) obfuscator for the same class, and the latter is known to be impossible to achieve for wide classes of circuits [START_REF] Barak | On the (im)possibility of obfuscating programs[END_REF][START_REF] Goldwasser | On the impossibility of obfuscation with auxiliary input[END_REF]. 7 Our semi-bounded formulation of a predictor, on the other hand, leads to obfuscation notions that are implied by (and potentially weaker than) virtual grey-box (VGB) obfuscation (see Section 4). Negative results for VGB are more restricted and general feasibility results have been recently emerging [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF][START_REF] Bitansky | On virtual grey box obfuscation for general circuits[END_REF]. As mentioned above, Agrawal et al.'s definition does not address correlations either, which is a central point of our work. Obfuscators Syntax. An obfuscator for a circuit family CSp is a uniform ppt algorithm Obf that on input the security parameter 1 λ and the description of a circuit C ∈ CSp λ outputs the description of another circuit C. We require any obfuscator to satisfy the following two requirements. Functionality preservation : For any λ ∈ N, any C ∈ CSp λ and any m ∈ MSp λ , with overwhelming probability over the choice of C ←$ Obf(1 λ , C) we have that C(m) = C(m). Polynomial slowdown : There is a polynomial poly such that for any λ ∈ N, any C ∈ CSp λ and any C ←$ Obf(1 λ , C) we have that |C| ≤ poly(|C|). Security definitions for obfuscators can be divided into the indistinguishability-based and simulationbased notions. 
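Before turning to those security notions, the two syntactic requirements above can be made concrete with a small test harness. Modelling an obfuscator as a function that returns a callable together with its description size, and the particular polynomial bound used, are assumptions of this sketch.

import random

def check_obfuscator(Obf, circuits, domain, sec_par,
                     trials=256, poly_bound=lambda s: 16 * s ** 2):
    """Empirical sanity check of functionality preservation and polynomial
    slowdown for Obf(sec_par, C) -> (C_tilde, size_tilde).
    circuits : list of (C, size_C) pairs of callables with description sizes."""
    for C, size_C in circuits:
        C_tilde, size_tilde = Obf(sec_par, C)
        # Polynomial slowdown: |C_tilde| <= poly(|C|) for a fixed polynomial.
        assert size_tilde <= poly_bound(size_C), "polynomial slowdown violated"
        # Functionality preservation: C_tilde agrees with C (the definition only
        # asks for this with overwhelming probability over the coins of Obf);
        # here it is checked on random inputs.
        for m in random.sample(domain, min(trials, len(domain))):
            assert C_tilde(m) == C(m), "functionality not preserved"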
Perhaps the most natural notion is the virtual black-box (VBB) property [START_REF] Barak | On the (im)possibility of obfuscating programs[END_REF], which requires that whatever can be computed from an obfuscated circuit can be also simulated using oracle access to the circuit. Here, we consider a weakening of this notion, known as virtual grey-box (VGB) security [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF][START_REF] Bitansky | On virtual grey box obfuscation for general circuits[END_REF] that follows the VBB approach, but allows simulators to run in unbounded time, as long as they make polynomially many queries to their oracles; we call such simulators semi-bounded. Below we present a self-composable strengthening of this notion where the VGB property is required to hold in the presence of multiple obfuscated circuits. In the context of security definitions for obfuscators, we consider samplers that do not output any messages. Furthermore, we call a sampler one-sided if its sampled circuits are identical on left and right with probability 1.

CVGB-Real S,A Obf (1 λ ): (C, z) ←$ S(1 λ , ε); C̄ ←$ Obf(1 λ , C); b' ←$ A(1 λ , C̄, z); return b'
CVGB-Ideal S,Sim Obf (1 λ ): (C, z) ←$ S(1 λ , ε); b' ←$ Sim Func (1 λ , 1 |C| , z); return b'    Func(m): return C(m)
DI S,A Obf (1 λ ): b ←$ {0, 1}; b' ←$ A Sam (1 λ ); return (b = b')    Sam(st): (C 0 , C 1 , z) ←$ S(1 λ , st); C̄ ←$ Obf(1 λ , C b ); return (C̄, z)
OW A Obf (1 λ ): 1 t ←$ A 1 (1 λ ); w ←$ MSp λ ; C ← [C[w], . . . , C[w]] (t copies); C̄ ←$ Obf(1 λ , C); w' ←$ A 2 (1 λ , C̄); return (w = w')
Fig. 5. Games defining the CVGB, DI and OW security of an obfuscator. One-way security is defined for point functions only.

Composable VGB. An obfuscator Obf is composable VGB (CVGB) secure if for every ppt adversary A there exists a semi-bounded simulator Sim such that for every ppt one-sided circuit sampler S the advantage
Adv cvgb Obf,S,A,Sim (λ) := Pr[ CVGB-Real S,A Obf (1 λ ) ] − Pr[ CVGB-Ideal S,Sim Obf (1 λ ) ] ∈ Negl ,
where games CVGB-Real S,A Obf (λ) and CVGB-Ideal S,Sim Obf (λ) are shown in Figure 5. By considering samplers that only output a single circuit we recover the standard (worst-case) VGB property. The VBB property corresponds to the case where the simulator is required to run in polynomial time. Average-case notions of obfuscation correspond to definitions where the circuit samplers are fixed. A result of Bitansky and Canetti [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]Proposition A.3] on the equivalence of VGB with and without auxiliary information can be easily shown to also hold in the presence of multiple circuits, from which one can conclude that CVGB with auxiliary information is the same as CVGB without auxiliary information. We also introduce the following adaptation of an indistinguishability-based notion of obfuscation introduced in [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] for point functions.

Distributional indistinguishability. An obfuscator Obf is DI secure if, for every unpredictable ppt sampler S and every ppt adversary A,
Adv di Obf,S,A (λ) := 2 · Pr[ DI S,A Obf (1 λ ) ] − 1 ∈ Negl ,
where game DI S,A Obf (1 λ ) is defined in Fig. 5. The above definition strengthens the one in [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] and gives the sampler the possibility to leak auxiliary information to the adversary.
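Read operationally, the DI experiment is a small test harness. The Python sketch below (interface names are ours, and the identity "obfuscator" used for wiring is deliberately insecure) follows the game in Fig. 5: a bit is drawn, the adversary's single Sam query is answered with obfuscations of the corresponding side, and whatever auxiliary string z the sampler produced is passed to the adversary unchanged.

import secrets

def di_experiment(obfuscate, sampler, adversary):
    # One-shot DI game (Fig. 5): the adversary makes a single Sam query and
    # must guess which side of the sampler's output was obfuscated.
    b = secrets.randbelow(2)

    def sam(st):
        C0, C1, z = sampler(st)
        return [obfuscate(c) for c in (C0 if b == 0 else C1)], z

    return adversary(sam) == b

def point(w):
    return lambda m: int(m == w)

def sampler(st):
    # One high-entropy point circuit per side, empty auxiliary information.
    return [point(secrets.token_bytes(16))], [point(secrets.token_bytes(16))], b""

def guessing_adversary(sam):
    obfuscations, z = sam(b"")        # one Sam query, as in the definition
    return secrets.randbelow(2)       # no better than guessing here

wins = sum(di_experiment(lambda c: c, sampler, guessing_adversary) for _ in range(1000))
print(wins / 1000)                    # ≈ 0.5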
In particular, we can consider the case where images of an (internally generated) vector of messages that are correlated with the circuits are provided to A. (Our constructions will rely on this property for point obfuscators.) Throughout the paper we consider DI adversaries that place a single query to the Sam oracle. It can easily be shown that the DI self-composes for stateless samplers, meaning that security against adversaries that place one Sam query is equivalent to the setting where an arbitrary number of queries are allowed. Note also that we allow the adversary to pass some state information st to the sampler. Security with respect to all ppt and statistically unpredictable samplers can be shown to be equivalent to a variant definition where the adversary is run after the sampler and st is set to the empty string ε. We recover the definition of indistinguishability obfuscation (iO) [START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF] when samplers are required to output a single circuit on left and right and include these two circuits explicitly in z. Differinginputs obfuscation (diO) [START_REF] Ananth | Differing-inputs obfuscation and applications[END_REF] is obtained if the predictor is also limited to run in polynomial time. It has been shown that, for point functions, the notions of CVGB and DI (without auxiliary information) are equivalent [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]Theorem 5.1]. Following a similar argument to the first part of the proof in [14, Theorem 5.1], we show in Appendix B that CVGB for any circuit family implies distributional indistinguishability even with auxiliary information for the same circuit family. Hence, our notion of DI obfuscation is potentially weaker than CVGB. This proof crucially relies on the restriction that samplers are required to be unpredictable in the presence of unbounded predictors. The proof of the converse direction in [14, Theorem 5.1] uses techniques specific to point functions and we leave a generalization to wider classes of circuits for future work. Proposition 2 (CVGB =⇒ DI). Any CVGB obfuscator for a class of circuits CSp is also DI secure with respect to all statistically unpredictable samplers for the same class CSp. We conclude this discussion by introducing a new notion of one-way point obfuscation that requires it to be infeasible to recover the point given many obfuscations of it. One-way point obfuscation. Let Obf be an obfuscator for a point circuit family CSp. We say Obf is OW secure if for every ppt adversary A Adv ow Obf,A (λ ) := Pr OW A Obf (1 λ ) ∈ Negl , where game OW A Obf (1 λ ) is shown in Fig. 5 on the right. The next proposition, proved in Appendix C, shows that OW is a weakening of DI for point circuits. Proposition 3 (DI =⇒ OW for point circuits). Let Obf be an obfuscator for a point circuit family CSp. If Obf is DI secure with respect to all ppt samplers, then it is also OW secure. Instantiations and obfuscation-based attacks. A concrete instantiation of a CVGB obfuscator for point functions with auxiliary information (AI) is given by Bitansky and Canetti [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]. This construction is based on the hardness of a variant of the DDH assumption called strong vector-DDH (SVDDH) assumption. The SVDDH assumption is an assumption that is formulated without reference to any auxiliary information. 
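For intuition, we recall that the group-based point obfuscator underlying the construction of Bitansky and Canetti outputs a pair (r, r^w) for a random group element r and accepts an input x iff r^x equals the stored value. The Python sketch below instantiates this over a toy Schnorr group; the parameters are ours and far too small for any security, and the composable construction additionally uses independent copies per obfuscated circuit.

import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 23, 11, 4                      # illustrative only, far too small

def obfuscate_point(w: int):
    # Canetti-style point obfuscation of w in Z_q: a pair (r, r^w) for random r.
    r = pow(g, 1 + secrets.randbelow(q - 1), p)
    return (r, pow(r, w, p))

def evaluate(obf, x: int) -> int:
    r, rw = obf
    return int(pow(r, x, p) == rw)

w = secrets.randbelow(q)
C_bar = obfuscate_point(w)
print(evaluate(C_bar, w), evaluate(C_bar, (w + 1) % q))   # 1 0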
Recently, Bellare, Stepanovs and Tessaro [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] have shown that the SVDDH assumption (and verifiable point obfuscation) in presence of arbitrary AI is in contention with the existence of VGB obfuscation for general circuits (that is, one of the two cannot exist). We take a moment to clarify how these two results relate to each other. In this discussion we assume that all obfuscation notions are considered for a single circuit only (i.e., we do not consider composability). First, note that the notion of AIPO (auxiliary-information point obfuscation) used in [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] follows a notion equivalent to distributional indistinguishability where the right distribution is fixed to be uniform. As shown in [14, Theorem 5.1] any point obfuscation (without AI) is equivalent to VGB point obfuscation. It is also shown in [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]Proposition A.3] that VGB obfuscation without AI is equivalent to VGB obfuscation with AI for any circuit class. (Intuitively, to construct a simulator that works for all possible AI, one uses the fact that the simulator is unbounded to find the best simulator that works for a non-uniform adversary that takes a value of the AI as advice.) Together with Proposition 2 above we get that all these notions (in their non-composable variants) are equivalent. This then raises the question whether the results of [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] are also in contention with PO without AI. To see that this does not follow from the equivalence of notions, note that in Proposition 2 we crucially rely on a predictor that runs a possibly unbounded simulator. Put differently, the AI must be statistically unpredictable. Indeed, the results of [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] rely on special forms of AI which only computationally hide the sampled point. (Roughly speaking, the AI contains a VGB obfuscation of a (non-point) circuit that depends on the sampled point.) To avoid such feasibility problems, and in line with the above equivalence results, we constrain auxiliary information to be statistically unpredictable throughout this work. Hyperplane membership. Let p be a prime and d a positive integer. Given a vector a ∈ Z d p , consider the hyperplane membership circuit C[a](x) := 1 if x, a = 0; 0 otherwise. In Appendix G we build on the results of [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF][START_REF] Canetti | Obfuscation of hyperplane membership[END_REF] to construct a DI-secure obfuscator for this family of circuits under a generalization of the Strong Vector DDH (SVDDH) assumption used in [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]. In order to avoid attacks similar to the one described in [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF] that puts a one element instance of SVDDH with arbitrary auxiliary information (or AI-DHI assumption, as referred to by [START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF]) in contention with the existence of VGB obfuscators supporting specific classes of circuits, we assume that our generalized SVDDH assumption holds only in the presence of random auxiliary information. 
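In code, the hyperplane membership circuit is just a zero test of an inner product modulo p; the short Python rendering below (evaluation only, with toy parameters of our own choosing) fixes the notation.

import secrets

p, d = 101, 4                            # toy modulus and dimension

def hyperplane_circuit(a):
    # C[a](x) = 1 iff <x, a> = 0 (mod p)
    return lambda x: int(sum(ai * xi for ai, xi in zip(a, x)) % p == 0)

a = [1 + secrets.randbelow(p - 1) for _ in range(d)]
C = hyperplane_circuit(a)
on_plane = [0] * d                       # the zero vector always lies on the hyperplane
off_plane = [secrets.randbelow(p) for _ in range(d)]
print(C(on_plane), C(off_plane))         # 1, and almost surely 0

As noted above, we only assume the generalized SVDDH assumption, and hence the DI security of the obfuscator for this class given in Appendix G, in the presence of random auxiliary information.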
This immediately translates to an obfuscator that tolerates the same type of leakage, which is enough to serve as a candidate to instantiate our functionality-agnostic constructions and obtain private inner-product encryption schemes, from which it is known how to derive expressive predicates that include equality tests, conjunctions, disjunctions and evaluation of CNF and DNF formulas (among others) [START_REF] Katz | Predicate encryption supporting disjunctions, polynomial equations, and inner products[END_REF].

Function Privacy: A Unified Approach

We now define what function privacy for general functional encryption schemes means and derive the model specific to keyword search schemes by restriction to point circuit families. Our definition follows the indistinguishability-based approach to defining FE security and comes with an analogous legitimacy condition that prevents the adversary from learning the challenge bit simply by extracting a token for a circuit that has differing outputs for the left and right challenge messages. The model extends the IND-CPA game via a left-or-right (LR) oracle that returns ciphertexts and tokens for possibly correlated messages and circuits. Since the adversary in this game has access to tokens that depend on the challenge bit, we use the unpredictability framework of Section 3 to rule out trivial guess attacks. The game follows a left-or-right rather than a real-or-random formulation of the challenge oracle [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF][START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] as this choice frees the definition from restrictions that must be imposed to render samplers compatible with uniform distribution over circuits. In particular, it allows the sampler to output low-entropy circuits as long as they are functionally-equivalent on left and right. It also allows analyzing security under repetitions of functionally-equivalent circuits in the presence of correlated messages, which until now were properties captured separately by unlinkability [START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF] and enhanced security [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF], and never considered together, not even for the simple case of point functions.

PRIV A,S FE (1 λ ): (msk, mpk) ←$ FE.Gen(1 λ ); b ←$ {0, 1}; b' ←$ A LR,TGen (mpk); return (b = b')
LR(st): (C 0 , C 1 , m 0 , m 1 , z) ←$ S(st); TList ← TList : (C 0 , C 1 ); MList ← MList : (m 0 , m 1 ); tk ←$ FE.TGen(msk, C b ); c ←$ FE.Enc(mpk, m b ); return (tk, c, z)
TGen(C): TList ← TList : (C, C); tk ←$ FE.TGen(msk, C); return tk
Fig. 6. Game defining enhanced privacy of a functional encryption scheme FE.
The sampler allows us to model, within a single game, (a) token-only adversarial strategies via samplers that output no message, as the non-enhanced security model in [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] and those in [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF]; (b) adversarial strategies that admit simple correlations between encrypted messages and extracted circuits, as the enhanced security model in [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] for point circuits that allows the adversary to obtain ciphertexts that match the tokens; (c) adversarial strategies that admit arbitrary correlations between extracted circuits and encrypted messages (i.e., not only exact matches). Our model is functionality-agnostic and unifies all previous indistinguishability-based models in this area. When restricted to point circuits or inner-products families, it gives rise to a new privacy notion that offers significant improvements over those in prior works [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF][START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF][START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF]. PRIV security. A functional encryption scheme FE is PRIV secure if, for every unpredictable ppt sampler8 S and every ppt adversary A Adv priv FE,A,S (λ) := 2 • Pr PRIV A,S FE (1 λ ) -1 ∈ Negl , where game PRIV A,S FE (1 λ ) is defined in Fig. 6. We exclude adversaries (A, S) that attempt to trivially win the PRIV game via decryption tokens, by either extracting them explicitly via the tokengeneration oracle, or implicitly via the left-or-right oracle. Formally, the pair (A, S) is legitimate if, with overwhelming probability ∀(C 0 , C 1 ) ∈ TList , ∀(m 0 , m 1 ) ∈ MList : C 0 (m 0 ) = C 1 (m 1 ) . Note also that for two sampler classes S 1 and S 2 with S 1 ⊂ S 2 security with respect to samplers in S 2 is a stronger security guarantee that one for those only in S 1 . In particular a stronger restriction on sampler classes results in a weaker definition. The definition also provides the adversary with the ability to adaptively obtain multiple challenges and tokens. However, similarly to unpredictability, a hybrid argument shows that for (stateless) samplers the definition self-composes and we consider the simpler single-shot game in the remainder of the paper. Restricted PRIV and PRIV-TO. We call an adversary token-only if S does not output any messages, and call the resulting security notion PRIV-TO. Note that, for token-only adversaries, the additional legitimacy constraint above is redundant. We call an adversary restricted if for every second-phase TGen query C 2 there is a first-phase TGen query C 1 such that C 2 (m b ) = C 1 (m b ) for b ∈ {0, 1}. Intuitively, this amounts to imposing that images exposed via second-stage queries (i.e., those placed after receiving the challenge) can reveal no more than the images obtained in the first stage (i.e., from queries placed before receiving the challenge). We call the resulting security notion Res-PRIV. 
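The legitimacy condition stated above is purely syntactic and can be checked mechanically from the two lists maintained by the game. The minimal Python sketch below (circuits as callables, names ours) does exactly that, and shows a token query that would be ruled out.

def legitimate(t_list, m_list):
    # PRIV legitimacy: every extracted circuit pair must agree on every
    # challenge message pair, i.e. C0(m0) == C1(m1) for all combinations.
    return all(C0(m0) == C1(m1)
               for (C0, C1) in t_list
               for (m0, m1) in m_list)

# Example with point circuits: a token for the left challenge keyword is
# illegitimate, because it evaluates differently on the two challenge messages.
point = lambda w: (lambda m: int(m == w))
t_list = [(point("alice"), point("alice"))]          # TGen query recorded as (C, C)
m_list = [("alice", "bob")]                          # challenge message pair
print(legitimate(t_list, m_list))                    # False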
We emphasize that the Res-PRIV model inherits many of the strengths of the full PRIV model such as arbitrary correlations and a wide range of adaptive token queries.

Keyword search. Two important aspects of our definition are that it considers (1) challenge keywords that do not match any of the encrypted messages and challenge messages that do not match any of the keywords (we call these keywords and messages unpaired); and (2) low-entropy messages/keywords that are correlated with the high-entropy searches whose privacy must be protected. The former aspect entails that the full equality pattern of challenge messages and keywords may remain hidden from the adversary (and hence a wider class of non-trivial attacks can be launched). Although the adversary always obtains the image matrix resulting from evaluating tokens on ciphertexts (and hence sees the equality pattern between paired challenge keywords and messages), the repetition patterns among unpaired messages or unpaired keywords are not necessarily leaked. In practice, this repetition pattern may reveal sensitive information as well. Low-entropy messages and keywords model the presence of ciphertexts and tokens in the system, over which the uncertainty of the adversary may be small, but which are correlated with sensitive data that must still be protected. Indeed, our unpredictability notion allows the sampler to output such low-entropy keywords and messages as long as low-entropy keywords are equal on left and right. A real-or-random modeling of this setting cannot capture this scenario. When low-entropy messages differ on the left and right, the adversary cannot learn them via the TGen oracle due to the legitimacy condition: imposing that they are not leaked maps to IND-CPA security. When they are equal on the left and right, they can be learned by successive queries to the token extraction oracle, which permits capturing attack scenarios where adaptive searches over low-entropy correlated messages may be carried out. In particular, this permits an adversary to recover a correlated repetition search pattern after the PRIV challenge has been revealed. As a result, low-entropy messages and keywords are tolerated, even when correlated with other messages or keywords. Furthermore, the values and equality patterns of high-entropy keywords are protected, as well as those of all encrypted messages for which a token was not explicitly extracted. Our main results in Sections 6.3 and 6.4 show the existence of keyword search schemes which are secure in the aforementioned scenarios.

On revealing images. The outputs of challenge circuits on challenge messages can always be computed by the adversary, and by imposing equality of images we ensure that they do not lead to trivial distinguishing attacks. (This is similar to the legitimacy condition in FE security models.) It is however less clear why these image values should be explicitly provided to the predictor in the unpredictability game, even when they are equal for left and right circuit-message pairs. To see this, consider the sampler that for a random word w outputs
w 0 = w ,  w 1 = w̄ ,  m 0,i := w if w[i] = 1 and w̄ otherwise ,  m 1,i := w̄ if w[i] = 1 and w otherwise ,
where w̄ denotes the bitwise complement of w. Note that
C[w 0 ](m 0,i ) = C[w 1 ](m 1,i ) = w[i] .
Finally, without access to the images C[w 0 ](m 0,i ) the sampler can be shown to be unpredictable as w is chosen randomly. On the other hand, in the presence of images, the sampler is trivially predictable. This counterexample is similar to that briefly discussed in [START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF] and can be modified to show that the enhanced model of Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] for the so-called (k 1 , . . . , k T )-distributions is not achievable.

Relations among notions. Clearly PRIV implies its weaker variant Res-PRIV, which in turn implies PRIV-TO. It is not too difficult to see that PRIV also implies IND-CPA. A noteworthy consequence of this is that for all-or-nothing functionalities (such as PEKS, IBE or ABE) any PRIV-secure construction is also index hiding (a.k.a. anonymous), whereby ciphertexts do not leak any information about their intended recipients (i.e., about tokens that may permit recovering the payload). Res-PRIV would imply a restricted analogue of IND-CPA (where images in the second phase should match one in the first phase), which for point functions is equivalent to the standard IND-CPA model. IND-CPA security does not imply PRIV-TO: consider an IND-CPA-secure scheme that is modified to append circuits in the clear to their tokens. PRIV-TO does not imply IND-CPA either: consider a PRIV-TO-secure scheme that is modified to return messages in the clear with ciphertexts. (Note that these separations hold even for point functions.) Figure 7 summarizes relations among notions of security.

6 Constructions

The Obfuscate-Extract (OX) transform

Our first construction formalizes the intuition that obfuscating circuits before computing a token for them will provide some form of token privacy.

The OX transform. Let Obf be an obfuscator supporting a circuit family CSp and let FE be a functional encryption scheme supporting all polynomial-size circuits. We construct a functional encryption scheme OX[FE, Obf] via the OX transform as follows. Setup, encryption and evaluation algorithms are identical to those of the base functional encryption scheme. The token-generation algorithm creates a token for the circuit that results from obfuscating the extracted circuit, i.e., OX[FE, Obf].TGen(msk, C) := FE.TGen(msk, Obf(1 λ , C)). Correctness of this construction follows from those of its underlying components. We now show that this construction yields function privacy against PRIV-TO adversaries. Since PRIV-TO does not imply IND-CPA security (see the discussion in Section 5), we establish IND-CPA security independently. The proof of the following theorem is straightforward and results from direct reductions to the base FE and Obf schemes used in the construction. A formal proof can be found in Appendix E.

Theorem 1 (OX is PRIV-TO ∧ IND-CPA). If obfuscator Obf is DI secure, then scheme OX[FE, Obf] is PRIV-TO secure. Furthermore, if FE is IND-CPA secure, then OX[FE, Obf] is IND-CPA secure.

We note that this proof holds for arbitrary classes of circuits and arbitrary (circuits-only) samplers. Using the composable VGB point-function obfuscator of Bitansky and Canetti [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] and any secure functional encryption scheme that is powerful enough to support one exponentiation and one equality test (e.g., supports NC 1 circuits) we obtain a private keyword search scheme in the presence of tokens for arbitrarily correlated keywords.
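Since OX only changes token generation, it can be phrased as a thin wrapper around the two building blocks. The Python sketch below assumes generic fe and obf objects with the obvious methods (interface names are ours) and is meant only to show where the obfuscation step sits.

class OX:
    # Obfuscate-Extract: identical to the base FE scheme except that tokens
    # are extracted for an obfuscation of the queried circuit.
    def __init__(self, fe, obf):
        self.fe, self.obf = fe, obf

    def gen(self, sec_par):
        return self.fe.gen(sec_par)                  # (msk, mpk) unchanged

    def enc(self, mpk, m):
        return self.fe.enc(mpk, m)                   # encryption unchanged

    def tgen(self, msk, circuit):
        return self.fe.tgen(msk, self.obf(circuit))  # OX.TGen(msk, C) = FE.TGen(msk, Obf(C))

    def evaluate(self, ciphertext, token):
        return self.fe.evaluate(ciphertext, token)   # evaluation unchanged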
If the underlying functional encryption scheme supports the more powerful functionality that permits attaching a payload to the point, one obtains a PRIV-TO anonymous identity-based encryption scheme where arbitrary correlations are tolerated. In this case, on input (ID, m), the functionality supported by the underlying FE scheme would return m if C(ID) = 1, where C was sampled from Obf(C[ID ]) during token generation; it would return ⊥ otherwise. The above theorem shows that DI is sufficient to build a PRIV-TO scheme. It is however easy to see that the existence of a single-circuit DI obfuscator is also necessary. Indeed, given any PRIV-TO scheme FE we can DI-obfuscate a single circuit C by generating a fresh FE key pair, and outputting FE.Eval(•, tk) where tk is a token for C. A formal proof of this argument appears in Appendix D. Proposition 4 (PRIV-TO vs. DI). A PRIV-TO-secure functional encryption for a circuits family CSp exists if a DI obfuscator for CSp exists. Conversely, a single-circuit DI obfuscator for CSp exists if a PRIV-TO-secure functional encryption for CSp exists. A similar line of reasoning shows that the extractor-based constructions of private FE by Boneh, Raghunathan and Segev [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF] and Arriaga, Tang and Ryan [START_REF] Arriaga | Trapdoor privacy in asymmetric searchable encryption schemes[END_REF] give rise to single-circuit DI obfuscators for point functions for the specific classes of samplers considered in those works. Agrawal et al. [START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] have proposed a simulation-based definition of privacy that strikes a different balance between practical relevance and feasibility. However, the definition in [START_REF] Agrawal | On the practical security of inner product functional encryption[END_REF] implies VBB obfuscation, which is known to be feasible only for restricted classes of circuits [START_REF] Brakerski | Black-box obfuscation for d-CNFs[END_REF][START_REF] Barak | Obfuscation for evasive functions[END_REF], in idealized models of computation [START_REF] Canetti | Obfuscating branching programs using black-box pseudo-free groups[END_REF][START_REF] Brakerski | Virtual black-box obfuscation for all circuits via generic graded encoding[END_REF][START_REF] Barak | Protecting obfuscation against algebraic attacks[END_REF] or with restricted forms of auxiliary information. The above proposition shows that our model is closer to the weaker form of DI obfuscation, which as shown in Proposition 2 is implied by VGB (and hence VBB) obfuscation, and is therefore more amenable to instantiations in the standard model. It is possible to show via separating counterexamples that IND-CPA and PRIV-TO do not jointly suffice to give PRIV security. (Roughly speaking, one considers a malicious token-generation oracle that on specially crafted words leads to a break of PRIV security.) We will explore the feasibility of PRIV security in the next sections. The Trojan-Obfuscate-Extract (TOX) transform We now present a generic construction that achieves Res-PRIV security for a class of samplers that we call concentrated. 
To this end, we build on the ideas from [START_REF] Ananth | From selective to adaptive security in functional encryption[END_REF][START_REF] Caro | On the achievability of simulation-based security for functional encryption[END_REF] on converting selective to adaptive security and achieving simulation-based security from IND-CPA security for FE schemes. The TOX transform. Given a symmetric encryption scheme SE, a general-purpose obfuscator Obf and a functional encryption FE for all circuits, our Trojan-Obfuscate-Extract (TOX) transform operates as follows. The master public key of the scheme is the same as that of the base FE scheme. Its master secret key includes a symmetric key k and the master secret key for the base FE scheme. To encrypt a message m we call the base FE encryption routine on (0, 0 λ , m). To generate a token for a circuit C, we first generate an obfuscation C ←$ Obf(C), a ciphertext c ←$ SE.Enc(k, 0 n ) and construct the following circuit. Troj[ C, c](b, k, m) := C(m) if b = 0 ; C * (m) if b = 1, where C * = SE.Dec(k, c) . Finally, we extract a token for Troj[ C, c]. Evaluation simply invokes the corresponding operation in the underlying FE. The correctness and IND-CPA security of this construction follow easily from the correctness and IND-CPA security of the underlying functional encryption scheme via straightforward reductions. Intuitively, during the normal operation of the scheme, the tokens in the construction will simply evaluate an obfuscation of the extracted circuit. In the proof of privacy, however, we will take advantage of the fact that a totally independent circuit can be hidden inside the token within the symmetric encryption ciphertext, and unlocked by a message containing the correct symmetric decryption key. For the proof to go through, the hidden circuit must be carefully selected so that the legitimacy condition is observed throughout. In order to meet this latter restriction, we consider the following constrained class of samplers. Concentrated samplers. We say a sampler S is S * -concentrated if for all st, all CSp λ -vectors C we have that Pr [C(m 0 ) = C(m 1 ) = C(m * )] ∈ Negl and Pr [C 0 (m 0 ) = C * (m * )] ∈ Negl , where the probability space of these is defined by the operations (C * , m * ) ←$ S * (z, C) and (C 0 , C 1 , m 0 , m 1 , z) ←$ S(st). Concentration is a property independent of unpredictability and we will be relying on both in our construction. Unpredictability is used in the reduction to the DI assumption. Concentration guarantees the existence of a sampler S * that generates circuits C * and messages m * which permit decoupling circuits and messages in the security proof. Intuitively, quantification over all C means that adversarially generated circuits will lead to image matrices that collide with those leaked by the sampler with overwhelming probability. The additional restriction on C * (m * ) guarantees that one can switch from the honest branch of challenge tokens to one corresponding to the trojan branch. Both of these properties are important to guarantee legitimacy when making a reduction to the security of the FE scheme. We however need to impose that legitimacy also holds for secondphase TGen queries as well, and this is where we need to assume Res-PRIV security: the extra legitimacy condition allows us to ensure that by moving to m * the legitimacy condition is not affected in the second phase either. 
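The trojan circuit at the core of TOX can be written down directly. The Python sketch below treats circuits as callables and assumes hypothetical helpers sym_decrypt and circuit_from_description for the symmetric scheme and for interpreting a decrypted string as a circuit; during normal operation c encrypts 0 n, so only the b = 0 branch is meaningful, while the b = 1 branch is the hook exploited in the proof.

def make_trojan(obfuscated_C, c, sym_decrypt, circuit_from_description):
    # Troj[C_bar, c](b, k, m): the honest branch runs the obfuscated circuit;
    # the trojan branch decrypts c under k and runs the recovered circuit.
    def troj(b, k, m):
        if b == 0:
            return obfuscated_C(m)            # honest branch: C_bar(m)
        hidden = circuit_from_description(sym_decrypt(k, c))
        return hidden(m)                      # trojan branch: C*(m)
    return troj

A token in TOX is then extracted for this circuit using the underlying FE scheme.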
Finally, an important observation is that, because we are dealing with concentrated samplers, our security proof goes through assuming obfuscators that need only tolerate random auxiliary information. Theorem 2 (TOX is Res-PRIV). If obfuscator Obf is DI secure, SE is IND-CPA secure and FE is IND-CPA secure, then scheme TOX[FE, Obf, SE] is Res-PRIV secure with respect to concentrated samplers. Proof. The proof proceeds via a sequence of three games as follows. Game 0 : This game is identical to Res-PRIV: challenge vector C b is extracted and m b is encrypted for a random bit b and for all TGen queries, string 0 n is encrypted using SE in the trojan branch. Game 1 : In this game, instead of 0 n we encrypt the circuits queried to the (first or second-phase) TGen oracle under a symmetric key k * in the trojan branch. In the challenge phase, we sample (C * , m * ) ←$ S * (z, C), where C are all first-phase TGen queries, and encrypt C * under k * for the challenge circuits in the trojan branch. This transition is negligible down to IND-CPA security of SE. Game 2 : In this game, instead of encrypting (0, 0, m b ) we encrypt (1, k * , m * ) in the challenge phase where the latter is generated using S * (z, C). We reduce this hop to the IND-CPA security of FE. We generate a key k * , answer first-stage TGen queries using the provided TGen oracle and encrypt circuits under k * in the trojan branch to get st. We run S(st) and get (C 0 , m 0 , C 1 , m 1 , z). We then run S * (z, C), where C are all first-phase TGen queries, to get (C * , m * ). We prepare challenges tokens by encrypting C * under k * in the trojan branch and using the provided TGen oracle we generate the challenge tokens. We query the provided LR on (0, 0, m b ) and (1, k * , m * ) and receive the corresponding vector of ciphertexts. Second-stage TGen queries are handled using provided TGen oracle and k * . Finally, we return the same bit that the distinguisher returns. Legitimacy of first-stage TGen queries follows from the first condition on concentration that with high probability C(m b ) = C(m * ). For the challenge tokens, this follows from the second concentration requirement that C b (m b ) = C * (m * ). For the second-stage queries we rely on the restriction on the adversary. Recall that in the Res-PRIV model, any second-stage queries must have an image vector which matches one for a first-stage query. Since the first-stage images match those on m * (and hence are legitimate), the second-stage ones will be also legitimate. We output (b = b) where the distinguisher outputs b . As a result of this game, the challenge messages no longer depend on b. It is easy to see that according to the IND-CPA challenge bit this reduction interpolates between games Game 1 and Game 2 . Game 3 : In this game we use C 1 in challenge token generation even if b = 0. We show this hop in unnoticeable down to the security of the obfuscator. We sample an FE key pair and a symmetric key and simulating the first-stage TGen queries for the adversary as before. We define a DI sampler that outputs the circuits that the Res-PRIV sampler outputs, but extends the circuit list to include another copy of C 1 on both sides. This sampler also outputs as auxiliary information z the original auxiliary information output by the PRIV sampler, extended with the random coins used to generate the FE key, the symmetric key, and to run the first stage of the adversary (this will allow the second stage DI adversary to reconstruct the keys and first stage TGen queries). 
It follows that this sampler is unpredictable as long as the Res-PRIV sampler is. When we receive the obfuscations and z , we generate (C * , m * ) ←$ S * (z, C), where C are all first-phase TGen queries. We form the challenge tokens using the received obfuscations and C * , taking the C 1 obfuscations from the duplicated part of the challenge, and the C 0 obfuscations from the original part (these can now be either C 0 or C 1 depending on the external challenge bit). Challenge ciphertexts are generated by encrypting m * (rules of Game 2 ). We answer the second-stage TGen queries using the FE key and the symmetric key. We return whatever the distinguisher returns. It is easy to see that according to the DI challenge bit this reduction interpolates between games Game 2 and Game 3 . In Game 3 both the challenge tokens and challenge ciphertexts are independent of the bit b and hence the advantage of any adversary is 0. Examples. Consider keyword samplers which output high-entropy keywords and messages with arbitrary image matrices. All such samplers are concentrated around a sampler S * that outputs uniformly random keywords and messages subject to the same image pattern. The second concentration condition is immediate and the first follows from the fact that all messages and circuits have high entropy and C is selectively chosen. Although this argument can be extended to samplers outputting low-entropy keywords whose complete image matrix is predictable or is included in z, the latter requirement may not always be the case in general. Consider, for example, a vector C consisting of circuits for w = 0 n and messages m 0 = m 1 whose components are randomly set to 0 n and 1 n . The image matrix in this setting is unpredictable as long as a sufficiently large number of messages are output. As another example, consider hyperplane membership circuits C[v](w) that return 1 iff v, w = 0 (mod p) for a prime p. Samplers which output n vectors v i ∈ Z d p and m messages w i ∈ Z d p where all vector entries have high entropy can be easily shown to be unpredictable. Given the corresponding n × m image matrix, whenever d(n + m) > nm, a high-entropy pre-image to the image matrix can be sampled as the system will be underdetermined. Under this condition, the second requirement needed for concentration is met, and the first condition follows as this pre-image is high entropy and C is selectively chosen. This observation implies that a DI obfuscator for the hyperplane membership problem will immediately yield a private functional encryption scheme for the same functionality under arbitrary correlations via the TOX construction, a problem that was left open in [START_REF] Boneh | Function-private subspace-membership encryption and its applications[END_REF]. In Appendix G we give a direct construction of a DI obfuscator for hyperplane membership by proving that the obfuscator of Canetti, Rothblum and Varia [START_REF] Canetti | Obfuscation of hyperplane membership[END_REF] is DI secure in the presence of random auxiliary information under a variant of the DDH assumption in the style of those used in [START_REF] Canetti | Obfuscation of hyperplane membership[END_REF][START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF]. The Disjunctively-Obfuscate-Extract (DOX) transform In this section, we present a construction specific to point functions. 
We were able to remove the limitation of the TOX transform that provides security guarantees only against concentrated samplers, and achieve privacy in the presence of arbitrary correlations between searched keywords and encrypted messages. Our construction demands less from the underlying functional encryption and obfuscator, and hence can potentially allow more efficient instantiations of these primitives. The DOX transform. Let Obf be an obfuscator supporting a point circuit family CSp over message space MSp. Let FE be a functional encryption scheme supporting general circuits, and let PRP be a pseudorandom permutation (see Section 2.1 for the formal definition). We construct a keyword search scheme KS for keyword space WSp = MSp via the Disjunctively-Obfuscate-Extract (DOX) transform as follows. The key-generation algorithm samples a PRP key k ←$ K(1 λ ) and an FE key pair (msk, mpk) ←$ FE.Gen(1 λ ). It returns ((k, msk), mpk). The encryption operation is identical to that of the FE scheme. The test algorithm is identical to the evaluation algorithm of FE. The token-generation algorithm computes FE.TGen(msk, Obf(1 λ , C[w]) ∨ Obf(1 λ , C[E(k, w)])) . The FE-extracted circuits are two-point circuits implemented as the disjunction of two obfuscated point functions. One of the points will correspond to the searched query, whereas the other point will be pseudorandom and will be only used for proofs of security. (In a loose sense, the second point represents the second branch in the TOX construction.) As in OX, the composable VGB obfuscator of [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] for point functions and any general-purpose functional encryption scheme (such as those in [START_REF] Goldwasser | Reusable garbled circuits and succinct functional encryption[END_REF][START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF]) can be used to instantiate the above construction. The supported circuit class would roughly amount to two parallel group operations and two comparisons in a DDH group. In Appendix F.1 we show that DOX is computationally correct. The proof relies on the fact that correctness remains intact unless the adversary finds one of the hidden PRP values, and the probability of the latter can be bounded by the one-way security of the obfuscator and the pseudorandomness of PRP. The proof of Res-PRIV security of this construction involves an intricate game hopping argument, in order to deal with all possible correlations allowed by the Res-PRIV model (which are the same as those allowed by full PRIV). We outline it below, highlighting how various ingredients are used in the construction, and provide a detailed proof in Appendix F.2. Theorem 3 (DOX is Res-PRIV). If FE is an IND-CPA-secure functional encryption scheme, PRP is a PRP-secure pseudorandom permutation family and Obf is a DI-secure obfuscator then scheme DOX[FE, PRP, Obf] is a Res-PRIV-secure keyword search scheme. Proof (Outline). The proof proceeds along six games as follows. Roughly speaking, after moving to a random permutation in Game 1 (and some bookkeeping in Game 2 ), in Game 3 we move from correlations between messages and keywords to their repetition patterns. In Game 4 , we use obfuscation to deal with repetitions among keywords that do not match any of the messages (and were not queried to TGen in first phase). 
In Game 5 , we use FE security to remove repetitions among messages that do not match any of the challenge keywords and were not queried to token-generation either (either due to legitimacy or adversarial restriction). Repetitions in all other cases can be dealt with using explicit values, the image matrix, or obfuscations. These steps make challenge ciphertexts independent of the challenge bit. In Game 6 , using the security of the obfuscator we move to a setting where challenge tokens are also independent of the bit. In Game 6 advantage of any adversary is 0. Game 0 : This game is identical to the Res-PRIV game. Game 1 : Instead of PRP, a truly random permutation is used in TGen. We simulate the random permutation via a lazily sampled table T . This transition is sound down to PRP security. Game 4 : Call a challenge keyword unpaired if it was not any of the challenge messages, and new if it is not queried to first-phase TGen. In this game, instead of T values we use forgetful random values for all new and unpaired keywords. We bound this hop using DI. We simulate first-phase TGen using a lazily sampled T and a msk. Next, we run the Res-PRIV sampler explicitly and identify all new unpaired keywords. We define a DI sampler to sample consistent values on left and forgetful values on right (both independently of T ), together with a second set consisting of sufficiently many consistent values on both sides. (The DI sampler does not need to respect any equality patterns.) This sampler can be shown to be statistically unpredictable. Once we receive the obfuscations, we use the first set, the explicit knowledge of challenge values and table T to form the challenge tokens and ciphertexts. For second-phase TGen queries we need to use consistent T values throughout. For values which match a first-phase query or a challenge messages we use T . If a query happens to match a new unpaired keywords-we can check this using the explicit knowledge of the keywords-we use a value from the second set of obfuscations. Otherwise we sample T values. We return 1 iff the adversary succeeds. Game 5 : Call a challenge message unpaired if it is not any of the challenge keywords, LR-identical if m * 0 = m 1 , LR-differing if not equal, and new if it is unpaired and not queried to first-phase TGen. In this game instead of T values we use forgetful values for all unpaired LR-differing messages and all new LR-identical messages. We bound this hop down to IND-CPA. We will use the provided TGen oracle and only need to set T -values correctly. For first-phase TGen queries we lazily sample T . Next we run the sampler explicitly to obtain the challenges. For paired keywords or messages we use T -consistent values. For new unpaired keywords we use forgetful values (rule in Game 4 ). For unpaired messages, if LR-identical and queried to first-phase TGen (hence not new) we also use consistent T values. For LR-differing or new LR-identical messages we call LR in FE game, asking for T -consistent values on the left and independent forgetful values on the right. Note that LR-differing messages and new LR-identical messages are not queried to TGen at all due to our restriction on the adversary. If a second-phase TGen query matches a forgetful value generated in computing the LR query, we stop and guess that forgetful values were encrypted. (These values are information theoretically hidden if not encrypted.) Otherwise, we return 1 iff the adversary succeeds. 
Game 6 : In this game, irrespective of the bit, we use the second set of keywords for challenge token generation. We reduce this transition to the DI game. First-phase TGen queries are answered using a lazily sampled T and a generated msk. We set the DI sampler to run the PRIV sampler and on top of the output keywords, also ask for obfuscations of messages that are at the same time LR-identical, unpaired and new (it also outputs the random coins of first stage adversary, key generation and token extraction, along with a full image matrix as extra auxiliary information that will be needed for the second stage simulation). Using the symmetry of roles for keywords and messages in point functions, this sampler can be shown to be unpredictable whenever the PRIV sampler is. The obfuscations of messages will allow us to check if any of these messages (hidden under the obfuscation) match a first-phase TGen query. We need this as according to the rules of Game 5 we must use T -consistent values. For paired messages, which we can find using the image matrix, we also use T -consistent values. For unpaired keywords we use forgetful values. For all other unpaired messages (be it LR-differing or never queried to TGen) we use forgetful values (Game 5 ). Second phase TGen queries are answered using T -consistent values relying on the fact that we can use the obfuscations to check matches with paired keywords and the restriction that adversary cannot query a new unpaired LR-identical messages to TGen. We return 1 iff the adversary succeeds. Challenge tokens in Game 6 are independent of the challenge bit. Due to the modifications in Game 4 and Game 5 , the challenge ciphertexts are also independent of it. To see this note that ciphertexts contain on left and right: (1) identical T -consistent values that follow the correct repetition pattern for paired massages; (2) forgetful (independent) values for LR-differing messages; (3) identical T -consistent values that follow the correct repetition pattern for LR-identical messages queried in the first stage; (4) forgetful (independent) values for LR-identical messages not queried in the first stage. The adversary, therefore, has zero advantage in this game. The Verifiably-Obfuscate-Encrypt-Extract (VOEX) transform We now present a fourth construction for point functions, which although simpler, conceptually relies on the observation that messages can be encoded as circuits that other circuits can evaluate. The obfuscator that we will rely on in our construction needs to be verifiable, meaning that there is an efficient algorithm to determine if a circuit C is an obfuscation of a point function C[m] for a message m ∈ MSp λ . This property can be easily added by attaching a NIZK proof that there exist (m, r) such that C = Obf(1 λ , C[m]; r). The VOEX transform. Let NIZK = (NIZK.Setup, NIZK.Prove, NIZK.Verify) be a non-interactive zero-knowledge proof system (see Section 2.4). Let Obf be an obfuscator supporting a circuit family CSp := {CSp 1 λ ∪ CSp 2 λ } λ∈N , where CSp 1 λ := {C[m] : m ∈ MSp λ } and CSp 2 λ := {D[crs, m] : m ∈ MSp λ , crs ∈ [NIZK.Setup(1 λ )]} with D[crs, w](C, π) := 1 if NIZK.Verify(crs, C, π) ∧ C(w) = 1 ; 0 otherwise. Let RSp := {RSp λ } λ∈N denote the randomness space of Obf. Let FE be a functional encryption scheme supporting general circuits. We construct a keyword search scheme KS := VOEX[FE, NIZK, Obf] via the Verifiably-Obfuscate-Encrypt-Extract (VOEX) transform for keyword space WSp := MSp as follows. 
Setup: Algorithm KS.Gen(1 λ ) generates a key pair (msk, mpk) ←$ FE.Gen(1 λ ) and a common reference string crs ←$ NIZK.Setup(1 λ ). It returns the key pair ((msk, crs), (mpk, crs)).
Encryption: Algorithm KS.Enc((mpk, crs), m) generates C ←$ Obf(1 λ , C[m]; r) for r ←$ RSp λ . It sets π ←$ NIZK.Prove(crs, C, (m, r)) and finally returns FE.Enc(mpk, (C, π)).
Token generation: Algorithm KS.TGen((msk, crs), w) generates a token for circuit D[crs, w] using the token-extraction algorithm FE.TGen and returns the result.
Evaluation: Algorithm KS.Test(c, tk) simply runs FE.Eval(c, tk).

Correctness of the construction follows from the correctness of the obfuscator and that of the functional encryption scheme, as well as the completeness of the proof system. Before presenting the theorem, we clarify the requirements on the underlying obfuscation scheme.

PRIV-restricted samplers. As shown in the work of Barak et al. [9], no 2-circuit general-purpose VBB obfuscator exists. This impossibility result can be extended to rule out general-purpose DI obfuscation as well and, in particular, DI obfuscation supporting the class of circuits we require for instantiating our construction above. Briefly, consider the circuits D[w](C) := C(w) and a sampler S that outputs circuits (D[w], C[w]) on the left and (D[w], C[w̄]) on the right for a uniform keyword w, where w̄ denotes the bitwise complement of w. This sampler can be shown to be unpredictable. However, the DI game can be won by evaluating (an obfuscation of) the first challenge circuit on an obfuscation of the second challenge circuit. For our particular construction, however, we rely on a weaker form of obfuscation that is only required to support samplers that output circuits and messages that are restricted by the PRIV legitimacy condition (this is a result of our reduction strategy). Concretely, such circuits and messages will result in image matrices that are identical on the left and right, which completely rules out attacks akin to those in [START_REF] Barak | On the (im)possibility of obfuscating programs[END_REF]. We call this class of DI samplers PRIV-restricted. Formally, a DI sampler S is PRIV-restricted for circuit class CSp if for a legitimate PRIV sampler S' and a non-interactive zero-knowledge proof system NIZK it operates as follows.
S(1 λ , st, crs) : (C 0 , C 1 , m 0 , m 1 , z) ←$ S'(1 λ , st); if crs ∉ [NIZK.Setup(1 λ )] return ([], [], ε); else return ((D[crs, C 0 ], C[m 0 ]), (D[crs, C 1 ], C[m 1 ]), (z, C 0 (m 0 )))
The PRIV security of the VOEX construction is established in the following theorem.

Theorem 4 (VOEX is PRIV secure). If FE is IND-CPA secure, NIZK is perfectly sound and computationally zero-knowledge, and obfuscator Obf is DI secure with respect to PRIV-restricted samplers, then scheme VOEX[FE, Obf, NIZK] is PRIV secure.

Proof (Outline). The proof follows a sequence of games as follows.

Game 0 : This is the PRIV game for the VOEX construction.

Game 1 : We say a message m b [i] is unpaired if m b [i] ∉ w b . Note that for all legitimate samplers if m 0 [i] is unpaired, so is m 1 [i]. In this game, the LR oracle replaces all unpaired messages (on both sides) which are LR-differing (that is, when m 0 [i] ≠ m 1 [i]) with random and independently sampled values. The distance to the previous game can be upper bounded using the IND-CPA security of the FE scheme.
The legitimacy of the algorithm playing the IND-CPA game in the reduction is guaranteed because: (1) replaced messages are LR-differing and therefore the adversary cannot ask for tokens for those in the PRIV game (and hence also in the IND-CPA game); (2) replacements are random and information-theoretically hidden from the adversary when the original messages are encrypted, and if the adversary asks for a token for one of the random replacements, it can only be because the ciphertexts are leaking one of these replacements.

Game 2 : In this game we use Sim to generate simulated proofs in the LR oracle without using the explicit knowledge of the messages. The distance to the previous game can be bounded by the zero-knowledge property of the NIZK proof system.

Game 3 : In this game, regardless of bit b, we use the second set of keywords and messages to generate the challenge. We reduce this transition to the DI game. We set the DI sampler to take a crs along with the state st required to run the PRIV sampler; it runs the PRIV sampler to obtain keywords and messages, and it outputs a D circuit (with a hardwired crs) for every keyword and a C circuit for every message (after carefully replacing LR-differing unpaired messages with random values as in Game 1 ). This DI sampler is by definition PRIV-restricted and it is unpredictable whenever the underlying PRIV sampler is unpredictable. The proof of this fact relies on the perfect soundness of the proof system under the binding crs, whose validity we assume can be efficiently checked [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. The PRIV predictor uses its oracle to answer the DI predictor's queries (if a query contains an obfuscated point circuit and the attached proof verifies, the unbounded PRIV predictor can reverse-engineer the obfuscated circuit to recover its underlying point, and query its own oracle on it). The adversary against the DI game simulates the environment of Game 2 as follows. It generates a key pair (msk, mpk) and a simulated (crs, tp) for the NIZK, and then it runs the first stage of the PRIV adversary until it obtains state st for the LR oracle call. It then calls its own LR oracle on (st, crs), obtaining a set of obfuscations. As before, the trapdoor tp is used to produce simulated proofs of the obfuscations of C circuits corresponding to messages, resulting in well-formed challenge ciphertexts. It then runs the second stage of the PRIV adversary, answering its token extraction queries using the master secret key and, when this adversary returns a bit b', it uses it as its own guess. In Game 3 , the challenge is independent of bit b and therefore the adversary has zero advantage.

Concluding Remarks

The main open problem we leave for future work is to construct functional encryption schemes that achieve full PRIV security under milder assumptions or are more efficient under restricted versions of the PRIV model. For general circuits, a possible path towards a solution to this open problem would be to consider the FE construction of Garg et al. [START_REF] Garg | Candidate indistinguishability obfuscation and functional encryption for all circuits[END_REF]. There, a token for a circuit C is (roughly speaking) an indistinguishability obfuscation of the circuit C(PKDec(sk, •)) for a PKE decryption circuit PKDec.
A natural question is whether this construction already achieves some form of privacy under the conjecture that the indistinguishability obfuscator achieves VGB obfuscation [15, Section 1.1]. For specific classes, one can follow the various constructions presented here and explore variations and optimizations of their underlying primitives. Indeed, since Res-PRIV constitutes a very mild weakening of PRIV, it could be that a modification of it allows the proof of security to be extended to the PRIV model. Finally, we note that our work leaves open the task of formalizing and realizing privacy notions for more expressive cryptographic primitives such as multi-input or randomized FE schemes. A Taxonomy of Samplers We define a number of special classes of samplers by imposing structural restrictions on their internal operation. Stateless. The sampler does not keep any internal state and uses independent set of coins on each invocation. All samplers will be stateless in this work unless stated otherwise. (t, s)-bounded. For polynomials t and s, with overwhelming probability |C 0 | = |C 1 | ≤ t(λ) and |m 0 | = |m 1 | ≤ s(λ) . Circuits-only. The sampler outputs no messages with overwhelming probability, i.e. it is (•, 0)bounded. One-sided. C 0 = C 1 and m 0 = m 1 with overwhelming probability. In this case we will simply write (C, m, z) ←$ S(1 λ , st) for the sampling operation. Note that every one-sided sampler is trivially unpredictable. Input-independent. For any 1 λ and st, S(1 λ , st) = S(1 λ , ε) with overwhelming probability. Aux-free. With overwhelming probability z = ε. Simple. If the sampler is both aux-free and input-independent. Random-aux. For a polynomial poly and a ppt algorithm S the sampler takes the form S(1 λ , st) : z ←$ {0, 1} poly(λ) ; (C 0 , C 1 , m 0 , m 1 ) ←$ S (1 λ , z, st); return (C 0 , C 1 , m 0 , m 1 , z) . Differing-inputs. With overwhelming probability z contains the sampler's output circuits (C 0 , C 1 ). Note that statistical unpredictability would imply that the sampled circuits are functionally equivalent, whereas computational unpredictability would lead to a notion of differing-inputs samplers used to formulate differing-inputs obfuscation [START_REF] Ananth | Differing-inputs obfuscation and applications[END_REF][START_REF] Bellare | Poly-many hardcore bits for any one-way function and a framework for differing-inputs obfuscation[END_REF]. Block-source. A t-block-source is a random variable X = (X 1 , . . . , X t ) where for every j ∈ [t] and x 1 , . . . , x j-1 it holds that X j | X 1 =x 1 ,...,X j-1 =x j-1 has high min-entropy. There is therefore sufficient decorrelation between different components in such a distribution. We can model block sources in our framework by restricting attention to ppt samplers that take the form S(1 λ , st) : (C 0 , C 1 ) ←$ S (1 λ , st); j ←$ [t]; return ((C 0 [j], C 1 [j]), (C 0 [1..(j -1)], C 1 [1..(j -1)])) where S is a (t, 0)-bounded sampler. The rationale here is that any indistinguishability-based security definition that imposes an adversary to output two block sources, and later on distinguish some computation performed on the sampled values, e.g. [START_REF] Boneh | Function-private identity-based encryption: Hiding the function in functional encryption[END_REF], would remain the same if a sampler such as the one above was used instead (note that in this case, the adversary can only have an advantage when outputting distributions that component-wise differ with non-negligible probability). 
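The block-source sampler just described admits a direct rendering in code. The Python sketch below wraps an assumed (t, 0)-bounded base sampler (returning two length-t circuit vectors) exactly as above, releasing the preceding blocks on each side as auxiliary information.

import secrets

def block_source_sampler(base_sampler, t):
    # Wrap a (t,0)-bounded sampler S': output the j-th circuit pair as the
    # challenge and the first j-1 pairs on each side as auxiliary information.
    def S(st):
        C0, C1 = base_sampler(st)        # two length-t circuit vectors
        j = 1 + secrets.randbelow(t)     # j uniform in [t]
        return [C0[j - 1]], [C1[j - 1]], (C0[:j - 1], C1[:j - 1])
    return S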
B Proof of Proposition 2: CVGB =⇒ DI Proof. Let (S, A) be a DI adversary against the obfuscator. We show that the advantage of A must be negligible if S is unpredictable and the obfuscator is CVGB secure. Also, let RSp λ denote the randomness space of A. Consider a one-sided circuit sampler S that selects r ←$ RSp λ , runs A(1 λ ; r) until it outputs st, runs S(1 λ , st), chooses a bit b uniformly at random, and outputs the left or right outputs of S according to the bit b, along with auxiliary information z and coins r. Let B be a CVGB-Real adversary that runs A on the same coins and answers Sam oracle query with its challenge vector of obuscations. B outputs whatever A outputs. By the CVGB property, for (S , B) there is a (possibly unbounded) simulator Sim such that: Adv cvgb Obf,S ,B,Sim (λ) = Pr CVGB-Real S ,B Obf (λ) -Pr CVGB-Ideal S ,Sim Obf (λ) . Note that Adv di Obf,S,A (λ) = Pr CVGB-Real S ,B Obf (λ)|b = 1 -Pr CVGB-Real S ,B Obf (λ)|b = 0 . Hence, Adv di Obf,S,A (λ) ≤ Pr CVGB-Ideal S ,Sim Obf (λ)|b = 1 -Pr CVGB-Ideal S ,Sim Obf (λ)|b = 0 + + 2 • Adv cvgb Obf,S ,B,Sim (λ) . Let Q(λ) denote the number of queries of Sim. We claim that there is a predictor P making at most Q(λ) queries such that Pr CVGB-Ideal S ,Sim Obf (λ)|b = 1 -Pr CVGB-Ideal S ,Sim Obf (λ)|b = 0 ≤ Q(λ) • Adv pred S,P (λ) . From this it follows that Adv di Obf,S,A (λ) ≤ Q(λ) • Adv pred S,P (λ) + 2 • Adv cvgb Obf,S ,B,Sim (λ) . We prove the claim via unpredictability of the sampler. Observe that the views of Sim in the CVGB-Ideal game for b = 0 and b = 1 are identical unless Sim queries its oracle on a point that results in different outputs for the left and right circuits. This event, however, immediately leads to a break of unpredictability. Consider a (possibly unbounded) predictor P = (P 1 , P 2 ) as follows. P 1 selects random coins r ←$ RSp λ and runs A(1 λ ; r) until it outputs st. P 1 then outputs (st, r). P 2 (1 λ , , z, r) chooses a random index i ←$ [Q(λ)] indicating a guess for the first query of Sim that leads to a break of unpredictability. It runs Sim(z||r) and answers its oracle queries using its own provided oracle (which always respond for left circuits b = 0). At query i algorithm P 2 stops and outputs the queried value. With probability 1/Q(λ) this is the first query that the bad event occurs. Hence P 2 runs Sim perfectly until query i, at which point it wins the unpredictability game. This concludes the proof as the above holds for any poly (which in turn implies that the left hand side is negligible). C Proof of Proposition 3: DI =⇒ OW Proof. We show that OW is a weakening of DI for point circuits. Given an OW adversary A we construct a sampler S and a distinguisher D attacking DI security as follows. First, we partition each set CSp λ into two sets (of super-polynomial size) CSp 0 λ and CSp 1 λ , such that |CSp 0 λ | = |CSp 1 λ | + negl(λ). This partition can be based on some lexicographic criterion (e.g., the most significant bit of the point), as long as one can efficiently decide membership in each partition. Our sampler S samples two point circuits C 0 and C 1 , uniformly at random from CSp 0 λ and CSp 1 λ , respectively. It then outputs two t-sized vectors C 0 = (C 0 , . . . , C 0 ) and C 1 = (C 1 , . . . , C 1 ). (Here t is the length parameter initially output by the one-wayness adversary A.) (Recall that auxiliary information z is empty.) It is clear that S is unpredictable, and therefore legitimate as a DI sampler. 
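The sampler just described, together with the most-significant-bit partition suggested above, can be pictured with the following sketch. Point circuits are again represented by their underlying points, an illustrative simplification.

import secrets

LAMBDA = 128

def point_circuit(w):
    return lambda m: int(m == w)

def ow_to_di_sampler(t):
    """Sampler from the proof of Proposition 3 (sketch): partition the point
    space by the most significant bit, draw one point from each half, and
    output t copies of the corresponding point circuit on each side."""
    w0 = secrets.randbits(LAMBDA - 1)                        # MSB = 0 half
    w1 = secrets.randbits(LAMBDA - 1) | (1 << (LAMBDA - 1))  # MSB = 1 half
    return [point_circuit(w0)] * t, [point_circuit(w1)] * t, b""

def partition_test(recovered_point):
    """Return 0 if the recovered point lies in the MSB=0 half, else 1."""
    return (recovered_point >> (LAMBDA - 1)) & 1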
On obtaining the obfuscations, the distinguisher D runs adversary A on the same inputs and recovers a circuit C'. Observe that the distribution of the obfuscations provided to A is statistically close to the correct distribution given the combined action of S and the challenge bit in the DI game. Distinguisher D then returns 0 if C' ∈ CSp 0 λ and 1 otherwise. It is straightforward to establish that a non-negligible advantage for A in the OW game translates to a non-negligible advantage for (S, D) in the DI game. D Proof of Proposition 4: PRIV-TO =⇒ 1-DI Proof. We first describe the operation of the required obfuscator. Given a circuit C, the required obfuscator Obf generates an FE key pair (mpk, msk) and uses the master secret key to extract a token tk for C. It then defines the obfuscated circuit to be one that first encrypts m under mpk using trivial random coins, and then evaluates the resulting ciphertext using tk, i.e., the circuit C[mpk, tk](•) := FE.Eval(FE.Enc(•, mpk; 0 poly(λ) ), tk) . The correctness of this obfuscator follows from that of the FE scheme. The proof of DI security for this construction with respect to samplers that output a single circuit pair is a direct reduction to PRIV-TO. We construct a PRIV-TO adversary that uses the DI sampler without change and does nothing in the first stage. In the second stage, on obtaining the challenge token tk, it constructs C[mpk, tk](•) and passes it on to the DI distinguisher. It then returns whatever the distinguisher outputs. This simulation is easily seen to be perfect. Remark. For arbitrary DI samplers the argument above fails. This is due to the fact that communication between the sampler and the distinguisher is restricted (by the unpredictability condition) and hence hybrid arguments cannot be made to go through. E PRIV-TO and IND-CPA Security of OX Transform Proof. The proof is straightforward and results from direct reductions to the underlying components used in the construction. We start by proving that OX[FE, Obf] is PRIV-TO-secure for a circuit family CSp and (circuits-only) sampler class S if Obf is DI-secure for CSp and S. Given an adversary (S, A 1 ) against PRIV-TO security of OX[FE, Obf], we construct an adversary (S', B 1 ) against the DI security of Obf as follows. We set S' to be the same as S. Algorithm B 1 runs FE.Gen(1 λ ) to generate on its own a master secret key and master public key pair (msk, mpk). Then, B 1 runs A 1 on mpk, answering all its token-generation queries by running FE.TGen(msk, •), until A 1 calls LR on some state st. At this point, B 1 calls its own LR oracle on st and receives as a challenge a vector of obfuscated circuits. B 1 generates a token for each circuit, and forwards the result to A 1 . Thereafter, B 1 continues running A 1 , answering its second-stage token-generation queries as before until A 1 outputs a bit b', which B 1 outputs as its own guess. The simulation is perfect and S' is unpredictable because S is unpredictable. We now prove the second part of the theorem, i.e. that OX[FE, Obf] is IND-CPA-secure if FE is. Let A 2 be an adversary against IND-CPA security of OX[FE, Obf]. We construct an adversary B 2 against IND-CPA security of FE. Algorithm B 2 (mpk) runs A 2 (mpk), answering its first-stage TGen(C) queries by first computing an obfuscation of circuit C, placing a token-generation query on this obfuscated circuit to its own TGen oracle, and forwarding the token to A 2 .
When A 2 asks to be challenged on messages (m 0 , m 1 ), B 2 calls its own LR oracle on these messages and forwards the challenge ciphertext to A 2 . Second-stage TGen queries are answered as before. Finally, B 2 outputs A 2 's guess b as its own guess. Here again, the simulation is perfect and legitimacy of B 2 follows from the legitimacy of A 2 and the fact that the obfuscator preserves the functionality of the circuit, which means that B 2 has precisely the same advantage as A 2 . Therefore, we conclude that Adv priv-to OX[FE,Obf],S,A 1 (λ) = Adv di Obf,S,B 1 (λ), and that Adv ind-cpa OX[FE,Obf],A 2 (λ) = Adv ind-cpa FE,B 2 (λ) . F Analysis of DOX Transform Adv cc KS,A (λ) ≤ Adv cc FE,B 3 (λ) + (t + 1) • Adv ow Obf,B 2 (λ) + Adv prp PRP,B 1 (λ) + t + 1 |WSp λ | . Proof. The proof is simple and follows two game hops as follows. Game 0 : This is the CC game with respect to FE and PRP. Game 1 : In this game instead of a PRP a truly random permutation (simulated via lazy sampling) is used in the calculation of tokens in TGen oracle and preparing tp. Game 2 : In this game, if a token generation query or one of A's output words matches any of the randomly generated words (via lazy sampling) the game aborts. The analyses of the game transitions are straightforward. The transition from Game 0 to Game 1 relies on the security of the PRP. The transition from Game 1 to Game 2 is down to the one-way security of the obfuscator (note that the only information leaked to the adversary about each of the random keywords is via an obfuscated circuit included in the extracted tokens). Finally, the advantage of the adversary in Game 2 can be bounded down to the correctness of FE. We give the details next. Game 0 to Game 1 . Any adversary A with visible advantage difference in these two games can be converted to an adversary B 1 against the security of the PRP. Assume that lazy sampling is implemented using a table T , i.e., T [w] indicates the random value assigned to w. Algorithm B 1 starts by generating an FE key pair. It handles a queries w of A to TGen by first computing w ← Fn(w) via its Fn oracle, obfuscating the circuits associated with these keywords, and finally generating a token for the disjunction of the obfuscated circuits using the master secret key. Token generation after A terminates is handled similarly, and the remaining operations in of the CC game can be simulated using mpk. B 1 will finally check if A succeeded in breaking correctness. If so, then its output will be 0. Else, it will be 1. Note that when the Fn oracle implements the PRP, Game 0 is simulated for A, and when it implements a random permutation Game 1 is simulated. A simple probability analysis yields, Pr[Game 0 (1 λ )] -Pr[Game 1 (1 λ )] = Adv prp PRP,B 1 (λ) . Game 1 to Game 2 . Let us consider that the game is aborted if, at the end of the execution of Game 2 , one considers all keywords explicitly output by the adversary (i.e., all w * in the list of keywords queried from TGen plus the challenge keyword and message output by the adversary when it terminates), and for some keyword w in table T we have: w * = T [w] . We bound the probability that this bad flag is set via the one-way security of the obfuscation. We build the required B 2 against the (t + 1)-OW security of Obf as follows. B 2 first guesses the query i in which A first produces w by choosing an index i ←$ [t + 1], where t is an upper bound on the number of TGen queries that A makes and the extra 1 accounts for the challenge keyword it produces on termination. 
B 2 then generates an FE key pair, runs A and answers its TGen queries using the master secret key and constructing T [w] as before, except when the i-th query comes (and all future w queries). In the latter case, B 2 uses a new challenge obfuscated circuit it receives in the one-wayness game. Note that we have implicitly programmed T [w] to be an unknown value, which leads to an inconsistency with probability at most (t+1)/|WSp|: an upper bound on the probability that this value collides with one of the values in T during the entire game. When the bad event is detected, and if B 2 's guess was correct, then B 2 can recognize the faulty keyword by checking the obfuscated circuits it received for a match, and it can win the one-wayness game. Hence, Pr[Game 1 (1 λ )] -Pr[Game 2 (1 λ )] ≤ (t + 1) • Adv ow Obf,t,B 2 (λ) + t + 1 |WSp λ | . Analysis of Game 2 . In this game we use A to build an adversary against the computational correctness of the underlying FE scheme. Note that if Game 2 does not abort, then m = T (w) when A terminates. We show that if A wins without any aborts we can build an adversary B 3 which wins the FE correctness game. Algorithm B 3 gets mpk and runs A on it. It answers A's TGen queries using its own oracle, still lazily sampling T [w] and asking for a trapdoor on the disjunction of the obfuscated circuits. When A returns (m, w), algorithm B 3 also returns these. Note that this is winning pair iff it is a winning pair in the FE game. We therefore have Pr[Game 2 (1 λ )] = Adv cc FE,B (λ) ≤ 2 • Adv prp PRP,B 1 (λ) + 2 • (t + s + q) • Adv ow Obf,B 2 (λ) + 2 • Adv ind-cpa FE,B 3 (λ) + 2 • Adv di Obf,S 4 ,B 4 (λ) + 2 • (Adv ind-cpa FE,B 5 (λ) + s • q |WSp λ | ) + Adv di Obf,S 6 ,B 6 (λ) . Proof. The proof follows from a sequence of six game hops. We refer the reader to Fig. 11 for a formal description of each game in a code-based language. Since the definition of Res-PRIV composes for stateless samplers, we assume A calls LR oracle exactly once. Game 0 : This game is identical to the Res-PRIV game. Game 1 : Instead of a PRP, a truly random permutation (simulated via lazy sampling) is used in token generation. The table used to maintain the lazy sampling, which we denote by T , has at most (t + q) entries. The distance to the previous game can be bounded using the security of the PRP. Game 2 : When sampler S outputs, we generate PRP values of all messages in m b as well. Since Game 1 , these are now simulated via lazy sampling, which causes the expansion of table T to at most (t + s + q) entries. All keywords and messages whose PRP value was generated are registered in list. Before setting the outcome of the game, if there are values w 1 and w 2 in list such that w 1 = T [w 2 ], game aborts. Throughout the game T [w] is obfuscated, so the distance to the previous game can be upper bounded by the one-wayness property of the obfuscator. (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[r])), where r is a fresh random value uniformly sampled from WSp λ . We precise that by fresh we mean that a new and independent random value is sampled each time, even in case of multiple token extractions of the same keyword. The distance to the previous game can be upper bounded to the DI security of the obfuscator. where r is a fresh random value uniformly sampled from WSp λ . We precise that by fresh we mean that a new and independent random value is sampled each time, even in case of repetitions of the same message. We bound this hop down to IND-CPA. 
Game 6 : In this game, irrespective of the bit b, we use the second set of keywords for challenge token generation. We now analyze the transitions between each game and the reduction of Game 5 to DI game. Game 0 to Game 1 . Any adversary (S, A) with visible advantage difference in these two games can be converted to an adversary B 1 against the security of PRP. Assume that lazy sampling is implemented using a table T , i.e., T [w] indicates the random value assigned to w. Algorithm B 1 runs adversary (S, A) inside it, simulating all the details of Game 0 , bar the computation of the PRP. For this, algorithm B 1 uses its Fn oracle. When A terminates, B 1 checks if A succeeded in winning the game. If so, it outputs 0. Otherwise, it outputs 1. When the Fn oracle implements the PRP, Game 0 is simulated for A, and when Fn implements a random permutation, Game 1 is simulated. Therefore, Pr[Game 0 (1 λ )] -Pr[Game 1 (1 λ )] = Adv prp PRP,B 1 (λ) . Game 1 to Game 2 . Both games are exactly the same unless the bad event that causes abortion is triggered. Game 2 aborts if there are values w 1 and w 2 in list such that w 1 = T [w 2 ]. We show that an adversary (S, A) that triggers the bad event in Game 2 can be converted to an adversary B 2 against the one-wayness property of Obf. For an intuition on this game hop, observe that all occurrences of T [w] are obfuscated and T [w] is uniformly distributed. During the simulation of Game 2 , table T expands up to (t + s + q) entries. Algorithm B 2 receives in its challenge (t + q) obfuscated copies of a random point circuit. At the beginning of its execution, B 2 randomly guesses the first occurrence of w 2 in the game, by sampling i is uniformly from {1, ..., (t + s + q)}. (Keyword w 2 is of course unknown to B 2 at this point, the guess reflects a prediction of when such keyword involved in the bad event will be added to table T .) Then, B 2 simulates for adversary (S, A) all the details of Game 2 until a new keyword comes that will cause table T to expand to i entries. Instead of sampling T [w 2 ], B 2 embeds one of its challenge circuits in the computation of Obf(1 λ , C[T [w 2 ]]). (If w 2 is a message, nothing needs to be done.) Thenceforth, B 2 embeds a new circuit from its challenge each time it needs to extract a token for w 2 . In any case, B 2 never needs more than (t + q) challenge circuits to complete its simulation. At the end of the game, if B 2 's guess is correct, which happens with probability 1/(t + s + q), there is w 1 ∈ list such that w 1 = T [w 2 ]. This equality can be checked by evaluating w 1 on one of B 2 's obfuscated circuits. If so, B 2 outputs w 1 and wins the game. Hence, Pr[Game 1 (1 λ )] -Pr[Game 2 (1 λ )] ≤ (t + s + q) • Adv ow Obf,B 2 (λ) . Sampler S 4 computes two t × (t + q) matrices M 0 and M 1 . Each row in M 0 contains (t + q) repetitions of a unique random point circuit. M 1 contains t × (t + q) fresh random point circuits. S 4 is clearly unpredictable. Algorithm B 4 runs S and A inside it, simulating all the details common to Game 3 and Game 4 , which only differ on unpaired keywords not in FirstPhase list. For those, B 4 carefully picks circuits from its challenge matrix of obfuscated circuits: A new row is assigned to a new keyword; a circuit is picked from a new column in case of repetitions. If M 0 is selected in game DI, algorithm B 4 will simulate Game 3 . On the other hand, if M 1 is selected in game DI, algorithm B 4 will simulate Game 4 . 
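The challenge matrices used by S 4 in the hop above can be pictured with the following minimal sketch, in which point circuits are again represented by their underlying keywords and sample_point stands in for uniform sampling from WSp; both are illustrative simplifications.

import secrets

def sample_point():
    return secrets.randbits(128)   # stand-in for a uniform keyword in WSp

def s4_matrices(t, q):
    """Sketch of sampler S4's matrices: each row of M0 repeats one fresh,
    distinct random point (t+q) times, while M1 consists of t*(t+q)
    independent fresh random points."""
    M0, used = [], set()
    for _ in range(t):
        w = sample_point()
        while w in used:            # rows of M0 use distinct points
            w = sample_point()
        used.add(w)
        M0.append([w] * (t + q))
    M1 = [[sample_point() for _ in range(t + q)] for _ in range(t)]
    return M0, M1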
Finally, when A outputs its guess, B 4 checks if A succeeded in winning the game. If so, B 4 outputs 0. Otherwise, it outputs 1. Therefore, we have that Second phase TGen queries are answered using T -consistent values relying on the fact that we can use the obfuscations to check matches with paired keywords and the restriction that adversary cannot query a new unpaired message to TGen. Finally, we output whatever the PRIV adversary A outputs. More details on how to construct S 6 , B 6 are available in Fig. 10. It remains to show that S 6 is unpredictable if S is. For this, we build a predictor Q against sampler S, from a predictor P against sampler S 6 (bottom of Fig. 10). Since S 6 outputs the same vector of circuits as S plus circuits that are LR-identical, a distinguishing message for the output of S 6 is also a distinguishing message for the output of S. Therefore, we have that Adv pred S,Q (λ) = Adv pred S 6 ,P (λ) . To conclude our proof, we put everything together: Adv res-priv DOX,S,A (λ ) := 2 • Pr[Game 0 (1 λ )] -1 ≤ 2 • Adv prp PRP,B 1 (λ) + 2 • (t + s + q) • Adv ow Obf,B 2 (λ) + 2 • Adv ind-cpa FE,B 3 (λ) + 2 • Adv di Obf,S 4 ,B 4 (λ) + 2 • (Adv ind-cpa FE,B 5 (λ) + s • q |WSp λ | ) + Adv di Obf,S 6 ,B 6 (λ) . 40 any distinguishing adversary has a negligible advantage in the game in Figure 13. We note that the unpredictability restriction on the sampler essentially excludes any challenge where a polynomialsize set of black-box linear tests could be used by a semi-bounded predictor to distinguish the hidden bit. This is a natural restriction, since the adversary is given enough information to trivially perform such tests on its own. The assumption therefore states that ppt adversaries cannot do better than what can be achieved with such linear tests. In particular, we note that such linear tests can be used to extract coefficient equality patterns that might permit trivial distinguishing attacks by checking group element repetitions in the received obfuscations. DI obfuscation for hyperplane membership. The following theorem can be trivially proven using a direct reduction. Theorem 5. The hyperplane membership obfuscator of Canetti, Rothblum and Varia [START_REF] Canetti | Obfuscation of hyperplane membership[END_REF] is DI secure in the presence of random auxiliary information if the assumption in Figure 13 holds in G. More precisely, for every unpredictable hyperplane membership sampler S, any DI adversary A that breaks the DI property can be used (without change) to break the underlying assumption with the same advantage. 2. 1 1 Pseudorandom permutations Pseudorandom permutations. Let KSp := {KSp λ } λ∈N and MSp := {MSp λ } λ∈N be two families of finite sets parametrized by a security parameter λ ∈ N. A pseudorandom permutation (PRP) family PRP := (K, E, D) is a triple of ppt algorithms as follows. (1) K on input the security parameter outputs a uniform element in KSp λ ; (2) E is deterministic and on input a key k ∈ KSp λ and a point x ∈ MSp λ outputs a point in MSp λ ; (3) D is deterministic and on input a k ∈ KSp λ and a point x ∈ MSp λ outputs a point in MSp λ . The PRP family PRP is correct if for all λ ∈ N, all k ∈ KSp λ and all x ∈ MSp λ we have that D(k, E(k, x)) = x. A pseudorandom permutation PRP := (K, E, D) is called PRP secure if for every ppt adversary A we have that Adv prp PRP,A (λ) := 2 • Pr PRP A PRP (1 λ ) -1 ∈ Negl Fig. 1 . 1 Fig. 1. Game defining the PRP security of a pseudorandom permutation PRP := (K, E, D). Fig. 2 . 2 Fig. 2. 
Left: Computational correctness of FE. Right: IND-CPA security of FE. and m 1 Fig. 7 . 17 Fig. 7. Relations among security notions for private functional encryption. The dotted implication only holds for keyword search schemes as weak (aka. restricted) and standard IND-CPA security models are equivalent for point circuits. and hence the images are equal on left and right. Word w 0 can be recovered bit by bit from the image values C[w b ](m 0,i ) and computing 1 -C[w b ](w 0 ) would then reveal the challenge bit b. Game 2 : 2 We introduce a bad flag. We generate PRP values for all keywords and messages. If there are two T -values (x 1 , T (x 2 )) and (x 2 , T (x 2 )) such that x 1 = T (x 2 ) we set bad. By the OW security of the obfuscator, these PRP values remain hidden and bad can only be set with negligible probability. Game 3 : We compute the ciphertexts by encrypting T (m * b ) instead of m * b . This hop is reduced to the IND-CPA security of the FE, via explicit knowledge of challenge keywords and message by running the ppt sampler. Legitimacy will be violated if there is a w queried to TGen such that w = T (m b ) or m b = T (w). Both of these events set bad. Game 3 : 3 LR oracle computes the vector of ciphertexts c by encrypting T [m b ] instead of m b . The distance to the previous game can be upper bounded using the IND-CPA security property of the underlying FE scheme. Game 4 : We say a keyword w is unpaired if w ∈ w b and w / ∈ m b . All first-phase queries to TGen oracle are recorded in FirstPhase list, i.e. all keywords A queries to TGen oracle before calling LR. During the simulation of LR oracle and second-phase TGen oracle, if w is an unpaired keyword not in FirstPhase list, we extract its token from circuit Game 5 : 5 Analogously, we say a message m is unpaired if m ∈ m b and m / ∈ w b . During the simulation of LR oracle, if m is an unpaired message not in FirstPhase list, we encrypt r instead of T [m], Game 2 to Game 3 . 3 Any legitimate adversary (S, A) with visible advantage difference in these two games can be converted to an adversary B 3 against IND-CPA security of FE. For an intuition on this reduction, observe that all tokens are extracted from circuits of the form (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]])), which return 1 when evaluated on both w and T [w]. Illegitimate tokens that would allow to distinguish encryptions of m from encryption of T [m] have been excluded in Game 2 , given that the game aborts if adversary (S, A) outputs a value sampled for the simulation of the random permutation. Algorithm B 3 runs adversary (S, A) inside it, simulating all the details common to Game 2 and Game 3 . B 3 receives mpk and runs adversary A with it. For token-generation and encryption, B 3 relies on its oracles. When B 3 needs to compute a token for some keyword w, it queries its own TGen oracle with circuit (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]])).When B 3 needs to compute the encryption of some message m in Game 2 or T [m] in Game 3 , it queries its own LR oracle with (m, T [m]). The ciphertexts output by LR oracle in game IND-CPA allow B 3 to interpolate between the simulation of Game 2 and Game 3 . The simulation is perfect. Eventually, A outputs b , which B 3 forwards as its own guess. Now, let's analyze legitimacy of B 3 . Legitimacy condition of IND-CPA requires that for all C queried to TGen and all (m 0 , m 1 ) queried to LR, we have that C(m 0 ) = C(m 1 ). 
In the execution of B 3 , queried circuits are of the form (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]])) and queried messages of the form (m, T [m]). More precisely, legitimacy requires that ∀w ∈ TList, ∀m ∈ MList,(Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]]))(m) = (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]]))(T [m]) ⇔ (C[w] ∨ C[T [w]])(m) = (C[w] ∨ C[T [w]])(T [m]) / / functionality preserving ⇔ C[w](m) ∨ C[T [w]](m) = C[w](T [m]) ∨ C[T [w]](T [m]) ⇔ C[w](m) = C[T [w]](T [m]) / / bad event in Game 2 ⇔ (w ? = m) = (T [w] ? = T [m]) ⇔True .Therefore, B 3 is a legitimate adversary against IND-CPA and we have thatPr[Game 2 (1 λ )] -Pr[Game 3 (1 λ )] = Adv ind-cpa FE,B 3 (λ) .Game 3 to Game 4 . Any legitimate adversary (S, A) with visible advantage difference in these two games can be converted to an adversary (S 4 , B 4 ) against DI security of Obf. The intuition here is the following: Without a ciphertext that encrypts T [w], the adversary cannot detect if tokens for w are extracted from (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[T [w]])) or from (Obf(1 λ , C[w]) ∨ Obf(1 λ , C[r])), where r is a fresh random value uniformly sampled from WSp λ . Details of adversary (S 4 , B 4 ) are shown in Fig. 8. Pr[Game 3 3 (1 λ )] -Pr[Game 4 (1 λ )] = Adv di Obf,S 4 ,B 4 (λ) .Game 4 to Game 5 . Any legitimate adversary (S, A) with visible advantage difference in these two games can be converted to an adversary B 5 against IND-CPA security of FE. The intuition here is paired, we sample a new T -value the first time and answer consistently throughout the game. For all unpaired messages that were never queried to TGen, we use forgetful values (rules of Game 5 ). Pr[Game 5 5 (1 λ )] -Pr[Game 6 (1 λ )] ≤ Pr[Game 5 (1 λ )] -Adv di Obf,S 6 ,B 6 (λ) . 1 ]Fig. 13 . 113 Fig.[START_REF] Bellare | Contention in cryptoland: obfuscation, leakage and UCE[END_REF]. Game defining a DDH-style computational assumption. Res-PRIV security of DOX). If FE is an IND-CPA secure functional encryption scheme, PRP is pseudorandom and Obf is a DI-secure obfuscator then scheme DOX[FE, PRP, Obf] is Res-PRIV secure. More precisely, for any adversary (S, A) in game Res-PRIV against scheme DOX[FE, PRP, Obf], in which A places at most q queries to TGen oracle and S outputs a tuple (w 0 , w 1 , m 0 , m 1 , z) such that |w 0 | = |w 1 | = t and |m 0 | = |m 1 | = s, there exists adversaries B 1 , B 2 , B 3 , (S 4 , B 4 ), B 5 , (S 6 , B 6 ) such that F.2 Res-PRIV security Theorem (Adv res-priv DOX,S,A 3 (λ) . We do not impose that C0(m) = C1(m) within the Func oracle as this is exactly the event that P is aiming to invoke to win the game. The restriction we do impose allows for a sampler to be unpredictable while possibility outputting low-entropy messages that might even differ on left and right. Given a predictor P for such a sampler, run it and return its output while simulating its Func oracle by always answering with 0. This simulation is consistent unless P queries Func on a message matching the point underlying one of the circuits, an event that immediately contradicts the high min-entropy of the points in the original distribution. Furthermore, if P outputs a point where two circuits differ, this must be because it guessed one of the points. In practice, this could correspond, for example, to encryptions of low-entropy data that may or may not be matched by correlated search queries. Such impossibility results hold even in the case where no composition is considered, i.e., where only a single circuit is obfuscated. 
Note also that if auxiliary information is made available, the definition becomes unachievable for even simpler circuits: VBB multi-bit point obfuscation with auxiliary information is impossible if indistinguishability obfuscation exists[START_REF] Brzuska | Indistinguishability obfuscation versus multi-bit point obfuscation with auxiliary input[END_REF]. We limit samplers to ppt because in proving the security of our constructions, samplers are used to construct computational adversaries against other schemes. In general, one could consider unbounded samplers. When the restriction here is imposed on the IND-CPA model for point function, the resulting model remains as strong as the full IND-CPA model. Consider a sampler which does not output any circuits and simply returns (possibly low-entropy) messages included in the state st passed to it. This sampler is trivially unpredictable. Furthermore, the legitimacy conditions in the two games exactly match. Acknowledgements. A. Arriaga is supported by the National Research Fund, Luxembourg (AFR Grant No. 5107187). S 4 (1 λ , st): for i ∈ {1, ..., t} T [i] ←$ WSp λ \ T for j ∈ {1, ..., (t + q)} M 0 We now analyze legitimacy of B 5 . Legitimacy condition of IND-CPA requires that for all C queried to TGen and all (m 0 , m 1 ) queried to LR, we have that C(m 0 ) = C(m 1 ). In the execution of B 5 , queried circuits are of the form (Obf( )) and queried messages of the form (T [m], r). More precisely, legitimacy requires that ∀w ∈ TList, ∀m ∈ Game 5 to Game 6 . In this game, irrespective of the bit, we use the second set of keywords for challenge token generation. We construct an adversary (S 6 , B 6 ) against DI as follows. B 6 generates by itself a master secret key and master public key pair (msk, mpk), then runs A(mpk). First-phase TGen queries are answered using a lazily sampled T and a generated msk. We set the DI sampler S 6 to run the PRIV sampler S and on top of the output keywords, also ask for obfuscations of messages that match a first-phase query. By legitimacy of A, these messages must be LR-identical. Using the symmetry of roles for keywords and messages in point functions, this sampler can be shown to be unpredictable whenever the PRIV sampler is. We find paired messages using the image matrix. The obfuscations of messages will allow us to check if any of these messages (hidden under the obfuscation) match a first-phase TGen query. For messages that were queried during the first stage, we select the correct T -value. For messages that are at the same time new and S 6 (1 λ , st): Game 0 (1 λ ): G Distributional Indistinguishability for Hyperplane Membership Hyperplane membership. Let CSp := {CSp d p } be a set circuit family of hyperplane membership testing functions that is defined for each value of the security parameter λ such that there is a λ-bit prime p and a positive integer d. = g a,x , so this is the case if a, x = 0.) We assume that the resulting obfuscated circuit is canonically represented by (g 1 , . . . , g d ), generated as described above. We will now prove that this same construction satisfies distributional indistinguishability under a generalization of the SVDDH assumption, a DDH-style assumption we present in Fig. 13. Unpredictable hyperplane membership samplers. We begin by refining the notion of unpredictable samplers to the case of hyperplane membership circuits. 
In general, a sampler for the hyperplane membership functionality will output two lists of message vectors corresponding to candidate hyperplane members, and two lists of hyperplane vectors, plus some auxiliary information z, which in this paper we will assume to be a random string of polynomial size poly(λ). However, since we are dealing with obfuscation, we will consider samplers where no messages are produced. We recall the unpredictability experiment for this special case in Figure 12, where notation w, m denotes the vector that results from computing w[i], m ? = 0, for all 1 ≤ i ≤ |w|. (Note that here w is a list of vectors in Z d p and m is a vector in Z d p .) Sampler outputs only random auxiliary information. Pred P S (1 λ ): (st, st ) ←$ P 1 (1 λ ) z ←$ {0, 1} poly(λ) (w 0 , w 1 ) ←$ S(1 λ , z, st) m ←$ P Func Computational Assumption. Our computational assumption is a vectorized version of the DDH variant introduced in [START_REF] Canetti | Obfuscation of hyperplane membership[END_REF], in the style of the assumption that is used in [START_REF] Bitansky | On strong simulation and composable point obfuscation[END_REF] to establish the DI property of a point function obfuscator. The assumption states that, for every unpredictable sampler,
137,893
[ "771995" ]
[ "366875", "19144", "59704" ]
01470888
en
[ "info" ]
2024/03/04 23:41:46
2016
https://ens.hal.science/hal-01470888/file/main.pdf
Martin R Albrecht Pooya Farshim Dennis Hofheinz Enrique Larraia RHUL Kenneth G Paterson à la diffusion de documents scientifiques de niveau Keywords: Multilinear map, indistinguishability obfuscation, homomorphic encryption, decisional Diffie-Hellman, Groth-Sahai proofs recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction 1.Main contribution In this paper, we explore the relationship between multilinear maps and obfuscation. Our main contribution is a construction of multilinear maps for groups of prime order equipped with natural hard problems, using indistinguishability obfuscation (IO) in combination with other tools, namely NIZK proofs, homomorphic encryption, and a base group G 0 satisfying a mild cryptographic assumption. This complements known results in the reverse direction, showing that various forms of indistinguishability obfuscation can be constructed from multilinear maps [GGH + 13b, [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF][START_REF] Zimmerman | How to obfuscate programs directly[END_REF]. The relationship between IO and multilinear maps is a very natural question to study, given the rich diversity of cryptographic constructions that have been obtained from both multilinear maps and obfuscation, and the apparent fragility of current constructions for multilinear maps. More on this below. We provide two distinct but closely related constructions. One is for multilinear maps in the symmetric setting, that is non-degenerate multilinear maps e : G 1 κ -→ G T for groups G 1 and G T of prime order N. Our construction relies on the existence of a base group G 0 in which the (κ -1)-SDDH assumption holds-this states that, given a κ-tuple of G 0 -elements (g, g ω , . . . , g ω κ-1 ), we cannot efficiently distinguish g ω κ from a random element of G 0 . Under this assumption, we prove that the κ-MDDH problem, a natural analogue of the DDH problem as stated below, is hard. (The κ-MDDH problem, informal) Given a generator g 1 of G 1 and κ + 1 group elements g a i 1 in G with a i ←$ Z N , distinguish e(g 1 , . . . , g 1 ) κ+1 i=1 a i from a random element of G T . This problem can be used as the basis for several cryptographic constructions [START_REF] Boneh | Applications of multilinear forms to cryptography[END_REF] including, as the by now the classic example of multiparty non-interactive key exchange (NIKE) [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF]. Our other construction is for the asymmetric setting, that is multilinear maps e : G 1 ו • •×G κ -→ G T for a collection of κ groups G i and G T all of prime order N. It uses a base group G 0 in which we require only that the standard DDH assumption holds. For this construction, we show that a natural asymmetric analogue of the κ-MDDH assumption holds (wherein all but two of the κ + 1 group elements input to e come from distinct groups). In Section 7, we also show the intractability of the rank problem for our construction for multilinear maps in the symmetric setting; this is a generalization of DDH-like problems to matrices that has proven to be useful in cryptographic constructions [BHHO08, NS09, GHV12, BLMR13, EHK + 13]. 
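As an aside, the following toy sketch illustrates how a symmetric κ-linear map with hard κ-MDDH supports the multiparty non-interactive key exchange application mentioned above. It is emphatically not a secure instantiation: group elements are represented directly by their exponents modulo a prime N (which is exactly what a real multilinear map must hide), and the values of N and KAPPA are illustrative; only the protocol syntax and the form of the shared key are being shown.

import secrets

N = (2**127) - 1          # a prime group order (illustrative)
KAPPA = 3                 # multilinearity; kappa + 1 = 4 parties

def keygen():
    a = secrets.randbelow(N - 1) + 1
    return a, a % N       # (secret exponent, "public" element g^a, here just a)

def shared_key(my_secret, other_publics):
    # e(pub_1, ..., pub_kappa)^{my_secret} has exponent a_1 * ... * a_{kappa+1};
    # in this toy dlog representation we can compute that exponent directly.
    acc = my_secret % N
    for p in other_publics:
        acc = (acc * p) % N
    return acc

if __name__ == "__main__":
    keys = [keygen() for _ in range(KAPPA + 1)]
    pubs = [pk for (_, pk) in keys]
    ks = [shared_key(sk, pubs[:i] + pubs[i+1:]) for i, (sk, _) in enumerate(keys)]
    assert len(set(ks)) == 1   # all parties derive the same key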
At a high level, then, our constructions are able to "bootstrap" from rather mild assumptions in a standard cryptographic group to much stronger multilinear assumptions in a group (or groups, in the asymmetric setting) equipped with a κ-linear map. Here κ is fixed up-front at construction time, but is otherwise unrestricted. Of course, such constructions cannot be expected to come "for free," and we need to make use of powerful tools including probabilistic IO (PIO) for obfuscating randomized circuits [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF], dual-mode NIZK proofs enjoying perfect soundness (for a binding CRS), perfect witness indistinguishability (for a hiding CRS), and perfect zero knowledge, and additive homomorphic encryption for the group (Z N , +) (or alternatively, a perfectly correct FHE scheme). It is an important open problem arising from our work to weaken the requirements on, or remove altogether, these additional tools. General approach Our approach to obtaining multilinear maps in the symmetric setting is as follows (with many details to follow in the main body). 1 Let G 0 with generator g 0 be a group of prime order N in which the (κ -1)-SDDH assumption holds. We work with redundant encodings of elements h of the base group G 0 of the form h = g x 0 0 (g ω 0 ) x 1 where g ω 0 comes from a (κ -1)-SDDH instance; we write x = (x 0 , x 1 ) for the vector of exponents representing h. Then G 1 consists of all strings of the form (h, c 1 , c 2 , π) where h ∈ G 0 , ciphertext c 1 is a homomorphic encryption under public key pk 1 of a vector x representing h, ciphertext c 2 is a homomorphic encryption under a second public key pk 2 of another vector y also representing h, and π is a NIZK proof showing consistency of the two vectors x and y, i.e., a proof that the plaintexts x, y underlying c 1 , c 2 encode the same group element h. Note that each element of the base group G 0 is multiply represented when forming elements in G 1 , but that equality of group elements in G 1 is easy to test. An alternative viewpoint is to consider (c 1 , c 2 , π) as being auxiliary information accompanying element h ∈ G 0 ; we prefer the perspective of redundant encodings, and our abstraction in Section 3 is stated in such terms. When viewed in this way, our approach can be seen as closely related to the Naor-Yung paradigm for constructing CCA-secure PKE [START_REF] Naor | Public-key cryptosystems provably secure against chosen ciphertext attacks[END_REF]. Addition of two elements in G 1 is carried out by an obfuscation of a circuit C Add that is published along with the groups. It has the secret keys sk 1 , sk 2 hard-coded in; it first checks the respective proofs, then uses the additive homomorphic property of the encryption scheme to combine ciphertexts, and finally uses the secret keys sk 1 , sk 2 as witnesses to generate a new NIZK proof showing equality of encodings. Note that the new encoding is as compact as that of the two input elements. The multilinear map on inputs (h i , c i,1 , c i,2 , π i ) for 1 ≤ i ≤ κ is computed using the obfuscation of a circuit C Map that has sk 1 and ω hard-coded in. This allows C Map to "extract" full exponents of h i in the form (x i,1 + ω • x i,2 ) from c i,1 , and thereby compute the element g i (x i,1 +ω•x i,2 ) 0 . This is defined to be the output of our multilinear map e, and so our target group G T is in fact G 0 , the base group. The multilinearity of e follows immediately from the form of the exponent. 
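To fix ideas, the data layout of an element of G 1 and the exponent-extraction step performed by C Map can be sketched as follows. The names G1Element, c_map, hdec, nizk_verify and g0_exp are ours, and the stand-ins in the demo keep everything in the clear; this is a sketch of the logic described above, not the paper's actual circuits.

from dataclasses import dataclass

@dataclass
class G1Element:
    h: object        # base-group element g0^{x0} * (g0^omega)^{x1}
    c1: object       # homomorphic encryption of (x0, x1) under pk1
    c2: object       # homomorphic encryption of (y0, y1) under pk2
    pi: object       # NIZK proof that c1 and c2 encode the same element

def c_map(elements, sk1, omega, N, hdec, g0_exp, nizk_verify):
    """Logic of C_Map (sketch): check proofs, extract full exponents
    x0 + omega*x1 from the first ciphertexts, and return g0 raised to
    their product (an element of G_T = G_0)."""
    exponent = 1
    for el in elements:
        if not nizk_verify(el.h, el.c1, el.c2, el.pi):
            return None
        x0, x1 = hdec(sk1, el.c1)
        exponent = (exponent * ((x0 + omega * x1) % N)) % N
    return g0_exp(exponent)

if __name__ == "__main__":
    # Trivial stand-ins (everything in the clear; illustration only).
    P, OMEGA = 2**61 - 1, 7
    g0_exp = lambda e: pow(3, e, P)
    hdec = lambda sk, c: c                  # "ciphertext" = plaintext pair
    ok = lambda h, c1, c2, pi: True         # "proofs" always verify
    el = G1Element(h=g0_exp(2 + OMEGA * 5), c1=(2, 5), c2=(2, 5), pi=None)
    print(c_map([el, el], sk1=None, omega=OMEGA, N=P - 1,
                hdec=hdec, g0_exp=g0_exp, nizk_verify=ok))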
In the asymmetric case, the main difference is that we work with different values ω i in each of our input groups G i . However, the groups are all constructed via redundant encodings, just as above. This provides a high-level view of our approach, but no insight into why the approach achieves our aim of building multilinear maps with associated hard problems. Let us give some intuition on why the κ-MDDH problem is hard in our setting. We transform a κ-MDDH tuple h = ((g a i 1 ) i≤κ+1 , g d T ), where d is the product of the a i ∈ Z N , g 1 is in the "encoded" form above, thus g 1 = (h 1 , c 1 , c 2 , π), and g T is a generator of G T = G 0 , into another κ-MDDH tuple h with exponents a i = a i + ω for i ≤ κ. This means that the exponent of the challenge element in the target group d = κ 1 (a i + ω)a κ+1 can be seen as a degree κ polynomial in ω. Therefore, with the knowledge of the a i and a (κ -1)-SDDH challenge, with ω implicit in the exponent, we are able to randomize g d T replacing g ω κ T with a uniform value. Nevertheless, in the preceding simplistic argument we have made two assumptions. The first is that we are able to provide an obfuscation of a circuit C Map that has the same functionality as C Map over G 1 without the explicit knowledge of ω. We resolve this by showing a way of evaluating the κ-linear map on any elements of G 1 using only the powers g ω i 0 for 1 ≤ i ≤ κ -1, and vectors extracted from the accompanying ciphertexts, and then applying IO to the two circuits. 2The second assumption we made is that we can indeed switch from h to h without being noticed. In other words, that the vectors x i , y i representing g a i can be replaced (without being noticed) with vectors h i whose second coordinate is always fixed. Intuitively this is based on the IND-CPA security of the FHE scheme, but in order to give a successful reduction we also have to change the circuit C Add (since C Add uses both decryption keys). We show two ways to do this: one is based on probabilistic indistinguishability obfuscation [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF], and the other uses only (deterministic) indistinguishability obfuscation, and additionally exploits the specific structure of a particular (pairing-based) NIZK implementation due to Groth and Sahai [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. We note that in this work we do not construct graded encoding schemes as in [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF]. That is, we do not construct maps from G i × G j to G i+j . On the other hand, our construction is noiseless and is closer to multilinear maps as defined by Boneh and Silverberg [START_REF] Boneh | Applications of multilinear forms to cryptography[END_REF]. Attacks on multilinear maps Multilinear maps have been in a state of turmoil, with the discovery of attacks [CHL + 15, HJ15, CLR15, MF15, Cor15] against the GGH13 [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF], CLT [START_REF] Coron | Practical multilinear maps over the integers[END_REF] and GGH15 [START_REF] Gentry | Graph-induced multilinear maps from lattices[END_REF] proposals, and a sequence of countermeasures and fixes [CLT15, CGH + 15], which since have been broken, too. Hence, our confidence in constructions for graded encoding schemes (and thereby multilinear maps) has been shaken. 
On the other hand, when IO is constructed from graded encoding schemes via Barrington's theorem or dual-input straddling sets [GGH + 13b, AB15, Zim15], then none of the known attacks on graded encoding schemes seem to apply [CGH + 15]. Indeed, when building IO from multilinear maps one restricts the pool of available operations to an attacker by fixing a circuit a priori which means that certain "interesting" elements cannot be (easily) constructed. Hence, currently it is perhaps more plausible to assume that IO exists than it is to assume that secure multilinear maps exist. However, we stress that more cryptanalysis of IO constructions is required to investigate what security they provide. Moreover, even though current constructions for IO rely on graded encoding schemes, it is not implausible that alternative routes to achieving IO without relying on multilinear maps will emerge in due course. And setting aside the novel applications obtained directly from IO, multilinear maps, and more generally graded encoding schemes, have proven to be very fruitful as constructive tools in their own right (cf. [START_REF] Boneh | Applications of multilinear forms to cryptography[END_REF][START_REF] Papamanthou | Optimal authenticated data structures with multilinear forms[END_REF], resp., [FHPS13, GGH + 13c, HSW13, GGSW13, BWZ14, TLL14, BLR + 15]). This rich set of applications coupled with the current uncertainty over the status of graded encoding schemes and multilinear maps provides additional motivation to ask what additional tools are needed in order to upgrade IO to multilinear maps. As an additional benefit, we upgrade (via IO) noisy graded encoding schemes to clean multilinear mapssometimes now informally called "dream" or "ideal" multilinear maps. Related work The closest related work to ours is that of Yamakawa et al. [START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF][START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF]; indeed, their work was the starting point for ours. Yamakawa et al. construct a self-pairing map, that is a bilinear map from G × G to G; multilinear maps can be obtained by iterating their self-pairing. Their work is limited to the RSA setting. It uses the group of signed quadratic residues modulo a Blum integer N, denoted QR + N , to define a pairing function that, on input elements g x , g y in QR + N , outputs g 2xy . In their construction, elements of QR + N are augmented with auxiliary information to enable the pairing computation-in fact, the auxiliary information for an element g x is simply an obfuscation of a circuit for computing the 2xth power modulo ord(QR + N ), and the pairing is computed by evaluating this circuit on an input g y (say). The main contribution of [START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF] is in showing that these obfuscated circuits leak nothing about x or the group order. A nice feature of their scheme is that the degree of linearity κ that can be accommodated is not limited up-front in the sense that the pairing output is also a group element to which further pairing operations (derived from auxiliary information for other group elements) can be applied. However, the construction has several drawbacks. First, the element output by the pairing does not come with auxiliary information. 
3 Second, the size of the auxiliary information for a product of group elements grows exponentially with the length of the product, as each single product involves computing the obfuscation of a circuit for multiplying, with its inputs already being obfuscated circuits. Third, the main construction in [START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF] only builds hard problems for the self-pairing of the computational type (in fact, they show the hardness of the computational version of the κ-MDDH problem in QR + N assuming that factoring is hard). Still, this is sufficient for several cryptographic applications. In contrast, our construction is generic with respect to its platform group. Furthermore, the equivalent of the auxiliary information in our approach does not itself involve any obfuscation. Consequently, the description of a product of group elements stays compact. Indeed, given perfect additive homomorphic encryption for (Z p , +), we can perform arbitrary numbers of group operations in each component group G i . It is an open problem to find a means of augmenting our construction with the equivalent of auxiliary information in the target group G T , to make our multilinear maps amenable to iteration and thereby achieve graded maps as per [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF][START_REF] Coron | Practical multilinear maps over the integers[END_REF]. Preliminaries Notation We denote the security parameter by λ ∈ N and assume that it is implicitly given to all algorithms in the unary representation 1 λ . By an algorithm we mean a stateless Turing machine. Algorithms are randomized unless stated otherwise and PPT as usual stands for "probabilistic polynomialtime" in the (unary) security parameter. Given a randomized algorithm A we denote the action of running A on input(s) (1 λ , x 1 , . . .) with fresh random coins r and assigning the output(s) to y 1 , . . . by (y 1 , . . .) ←$ A(1 λ , x 1 , . . . ; r). For a finite set X, we denote its cardinality by |X| and the action of sampling a uniformly random element x from X by x ←$ X. Vectors are written in boldface x and by slight abuse of notation, running algorithms on vectors of elements indicates component-wise operation. Throughout the paper ⊥ denotes a special error symbol, and poly(•) stands for a fixed polynomial. A real-valued function negl(λ) is negligible if negl(λ) ∈ O(λ -ω (1) ). We denote the set of all negligible functions by NEGL, and use negl(λ) to denote an unspecified negligible function. Homomorphic public-key encryption CIRCUITS. A polynomial-sized deterministic circuit family C := {C λ } λ∈N is a sequence of sets of poly(λ)-sized circuits for a fixed polynomial poly. We assume that for all λ ∈ N, all circuits C ∈ C λ share a common input domain ({0, 1} λ ) a(λ) , where a(λ) is a the arity of the circuit family, and codomain {0, 1} λ . A randomized circuit family is defined similarly except that the circuits now also take random coins r ∈ {0, 1} r(λ) . To make the coins used by a circuit explicit (e.g., to view a randomized circuit as a deterministic one) we write C(x; r). SYNTAX AND COMPACTNESS. 
A tuple of PPT algorithms Π := (Gen, Enc, Dec, Eval) is called a homomorphic public-key encryption (HPKE) scheme for deterministic circuit family C = {C λ } λ∈N of arity a(λ) if (Gen, Enc, Dec) is a conventional public-key encryption scheme with message space {0, 1} λ and Eval is a deterministic algorithm that on input a public key pk a circuit C ∈ C λ and ciphertexts c 1 , . . . , c a(λ) outputs a ciphertext c. We require HPKE schemes to be compact in the sense that the outputs of Eval have a size that is bounded by a polynomial function of the security parameter (and independent of the size of the circuit). Without loss of generality, we assume that secret keys of an HPKE scheme are the random coins used in key generation. This will allow us to check key pairs for validity. CORRECTNESS. We require the following perfect correctness requirements from a HPKE scheme. (1) Scheme Π := (Gen, Enc, Dec) is perfectly correct as a PKE scheme; that is for any λ ∈ N, any (sk, pk) ←$ Gen(1 λ ), any m ∈ {0, 1} λ , and any c ←$ Enc(m, pk) we have that Dec(c, sk) = m. (2) The evaluation algorithm in also perfectly correct in the sense that for any λ ∈ N, any (sk, pk)←$ Gen(1 λ ), any m i ∈ {0, 1} λ for i ∈ [a(λ)], any c i ←$ Enc(m i , pk), any C ∈ C λ and any c ← Eval(pk, C, c 1 , . . . , c a(λ) ) we have that Dec(c, sk) = C(m 1 , . . . , m a(λ) ). We note that although most proposals in the literature for HPKE are not perfectly correct, this is usually assumed in the literature (cf. [GGI + 14]). Indeed, it is plausible that perfectly correct HPKE can be achieved from standard HPKE constructions by adapting the probability distribution of the noise to a bounded distribution and by applying worst-case bounds in all steps. Moreover, in this work we will only need a mod-p additively homomorphic scheme of arity 2, traditionally known as a singly homomorphic PKE scheme for (Z p , +). Formally, such a scheme corresponds to a family of circuits of arity 2 which add two λ-bit numbers modulo λ-bit primes p. SECURITY. The IND-CPA security of an HPKE scheme is defined identically to a standard PKE scheme without reference to the Dec and Eval algorithms. Formally, we require that for any legitimate PPT adversary A := (A 1 , A 2 ), Adv ind-cpa Π,A (λ) := 2 • Pr IND-CPA A Π (λ) -1 ∈ NEGL , where game IND-CPA A Π (λ) is shown in Figure 1 (left). Adversary A is legitimate if it outputs two messages of equal lengths. Obfuscators SYNTAX AND CORRECTNESS. A PPT algorithm Obf is called an obfuscator for (deterministic or randomized) circuit class C = {C λ } λ∈N if Obf on input the security parameter 1 λ and the description of a (deterministic or randomized) circuit C ∈ C λ outputs a deterministic circuit C. For deterministic circuits, we require Obf to be perfectly correct in the sense the circuits C and C are functionally equivalent; that is, that for all λ ∈ N, all C ∈ C λ , all C ←$ Obf(1 λ , C), and all m i ∈ {0, 1} λ for i ∈ [a(λ)] we have that C(m 1 , . . . , m a(λ) ) = C(m 1 , . . . , m a(λ) ). For randomized circuits, the authors of [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF] define correctness via computational indistinguishability of the outputs of C and C. For our constructions we do not rely on this property and instead require that C and C are functionally equivalent up to a change in randomness; that is, for all λ ∈ N, all C ∈ C λ , all C ←$ Obf(1 λ , C) and all m i ∈ {0, 1} λ for i ∈ [a(λ)] we require there is an r such that C(m 1 , . . . , m a(λ) ) = C(m 1 , . . . 
, m a(λ) ; r). In this paper by correctness we refer to this latter property. We note that the construction from [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF] is correct as it relies on a correct (indistinguishability) obfuscator (and a PRF to internally generate the required random coins). SECURITY. The security of an obfuscator Obf requires that for any legitimate PPT adversary A := (A 1 , A 2 ) Adv ind Obf,A (λ) := 2 • Pr IND A Obf (λ) -1 ∈ NEGL , where game IND is shown in Figure 1 (middle). Depending on the notion of legitimacy different security notions for the obfuscator emerge; we consider two such notions below. FUNCTIONALLY EQUIVALENT SAMPLERS. We call (the first phase of) A a functionally equivalent sampler if for any (possibly unbounded) distinguisher D Adv eq A,D (λ) := Pr C 0 (x) = C 1 (x) : (C 0 , C 1 , st) ←$ A 1 (1 λ ); x ←$ D(C 0 , C 1 , st) ∈ NEGL . The security notion associated with equivalent samplers is called indistinguishability. We call an obfuscator meeting this level of security an indistinguishability obfuscator [GGH + 13b] and use IO instead of Obf to emphasize this. [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF]. Roughly speaking, the first phase of A is an X-IND sampler if there is a set X of size at most X such that the circuits output by A are functionally equivalent outside X and furthermore within X the outputs of the two sampled circuits are indistinguishable. Formally, let X(•) be a function such that X(λ) ≤ 2 λ for all λ ∈ N. We call A an X-IND sampler if there is a set X λ of size at most X(λ) such that the following two conditions holds: (1) for all (possibly unbounded) D the advantage function below is negligible X-IND SAMPLERS Adv eq$ A,D (λ) := Pr C 0 (x; r) = C 1 (x; r) ∧ x / ∈ X λ : (C 0 , C 1 , st) ←$ A 1 (1 λ ); (x, r) ←$ D(C 0 , C 1 , st) . (2) For all non-uniform PPT distinguishers D := (D 1 , D 2 ) X(λ) • Adv sel-ind A,D (λ) := X(λ) • Pr Sel-IND D A (1 λ ) ∈ NEGL , where game Sel-IND D A (1 λ ) is shown in Figure 1 (right). This game has a static (or selective) flavor as D 1 chooses a differing-input x before it gets to see the challenge circuit pair. We call an obfuscator meeting this level of security a probabilistic indistinguishability obfuscator [START_REF] Canetti | Obfuscation of probabilistic circuits and applications[END_REF] and use PIO instead of Obf to emphasize this. IND-CPA A Π (λ): (sk, pk) ←$ Gen(1 λ ) (m 1 , m 1 , st) ←$ A 1 (pk) b ←$ {0, 1} c ←$ Enc(m, pk) b ←$ A 2 (c, st) Return (b = b ) IND A Obf (λ): (C 0 , C 1 , st) ←$ A 1 (1 λ ) b ←$ {0, 1} C ←$ Obf(1 λ , C b ) b ←$ A 2 (C, st) Return (b = b ) Sel-IND D A (λ): (x, z) ←$ D 1 (1 λ ) (C 0 , C 1 , st) ←$ A 1 (1 λ ) b ←$ {0, 1}; r ←$ {0, 1} r(λ) y ← C b (x; r) b ←$ D 2 (y, C 0 , C 1 , st, z) Return (b = b ) Dual-mode NIZK proof systems In our constructions we will be relying on special types of non-interactive zero-knowledge proof systems [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. These systems have "dual-mode" common reference string (CRS) generation algorithms that produce indistinguishable CRSs in the "binding" and "hiding" modes. They also enjoy perfect completeness in both modes, are perfectly sound and extractable in the binding mode, and perfectly witness indistinguishable (WI) and zero-knowledge (ZK) in the hiding mode. 
The standard prototype for such schemes are pairing-based Groth-Sahai proofs [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF], and using a generic NP reduction to the satisfiability of quadratic equations we can obtain a suitable proof system for any NP language. 4 We formalize the syntax and security of such proof systems next. SYNTAX. A relation with setup is a pair of PPT algorithms (S, R) such that S(1 λ ) outputs (gpk, gsk) and R(gpk, x, w) is a ternary relation and outputs a bit b ∈ {0, 1}. A dual-mode non-interactive zero-knowledge (NIZK) proof system Σ for (S, R) consists of five algorithms as follows. (1) Algorithm BCRS(gpk, gsk) outputs a (binding) common reference string crs and an extraction trapdoor td ext ; (2) HCRS(gpk, gsk) outputs a (hiding) common reference string crs and a simulation trapdoor td zk ; (3) Prove(gpk, crs, x, w) on input crs, an instance x, and a witness w, outputs a proof π; (4) Verify(gpk, crs, x, π) on input a bit string crs, an instance x, and a proof π, outputs accept or reject; (5) WExt(td ext , x, π) on input an extraction trapdoor, an instance x, and a proof π, outputs a witness w; and (6) Sim(td zk , crs, x) on input the simulation trapdoor td zk , the CRS crs, and an instance x, outputs a simulated proof π. We require a dual-mode NIZK to meet the following requirements. CRS INDISTINGUISHABILITY. The common reference strings generated through BCRS(gpk, gsk) and HCRS(gpk, gsk) are computationally indistinguishable. We denote the distinguishing advantage of a PPT adversary A in the relevant security game by Adv crs Σ,A (λ). PERFECT COMPLETENESS UNDER BCRS/HCRS. For any λ ∈ N, any (gpk, gsk) ←$ S(1 λ ), any crs ←$ BCRS(gpk, gsk), any (x, w) such that R(gpk, x, w) = 1, and any π ←$ Prove(gpk, crs, x, w) we have that Verify(gpk, crs, x, π) = 1. We require this property to also hold for any choice of crs ←$ HCRS(gpk, gsk). PERFECT SOUNDNESS UNDER BCRS. For any λ ∈ N, any (gpk, gsk) ←$ S(1 λ ), any common reference string crs ←$ BCRS(gpk, gsk), any x for which for all w ∈ {0, 1} * , we have R(gpk, x, w) = 0, and any π ∈ {0, 1} * we have that Verify(gpk, crs, x, π) = 0. PERFECT EXTRACTABILITY UNDER BCRS. For any λ ∈ N, any (gpk, gsk) ←$ S(1 λ ), any (crs, td zk ) ←$ BCRS(gpk, td ext ), any (x, π) with Verify(gpk, crs, x, π) = 1, and for w ←$ WExt(td ext , x, π), we always have that R(gpk, x, w) = 1. PERFECT WI UNDER HCRS. For any λ ∈ N, any (gpk, gsk) ←$ S(1 λ ), any (crs, td zk ) ←$ HCRS(gpk, gsk), any (x, w b ) such that R(gpk, x, w b ) = 1 for b ∈ {0, 1}, we have that π b ←$ Prove(gpk, crs, x, w b ) for b ∈ {0, 1} are identically distributed. PERFECT ZK UNDER HCRS. For any λ ∈ N, any (gpk, gsk) ←$ S(1 λ ), any (crs, td zk ) ←$ HCRS(gpk, gsk), any (x, w) such that R(gpk, x, w) = 1, we have that π 0 ←$ Prove(gpk, crs, x, w) and π 1 ←$ Sim(td zk , x) are identically distributed. Hard membership problems Finally, we will use languages with hard membership problems. More specifically, we say that a family L = {L λ } of families L λ = {L} of languages L ⊆ U in a universe U = U λ has a hard subset membership problem if the following holds. Namely, we require that no PPT algorithm can, given L ←$ L λ , efficiently distinguish between x ←$ L and x ←$ U. Multilinear Groups with Non-unique Encodings Before presenting our constructions, we formally introduce what we mean by a multilinear group (MLG) scheme. 
Our abstraction differs from that of Garg, Gentry and Halevi [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF] in that our treatment of MLG schemes is a direct adaptation of the "dream" MLG setting (called the "cryptographic" MLG setting in [START_REF] Boneh | Applications of multilinear forms to cryptography[END_REF]) to a setting where group elements have non-unique encodings. In our abstraction, on top of the procedures needed for generating, manipulating and checking group elements, we introduce an equality procedure which generalizes the equality relation for groups with unique encodings. SYNTAX. A multilinear group (MLG) scheme Γ consists of six PPT algorithms as follows. Setup(1 λ , 1 κ ): This is the setup algorithm. On input the security parameter 1 λ and the multilinearity 1 κ , it outputs the group parameters pp. These parameters include generators g 1 , . . . , g κ+1 , identity elements 1 1 , . . . , 1 κ+1 , and integers N 1 , . . . , N κ+1 , which will represent group orders. (Generators, identity elements and group orders are discussed below.) We assume pp is provided to the various algorithms below. Val i (h): This is the validity testing algorithm. On input (the group parameters), a group index 1 ≤ i ≤ κ + 1 and a string h ∈ {0, 1} * , it returns b ∈ {0, 1}. We define G i , which is also parameterized by pp, as the set of all h for which Val i (h) = 1. We write h ∈ G i when Val i (h) = 1 and refer to such strings as group elements (since we will soon impose a group structure on G i ). We require that the bit strings in G i have lengths that are polynomial in 1 λ and κ, a property that we refer to as compactness. Eq i (h 1 , h 2 ): This is the equality algorithm. On input two valid group elements h 1 , h 2 ∈ G i , it outputs a bit b ∈ {0, 1}. We require Eq i to define an equivalence relation. We say that the group has unique encodings if Eq i simply checks the equality of bit strings. We write G i (h) for the set of all h ∈ G i such that Eq i (h, h ) = 1; for any such h, h in G i we write h = h ; sometimes we write h = h in G i for clarity. Since "=" refers to equality of bit strings as well as equivalence under Eq i we will henceforth write "as bit strings" when we mean equality in that sense. We require |G i /Eq i |, the number of equivalence classes into which Eq i partitions G i , to be finite and equal to N i (where N i comes from pp). Note that equality algorithms Eq i for 1 ≤ i ≤ κ can be derived from one for Eq κ+1 using the multilinear map e defined below, provided N κ+1 is prime. We assume throughout the paper that various algorithms below return ⊥ when run on invalid group elements. Op i (h 1 , h 2 ): This algorithm defines the group operation. On input two valid group elements h 1 , h 2 ∈ G i it outputs h ∈ G i . We write h 1 h 2 in place of Op i (h 1 , h 2 ) for simplicity. We require that Op i respect the equivalence relations Eq i , meaning that if h 1 = h 2 in G i and h ∈ G i , then h 1 h = h 2 h in G i . We also demand that h 1 h 2 = h 2 h 1 in G i (commutativity), for any third h 3 ∈ G i we require h 1 (h 2 h 3 ) = (h 1 h 2 )h 3 in G i (associativity) and h 1 1 i = h 1 in G i . The algorithm Op gives rise to an exponentiation algorithm Exp i (h, z) that on input h ∈ G i and z ∈ N outputs an h ∈ G i such that h = h • • • h in G i with z occurrences of h. When no h is specified, we assume h = g i . This algorithm runs in polynomial time in the length of z. We denote Exp i (h, z) by h z and define h 0 := 1 i . 
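The requirement that Exp_i runs in time polynomial in the length of z is met by the usual square-and-multiply evaluation over Op_i. A minimal sketch follows; op and identity stand in for Op_i and 1_i, and the modular-addition group used in the self-test is only a toy stand-in for G_i.

    # Square-and-multiply: realizes Exp_i(h, z) with O(log z) calls to Op_i.
    def exp(op, identity, h, z):
        if z < 0:
            raise ValueError("negative exponents are handled via Inv_i")
        acc, base = identity, h
        while z:
            if z & 1:
                acc = op(acc, base)      # multiply step
            base = op(base, base)        # square step
            z >>= 1
        return acc

    if __name__ == "__main__":
        # Toy self-test in the additive group (Z_101, +), where Op_i is addition.
        N = 101
        add = lambda a, b: (a + b) % N
        assert exp(add, 0, h=5, z=77) == (5 * 77) % N
        assert exp(add, 0, h=5, z=0) == 0            # h^0 = 1_i by definition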
Note that under the definition of N i for any h ∈ G i we have that Exp i (h, N i ) = 1 i . 5 This in turn leads to an inversion algorithm Inv i (h) that on input h ∈ G i outputs h N i -1 . We insist that g i in fact has order N i , so that (the equivalence class containing) g i generates G i /Eq i . We do not treat the case where the N i are unknown but the formalism is easily extended to include it by adding an explicit inversion algorithm and by replacing N i in pp with an approximation (which may be needed for sampling purposes). The above requirements ensure that G i /Eq i acts as an Abelian group of order N i with respect to the operation induced by Op i , with identity (the equivalence class containing) 1 i , and inverse operation Inv i . We use the bracket notion [EHK + 13] to denote an element h = g x i in G i with [x] i . When using this notation, we will write the group law additively. This notation will be convenient in the construction and analysis of our MLG schemes. For example, [z] i + [z ] i succinctly denotes Op i (Exp(g i , z), Exp(g i , z )). Note that when writing [z] i it is not necessarily the case that z is explicitly known. e(h 1 , . . . , h κ ): This the multilinear map algorithm. For κ group elements h i ∈ G i as input, it outputs h κ+1 ∈ G κ+1 . We demand that for any 1 ≤ j ≤ κ and any h j ∈ G j e(h 1 , . . . , h j h j , . . . , h κ ) = e(h 1 , . . . , h j , . . . , h κ )e(h 1 , . . . , h j , . . . , h κ ) in G κ+1 . We also require the map to be non-degenerate in the sense that for some tuple of elements as input the multilinear map outputs an element of G κ+1 outside the equivalence class of 1 κ+1 . (This implies that e is surjective onto G κ+1 /Eq κ+1 when N i is prime, but need not imply surjectivity when N κ+1 is composite.) We call an MLG scheme symmetric if the group algorithms are independent of the group index for 1 ≤ i ≤ κ and e is invariant under permutations of its inputs. That is, for any permutation π : [κ] -→ [κ] we have e(h 1 , . . . , h κ ) = e(h π(1) , . . . , h π(κ) ) in G κ+1 . We refer to all the other cases as being asymmetric. To distinguish the target group we frequently write G T instead of G κ+1 (and similarly for 1 T and g T in place of 1 κ+1 and g κ+1 ) as its structure in our construction will be different from that of the source groups G 1 , . . . , G κ . Sam i (z): This is the sampling algorithm. On input z ∈ N it outputs h ∈ G i whose distribution is "close" to that of uniform over the equivalence class G i (g z i ). Here "close" is formalized via computational, statistical or perfect indistinguishability. We also allow a special input ε to this algorithm, in which case the sampler is required to output a uniformly distributed h ∈ G i together with a z such that h ∈ G i (g z i ). When outputting z is not required, we say that Sam i (ε) is discrete-logarithm oblivious. Note that for groups with unique encodings these algorithms trivially exist. For notational convenience, for a known a we define [a] i to be an element sampled via Sam i (a). Some applications also rely on the following algorithm which provides a canonical bit string for the group elements within a single equivalence class. Ext i (h): This is the extraction algorithm. On input h ∈ G i it outputs a string s ∈ {0, 1} poly(λ) . We demand that for any λ) . For groups with unique encodings this algorithm trivially exists. h 1 , h 2 ∈ G i with h 1 = h 2 in G i we have that Ext i (h 1 ) = Ext i (h 2 ) (as bit strings). 
We also require that for [z] i ←$ Sam i (ε), the distribution of Ext i ([z] i ) is uniform over {0, 1} poly( COMPARISON WITH GGH. Our formalization differs from that of [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF] which defines a graded encoding scheme. The main difference is that a graded encoding scheme defines bilinear maps e i,j : G i × G j -→ G i+j . Using this algorithm, one can implement Eq i for any 1 ≤ i ≤ κ from Eq κ+1 as follows (if e i,j is injective). To check the equality of h 1 , h 2 ∈ G i call e i,κ+1-i (h, g κ+1-i ) for h = h 1 , h 2 to map these elements to the target group and check equality there using Eq κ+1 . Similarly, Ext i (h) can be constructed from Ext κ+1 (h) and 1 j for all G j . (Note that for extraction we need a canonical string rather than a canonical group element.) Moreover, the abstraction and construction of graded encodings schemes in [START_REF] Garg | Candidate multilinear maps from ideal lattices[END_REF] do not provide any validity algorithms; these are useful in certain adversarial situations such as CCA security and signature verification. Further, all known candidate constructions of graded encoding schemes are noisy and only permit a limited number of operations. Finally, the known candidate graded encoding schemes do not permit sampling for specific values of z, but rather only permit sampling elements with a z that is only known up to its equivalence class. SYNTACTIC EXTENSIONS. Although our syntax does not treat the cases of graded [GGH13a, CLT13], exponentially multilinear, or self-pairing [START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF] maps, it can be modified to capture these variants. We briefly outline the required modifications. For graded maps, we require the existence of a map that on input h i ∈ G i for indices i = i 1 , . . . , i with t := i=1 i j ≤ κ outputs a group element in G t . This map is required to be multilinear in each component. For exponential (aka. unbounded) linearity, we provide the linearity κ in its binary representation to the Setup algorithm. We also include procedures for generator and identity element generation. 6 Proper self-pairing maps correspond to a setting where the group algorithms are independent of the group index for 1 ≤ i ≤ κ + 1 (including the target index κ + 1), and the group generators and identity elements are all identical. Observe that a proper self-pairing would induce a graded encoding scheme of unbounded linearity; recall from the introduction that the scheme of Yamakawa et al. [START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF] does not meet this definition because of the growth in the size of its auxiliary information. The Construction We now present our construction of an MLG scheme Γ according to the syntax introduced in Section 3. In the later sections we will consider special cases of the construction and prove the hardness of analogues of the multilinear DDH problem under various assumptions. We rely on the following building blocks in our MLG scheme. (1) A cyclic group G 0 of some order N 0 with generator g 0 and identity 1 0 ; formally we think of this as a 1-linear MLG scheme Γ 0 with unique encodings in which e is trivial; the algorithm Val 0 implies that elements of G 0 are efficiently recognizable. (2) A general-purpose obfuscator Obf. 
(3) An additively homomorphic public-key encryption scheme Π := (Gen, Enc, Dec, Eval) with plaintext space Z N (alternatively, a perfectly correct HPKE scheme). (4) A dual-mode NIZK proof system. (5) A family T D of (families of) languages TD which has a hard subset membership problem, and such that all TD have efficiently computable witness relations with unique witnesses.7 (See Section 2 for more formal definitions.) We reserve variables and algorithms with index 0 for the base scheme Γ 0 ; we also write N = N 0 . We require that the algorithms of Γ 0 except for Setup 0 and Sam 0 are deterministic. We will also use the bracket notation to denote the group elements in G 0 . For example, we write [z] 0 , [z ] 0 ∈ G 0 for two valid elements of the base group and [z] 0 + [z ] 0 ∈ G 0 for Op 0 ([z] 0 , [z ] 0 ). Variables with nonzero indices correspond to various source and target groups. Given all of the above components, our MLG scheme Γ consists of algorithms as detailed in the sections that follow. Setup The setup algorithm for Γ samples parameters pp 0 ←$ Setup 0 (1 λ ) for the base MLG scheme, generates two encryption key pairs (pk j , sk j ) ←$ Gen(1 λ ) (j = 1, 2), and a matrix W = (ω 1 , . . . , ω k ) t ∈ Z κ× N where κ is the linearity and ∈ {2, 3} is a parameter of our construction. It sets gpk := (pp 0 , pk 1 , pk 2 , [W] 0 , TD, y) , where [W] 0 denotes a matrix of G 0 elements that entry-wise is written in the bracket notation, TD ←$ T D, and y is not in TD. In our MLG scheme we set N 1 = • • • = N κ+1 := N, where N is the group order implicit in pp 0 . The setup algorithm then generates a common reference string crs = (crs , y) where crs ←$ BCRS(gpk, gsk) for a relation (S, R) that will be defined in Section 4.2. It also constructs two obfuscated circuits C Map and C Add which we will describe in Sections 4.3 and 4.4. For 1 ≤ i ≤ κ, the identity elements 1 i and group generators g i are sampled using Sam i (0) and Sam i (x i ) respectively for algorithm Sam i described in Section 4.5 with x i ∈ [N] that is co-prime to N. We emphasize that this approach is well defined since the operation of Sam i is defined independently of the generators and the identity elements and depends only on gpk and crs. We set 1 κ+1 = 1 0 and g κ+1 = g 0 . The scheme parameters are pp := (gpk, crs, C Map , C Add , g 1 , . . . , g κ+1 , 1 1 , . . . , 1 κ+1 ) . We note that this algorithm runs in polynomial time in λ as long as κ is polynomial in λ. Validity and equality The elements of G i for 1 ≤ i ≤ κ are tuples of the form h = ([z] 0 , c 1 , c 2 , π) where c 1 , c 2 are encryptions of vectors from Z N under , pk 1 , pk 2 , respectively (encryption algorithm Enc extends from plaintext space Z N to Z N in the obvious way) and where π is a NIZK to be defined below. We refer to (c 1 , c 2 , π) as the auxiliary information for [z] 0 . The elements of G κ+1 are just those of G 0 . The NIZK proof system that we use corresponds to the following inclusive disjunctive relation (S, R := R 1 ∨ R 2 ). Algorithm S(1 λ ) outputs gpk = (pp 0 , pk 1 , pk 2 , [W] 0 , TD) as defined above and sets gsk = (sk 1 , sk 2 ). 
Relation R 1 on input gpk, tuple ([z] 0 , c 1 , c 2 ), and witness (x, y, r 1 , r 2 , sk 1 , sk 2 ) accepts iff [z] 0 ∈ G 0 , the representations of [z] 0 as x, y ∈ Z N are valid with respect to [W] 0 in the sense that [z] 0 = [ x, ω i ] 0 ∧ [z] 0 = [ y, ω i ] 0 , (where •, • denotes inner product) and the following ciphertext validity condition (with respect to the inputs to the relation) is met: (c 1 = Enc(x, pk 1 ; r 1 ) ∧ c 2 = Enc(x, pk 2 ; r 2 )) ∨ (pk 1 , sk 1 ) = Gen(sk 1 ) ∧ (pk 2 , sk 2 ) = Gen(sk 2 ) ∧ x = Dec(c 1 , sk 1 ) ∧ y = Dec(c 2 , sk 2 )) Recall that we have assumed the secret key of the encryption scheme to be the random coins used in Gen. Note that the representation validity check can be efficiently performed "in the exponent" using [W] 0 and the explicit knowledge of x and y. Note also that for honestly generated keys and ciphertexts the two checks in the expression above are equivalent (although this not generally the case when ciphertexts are malformed). Relation R 2 depends on the language TD, and on input gpk, tuple ([z] 0 , c 1 , c 2 ), and witness w y accepts iff w y is a valid witness to y ∈ TD. (Note that R 2 completely ignores ([z] 0 , c 1 , c 2 ).) For 1 ≤ i ≤ κ, the Val i algorithm for Γ , on input ([z] 0 , c 1 , c 2 , π), first checks that the first component is in G 0 using Val 0 and then checks the proof π; if both tests pass, it then returns , else ⊥. Observe that for an honest choice of crs = (crs , y), the perfect completeness and the perfect soundness of the proof system ensure that only those elements which pass relation R 1 are accepted. Algorithm Val κ+1 just uses Val 0 . The equality algorithm Eq i of Γ for 1 ≤ i ≤ κ first checks the validity of the two group elements passed to it and then returns true iff their first components match, according to Eq 0 , the equality algorithm from the base scheme Γ 0 . Algorithm Eq κ+1 just uses Eq 0 . The correctness of this algorithm follows from the perfect completeness of Σ. Group operations We provide a procedure that, given as inputs h = ([z] 0 , c 1 , c 2 , π) and h = ([z ] 0 , c 1 , c 2 , π ) ∈ G i , generates a tuple representing the product h • h . This, in particular, will enable our multilinear map to be run on the additions of group elements whose explicit representations are not necessarily known. We exploit the structure of the base group as well as the homomorphic properties of the encryption scheme to "add together" the first three components. We then use (sk 1 , sk 2 ) as a witness to generate a proof π that the new tuple is well formed. (For technical reasons we check the validity of h and h in two different ways: using proofs π, π , and also explicitly using (sk 1 , sk 2 ). Note that, although useful in the analysis, the explicit check is redundant by the perfect soundness of the proof system under a binding crs .) In pp we include an obfuscation of the C Add circuit shown in Figure 2 (top), and again we emphasize that steps 5a or 5b are never reached with a binding crs (but they may be reached with a hiding crs later in the analysis). Either an IO or a PIO will be used to obfuscate this circuit. Note that although we have assumed the evaluation algorithm to be deterministic, algorithm Prove is randomized and we need to address how we deal with its coins. When using PIO to obfuscate C Add , the obfuscator directly deals with the needed randomness.8 When using IO, a random (but fixed) set of coins will be hardwired into the circuit and hence the same set of coins will be used for all inputs. 
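The construction only relies on the abstract homomorphic interface of Π. Purely as an illustration, and not as the scheme's mandated instantiation, the toy Paillier sketch below shows the kind of Eval that lets the addition circuit combine the ciphertext components: multiplying ciphertexts adds the underlying plaintexts. The primes are toy-sized, and matching the plaintext space to the base group order N is glossed over here.

    # Toy Paillier encryption: additively homomorphic, so multiplying ciphertexts
    # adds plaintexts; this is the kind of Eval used to add encrypted
    # representation vectors component-wise. Toy parameters, illustration only.
    import math, random

    def gen(p=10007, q=10009):                      # small primes, insecure toy sizes
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)                        # valid since g = n + 1
        return (n, lam, mu), n                      # (sk, pk = n)

    def enc(m, n):
        n2 = n * n
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m % n, n2) * pow(r, n, n2)) % n2

    def dec(c, sk):
        n, lam, mu = sk
        return ((pow(c, lam, n * n) - 1) // n * mu) % n

    def eval_add(c1, c2, n):                        # Enc(m1) "+" Enc(m2) -> Enc(m1 + m2)
        return (c1 * c2) % (n * n)

    if __name__ == "__main__":
        sk, pk = gen()
        c = eval_add(enc(41, pk), enc(1, pk), pk)
        assert dec(c, sk) == 42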
(As we shall see, when using IO the proof system has to satisfy extra structural requirements; these ensure that using the same coins throughout does not compromise security.) The Op i algorithm for 1 ≤ i ≤ κ runs the obfuscated circuit on i, the input group elements. Algorithm Op κ+1 just uses Op 0 as usual. The correctness of this algorithm follows from those of Γ 0 and Π, the completeness of Σ and the correctness, in our sense of, (the possibly probabilistic) obfuscator Obf; see Section 2 for the definitions. The multilinear map The multilinear map for Γ , on input κ group elements h i = [z i ] i = ([z i ] 0 , c i,1 , c i,2 , π i ), uses sk 1 to recover the representation x i . It then uses the explicit knowledge of the matrix W to compute the output of the map as e([z 1 ] 1 , . . . , [z κ ] κ ) := k i=1 x i , ω i κ+1 . Recalling that G κ+1 is nothing other than G 0 , and g κ+1 = g 0 , the output of the map is just the G 0element (g 0 ) k i=1 x i ,ω i . The product in the exponent can be efficiently computed over Z N for any polynomial level of linearity κ and any as it uses x i and ω i explicitly. The multilinearity of the map follows from the linearity of each of the multiplicands in the above product (and the completeness of Σ, the correctness of Π, and the correctness of the (possibly probabilistic) obfuscator Obf). An obfuscation C Map of the circuit implementing this operation (see Figure 2, bottom) will be made available through the public parameters and e is defined to run this circuit on its inputs. CIRCUIT C Add [gpk, crs, sk 1 , sk 2 , td ext ; r](i, h, h ): 1. if ¬Val i (h) ∨ ¬Val i (h ) return ⊥ 2. parse ([z] 0 , c 1 , c 2 , π) ← h and ([z ] 0 , c 1 , c 2 , π ) ← h 3. [z ] 0 ← [z] 0 + [z ] 0 ; c 1 ← c 1 + c 1 ; c 2 ← c 2 + c 2 4. // explicit validity check of h, h 4.1 x ← Dec(c 1 , sk 1 ) , y ← Dec(c 2 , sk 2 ) x ← Dec(c 1 , sk 1 ) , y ← Dec(c 2 , sk 2 ) 4.2a if ([z] 0 = [ x, ω i ] 0 ) ∨ ([z] 0 = [ y, ω i ] 0 ) goto 5a 4.2b else if ([z ] 0 = [ x , ω i ] 0 ) ∨ ([z ] 0 = [ y , ω i ] 0 ) goto 5b 4.2c else goto 5c // h, h are valid 5a. // h is invalid 5a.1 w y ←$ WExt(td ext , ([z] 0 , c 1 , c 2 ), π) 5a.2 if ¬R 2 (gpk, (([z] 0 , c 1 , c 2 )), w y ) return ⊥ 5a.3 π ← Prove(gpk, crs, ([z ] 0 , c 1 , c 2 ), w y ; r) 5b.repeat 5a with h // only h is invalid 5c. π ← Prove(gpk, crs, ([z ] 0 , c 1 , c 2 ), (sk 1 , sk 2 ); r) 6. return ([z ], c 1 , c 2 , π ) CIRCUIT C Map [gpk, crs, W, sk 1 ](h 1 , . . . , h κ ): 1. for i = 1 . . . κ 1.1 if ¬Val i (h i ) return ⊥ 1.2 ([z i ] 0 , c i,1 , c i,2 , π i ) ← h i 1.3 x i ← Dec(c i,1 , sk 1 ) 2. z κ+1 ← k i=1 x i , ω i (mod N) 3. return [z κ+1 ] κ+1 Figure 2: Top: Circuit for addition of group elements. Explicit randomness r is used with an IO and is internally generated when using a PIO. Bottom: Circuit implementing the multilinear map. Recall that here gpk = (pp 0 , pk 1 , pk 2 , [W] 0 , TD, y). Sampling and extraction Given vectors x and y in Z N satisfying x, ω i = y, ω i , we set [z] 0 := [ y, ω i ] 0 (which can be computed using [W] 0 and explicit knowledge of x) and [z] i ← [z] 0 , c 1 = Enc(x, pk 1 ; r 1 ), c 2 = Enc(y, pk 2 ; r 2 ), π = Prove(gpk, crs, ([z] i , c 1 , c 2 ), (x, y, r 1 , r 2 ) . If W is explicitly known the vectors x and y can take arbitrary forms subject to validity. This matrix, however, is only implicitly known, and in our sampling procedure we set x = y = (z, 0) when = 2 and x = y = (z, 0, 0) when = 3. (We call these the canonical representations.) 
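A sketch of Sam_i for the case ℓ = 2 is given below. The helpers enc, prove, base_encode and rand are placeholders for Enc, Prove, the computation of [⟨x, ω_i⟩]_0 from [W]_0, and coin sampling; they are assumptions of this illustration rather than concrete instantiations, and the stand-ins in the demo only exercise the control flow.

    # Sketch of Sam_i for l = 2: encode z via its canonical representation (z, 0).
    # `base_encode`, `enc`, `prove` and `rand` are placeholders for the scheme's
    # algorithms; they are not concrete instantiations.
    from collections import namedtuple

    Encoding = namedtuple("Encoding", "z0 c1 c2 pi")

    def sample(i, z, pp, enc, prove, base_encode, rand):
        x = y = (z % pp.N, 0)                     # canonical representation of z
        z0 = base_encode(x, pp.W_enc, i)          # [<x, omega_i>]_0, computed from [W]_0
        r1, r2 = rand(), rand()
        c1 = enc(x, pp.pk1, r1)                   # encryption of the first vector
        c2 = enc(y, pp.pk2, r2)                   # encryption of the second vector
        pi = prove(pp.gpk, pp.crs, (z0, c1, c2), (x, y, r1, r2))
        return Encoding(z0, c1, c2, pi)

    if __name__ == "__main__":
        # Trivial stand-ins, only to exercise the control flow of the sketch.
        import random, types
        pp = types.SimpleNamespace(N=101, W_enc=None, pk1="pk1", pk2="pk2",
                                   gpk="gpk", crs="crs")
        print(sample(1, 7, pp,
                     enc=lambda m, pk, r: (pk, m, r),
                     prove=lambda gpk, crs, stmt, wit: "pi",
                     base_encode=lambda x, W, i: ("[z]_0", x),
                     rand=lambda: random.getrandbits(32)))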
Note that the outputs of the sampler are not statistically uniform within G i ([z] i ). Despite this, under the IND-CPA security of the encryption scheme it can be shown that the outputs are computationally close to uniform. Since the target group has unique encodings, as noted in Section 3, an extraction algorithm for all groups can be derived from one for the target group. The latter can be implemented by applying a universal hash function to the group elements in G T , for example. κ-Switch A Γ (λ): pp ←$ Setup(1 λ , 1 κ ); ((x 0 , y 0 ), (x 1 , y 1 ), i, st) ←$ A 1 (pp, W); b ←$ {0, 1}; r 1 , r 2 ←$ ({0, 1} r(λ) ) |x 0 | ; c 1 ← Enc(x b , pk 1 ; r 1 ); c 2 ← Enc(y b , pk 2 ; r 2 ); π ←$ Prove(gpk, crs, ([z] 0 , c 1 , c 2 ), (x b , y b , r 1 , r 2 , ⊥, ⊥)); b ′ ←$ A 2 (([ x b , ω i ] 0 , c 1 , c 2 , π), st); Return (b = b ′ ) Figure 3: Game formalizing the indistinguishability of encodings within an equivalence class. This game is specific to our construction Γ . An adversary is legitimate if z = x b , ω i = y b , ω i for both b ∈ {0, 1}. We note that A gets explicit access to the matrix W generated during setup. Indistinguishability of Encodings In this section we will prove two theorems that are essential tools in establishing the intractability of the κ-MDDH problem for our MLG scheme Γ constructed in Section 4. These theorems, roughly speaking, state that valid encodings of elements within a single equivalence class are computationally indistinguishable. We formalize this property via the κ-Switch game shown in Figure 3. This game lets an adversary A choose an element [z] i ∈ G i by producing two valid representations (x 0 , y 0 ) and (x 1 , y 1 ) for it. The adversary is given an encoding of [z] i generated using (x b , y b ) for a random b, and has to guess the bit b. In this game, besides access to pp, which contains the obfuscated circuits for the group operation and the multilinear map, we also provide the matrix W in the clear to the adversary. This strengthens the κ-Switch game and is needed for our later analysis. To prove that the advantage of A in the κ-Switch game is negligible we rely on the security of the obfuscator, the IND-CPA security of the encryption scheme, and the security of the NIZK proof system. Depending on the type of obfuscator and proof system used, we show indistinguishability of encodings in two incomparable ways: (1) using a probabilistic obfuscator that is secure for X-IND samplers and a dual-mode NIZK as defined in Section 2.4; and (2) using a (standard) indistinguishability obfuscator for deterministic circuits and a dual-mode NIZK that is required to satisfy a "witness-translation" property that we formalize in Section 5.2. Using probabilistic indistinguishability obfuscation The indistinguishability of encodings under the first set of assumptions above is conceptually simpler to prove, and we start with this case. Intuitively, the IND-CPA security of the encryption scheme will ensure that the encryptions of the two representations are indistinguishable. This argument, however, does not immediately work, as the parameters pp contain the component C Add that depends on both decryption keys. We deal with this by finding an alternative implementation of this circuit that does not require knowledge of the secret keys, in the presence of slightly different public parameters (which are computationally indistinguishable from those described in Section 4).
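Schematically, the alternative implementation established in the next lemma (and given in full in Fig. 4) differs from the original addition circuit only in which witness is handed to Prove; the sketch below suppresses the validity checks and the ciphertext homomorphism, and prove, valid, sk1, sk2 and w_y are placeholders.

    # Endpoints of the transformation in Lemma 5.1 (schematic only): the original
    # circuit proves well-formedness of the sum using the decryption keys on valid
    # inputs, whereas the alternative circuit always uses the trapdoor witness w_y
    # for y in TD. Under a hiding CRS the two output proofs are identically
    # distributed by perfect witness-indistinguishability.
    def add_with_keys(stmt, sk1, sk2, w_y, valid, prove):
        witness = (sk1, sk2) if valid(stmt, sk1, sk2) else w_y
        return prove(stmt, witness)

    def add_without_keys(stmt, w_y, prove):
        return prove(stmt, w_y)              # never touches sk1, sk2

    if __name__ == "__main__":
        prove = lambda stmt, wit: ("pi", stmt)
        valid = lambda stmt, a, b: True
        left = add_with_keys("stmt", "sk1", "sk2", "w_y", valid, prove)
        right = add_without_keys("stmt", "w_y", prove)
        assert left == right                 # trivially equal for this dummy prove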
The next lemma, roughly speaking, says that provided parameters pp include an instance y ∈ TD, then there exists an alternative implementation C Add that does not use the secret keys, and whose obfuscation is indistinguishable to that of C Add of Figure 2 (top) for an adversary that knows the secret keys. It relies on the security of the obfuscator and the security of the NIZK proof system. Lemma 5.1 (C Add without decryption keys). Let PIO be a secure obfuscator for X-IND samplers, and Σ be a dual-mode NIZK proof system. Additionally, let parameters pp sampled as in Section 4 but with y ∈ TD, and let pp sampled as pp but with a hiding CRS crs , and an obfuscation of circuit C Add of Fig. 4 (bottom). Then, for any PPT adversary A there are ppt adversaries B 1 and B 2 of essentially the same complexity as A such that for all λ ∈ N Pr[A( pp, sk 1 , sk 2 ) = 1 : (sk 1 , sk 2 ) ←$ Gen(1 λ )] -Pr[A( pp, sk 1 , sk 2 ) = 1 : (sk 1 , sk 2 ) ←$ Gen(1 λ )] ≤ 2 • Adv ind PIO,B 1 (λ) + Adv crs Σ,B 2 (λ). Proof. The crucial observation is that a witness w y to y ∈ TD is also a witness to x ∈ R, and therefore C Add can use w y instead of sk 1 , sk 2 to produce the output proof π . Below we provide descriptions of the transformation from C Add to C Add , and let W i denote the event that A in Game i outputs 1. Game 0 : We start with (a PIO obfuscation of) circuit C Add of Fig. 2 and with pp including y ∈ TD and a binding crs . Game 1 : The circuit has witness w y to y ∈ TD hard-coded. If some input reaches the "invalid" branches, C Add does not extract a witness from the corresponding proof, but instead uses w y to generate proof π (see Fig. 4 (top)). Note that Game 1 requires no extraction trapdoor td ext anymore. We claim that |Pr[W 0 (λ)] -Pr[W 1 (λ)]| ≤ Adv ind PIO,B 1 (λ) . By construction, the only difference between the games is that in Game 1 proof π , with respect to invalid (input) encodings, is generated using hard-coded witness w y to y ∈ TD. Since w y is unique, and the CRS crs guarantees perfect soundness, this leads to identical behavior of C Add in Game 0. Hence this hop is justified by PIO. Game 2 : The CRS crs included in the public parameters is now hiding (such that the generated proofs are perfectly witness-indistinguishable). We have that |Pr[W 1 (λ)] -Pr[W 2 (λ)]| ≤ Adv crs Σ,B 2 (λ), where B 2 is a PPT algorithm against the indistinguishability of binding and hiding CRS's. Game 3 : Here, output proofs π for those inputs entering the "valid" branch (step 5b of Fig. 4 (top)) use w y (and not sk 1 , sk 2 ) as witness. In particular, this game does not need to perform a explicit validity check (using sk 1 , sk 2 ) anymore, and therefore the addition circuit can be described as in Fig. 4 (bottom). We claim that |Pr[W 2 (λ)] -Pr[W 3 (λ)]| ≤ Adv ind PIO,B 1 (λ) . By constrution the only difference between both games is that, the public parameters in Game 2 contain a PIO obfuscation of C Add , and in Game 3 contain a PIO obfuscation of C Add of Fig. 4. In Lemma 5.2 we prove that these circuit variants are given by an X-IND sampler, and therefore their PIO obfuscations are indistinguishable. Figure 4: Circuits for addition of group elements used in Lemma 5.1. pp includes gpk = (pp 0 , pk 1 , pk 2 , [W] 0 , TD, y) where y ∈ TD (also includes a hiding CRS crs ). Both circuits also have hard-coded (the) witness wy to y ∈ TD. Top: sk 1 , sk 2 are used to produce π on valid inputs. Bottom: wy is always used to produce π . Lemma 5.2 (X-IND sampling). 
Let Σ be a dual-mode NIZK proof system for the relation (S, R) defined in Section 4.2. Suppose Σ is perfectly witness-indistinguishable under a hiding CRS. Let A 1 be a sampler which outputs circuits (C Add , C Add ) of Fig. 4. (Both circuits have the system parameters hard-coded in.) Then any A := (A 1 , A 2 ) for a PPT A 2 is X-IND for (the optimal) X, the size of the domain of the circuits. More precisely, for any (possibly unbounded) distinguisher D and for any PPT distinguisher D = (D 1 , D 2 ) and any λ ∈ N, Adv eq$ A,D (λ) = 0 and Adv sel-ind A,D (λ) = 0 . Proof. The first equality is immediate as X is set to be the entire domain of the circuits. The second equality follows from the perfect witness-indistinguishability property of the proof system. Indeed, the only difference between the two circuits is that, for those inputs that are valid encodings, C Add uses decryption keys sk 1 ,sk 2 as witness to generate the output proof π ← Prove(gpk, crs, ([z ] 0 , c 1 , c 2 ), (sk 1 , sk 2 ); r), and C Add uses witness w y to y ∈ TD (with y in the public parameters) to generate the proof π ← Prove(gpk, crs, ([z ] 0 , c 1 , c 2 ), w y ; r). The WI property with a hiding crs guarantees that π and π are identically distributed, and hence so are the outputs of C Add and C Add . Note that no random coins are hardwired into these circuits-we are in the PIO setting-and fresh coins are used to compute the circuits' outputs. of the proof below (see also Fig. 5). Theorem 5.3 (Switching encodings using PIO). Let Γ be the MLG scheme constructed in Section 4, where PIO is secure for X-IND samplers, Π is an IND-CPA-secure encryption scheme, and Σ is a dualmode NIZK proof system. Then, encodings of equivalent group elements are indistinguishable. More precisely, for any PPT adversary A and all λ ∈ N, there are ppt adversaries B 1 , B 2 , B 3 and B 4 of essentially the same complexity as A such that for all λ ∈ N Adv κ-switch Γ,A (λ) ≤ 3 • Adv sm TD,B 1 + 7 • Adv ind PIO,B 2 (λ) + 3 • Adv crs Σ,B 3 (λ) + 2 • Adv ind-cpa Π,B 4 (λ) . Furthermore B 2 is an X-IND sampler for any function X(λ). Proof sketch. The proof of this theorem proceeds via a sequence of 9 games as follows. Game 0 : This is the κ-Switch game. The public parameters pp contain a no-instance y / ∈ TD, a binding crs , C Add is constructed using (sk 1 , sk 2 ) and C Map using sk 1 (see Fig. 2). The ciphertexts c 1 and c 2 contain x b and y b for a random bit b. Game 1 : This game generates the public parameters pp so that include a yes-instance y ∈ TD. The difference to the previous game can be bounded via the hardness of deciding membership to TD. Game 2 : The public parameters pp change so that include a hiding crs , and a (PIO) obfuscation of circuit C Add , see Fig. 4. (Recall that this circuit uses the witness w y to y ∈ TD to produce the output proofs π , and therefore the simultaneous knowledge of decryption keys sk 1 ,sk 2 is not needed anymore.) By Lemma 5.1 the difference with the previous game can be bounded by PIO and CRS indistinguishability. Game 3 : This games generates c 2 by encrypting y 1 , even when b = 0. We can bound the difference in any adversary's success probability via the IND-CPA advantage of Π with respect to pk 2 (the reduction will know (pk 1 , sk 2 ) so as to be able to construct C Map .) Game 4 : The public parameters are changed back to pp, so that include a binding crs , and a (PIO) obfuscation of circuit C Add of Figure 2 (top). The difference with the previous game is bounded again with Lemma 5.1. 
Game 5 : Now a no-instance y / ∈ TD is included in the public parameters pp. This game is justified by the hardness of deciding membership to TD. Game 6 : This game uses sk 2 (in place of sk 1 ) in the generation of C Map circuit. In this transition we rely on the security of Obf and the perfect soundness of Σ. Perfect soundness implies consistency of the two representations underlying c 1 , c 2 (recall that this means they represent the same group element with respect to W). We then get that the two circuits (using sk 1 , and sk 2 , respectively) are funcitonally equivalent. We can then use the IO security of Obf to justify the switch from using sk 1 to using sk 2 . (Note tht for any function X, any obfuscator that is for X-IND samplers is also secure as an indistinguishability obfuscator.) Note that in this game it is crucial that the crs is in the binding mode. Game 7 : This game, similarly to Game 1 switches to public parameters pp with a yes-instance y ∈ TD. The analysis is as before. Game 8 : This game, similarly to Game 2 , includes in pp a hiding crs , and a (PIO) obfuscation of circuit C Add (see Fig. 4). The analysis is as before. Game 9 : This game generates c 1 by encrypting x 1 , even when b = 0. The analysis is as in Game 3 . Observe that the challenge encoding in Game 9 is independent of the random bit b and the advantage of any (possibly unbounded) adversary A is 0. Collecting bounds on the probabilities involved in the various game hops concludes the proof. public C Add C Map c 1 (b = 0) c 2 (b = 0) G. parameters knows knows contains contains remark 0 pp sk 1 ,sk 2 , tdext sk 1 (x 0 , y 0 ) (x 0 , y 0 ) 1 pp sk 1 ,sk 2 , tdext sk 1 (x 0 , y 0 ) (x 0 , y 0 ) TD indist. 2 pp wy sk 1 (x 0 , y 0 ) (x 0 , y 0 ) Lemma 5.1 3 pp wy sk 1 (x 0 , y 0 ) (x 1 , y 1 ) IND-CPA 4 pp sk 1 ,sk 2 , tdext sk 1 (x 0 , y 0 ) (x 1 , y 1 ) Lemma 5.1 5 pp sk 1 ,sk 2 , tdext sk 1 (x 0 , y 0 ) (x 1 , y 1 ) TD indist. 6 pp sk 1 ,sk 2 , tdext sk 2 (x 0 , y 0 ) (x 1 , y 1 ) PIO 7 pp sk 1 ,sk 2 , tdext sk 2 (x 0 , y 0 ) (x 1 , y 1 ) TD indist. 8 pp wy sk 2 (x 0 , y 0 ) (x 1 , y 1 ) Lemma 5.1 9 pp wy sk 2 (x 1 , y 1 ) (x 1 , y 1 ) IND-CPA Figure 5: Outline of the proof steps of Theorem 5.3. b is the random bit of the κ-Switch game (see Figure 3). Changing between pp and pp is justified by the hardness of deciding membership of TD, and changing between pp and pp by Lemma 5.1. The hops relying on PIO use the perfect completness and the perfect soundness under binding crs to argue function equivalence of C Map . Doing without probabilistic obfuscation In contrast to the PIO-based approach from Section 5.1, here we will only use (deterministic) indistinguishability obfuscation, but a stronger notion of NIZK proof system. Concretely, our proof works for any dual-mode NIZK proof system that enjoys perfect completeness, perfect soundness and extraction under BCRS, perfect WI under HCRS (as defined in Section 2.4), and meets a number of extra structural requirements as we detail below. These requirements are fulfilled by Groth-Sahai proofs [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF] based on the DDH or k-Linear assumption, and we are in fact not aware of other suitable proof systems. Still, we find it convenient to state only the abstract properties that we require for the our result in this section. The proof system Σ = (HCRS, BCRS, Prove, Verify, Sim) is required to satisfy the following structural properties. 
where we assume that the random coins of Com form a (finite) group whose operation is denoted by "+". In other words, w-commitments and w ′ -commitments not only have the same distribution, they also only differ by a fixed shift in the random coins of Com. We furthermore require that ∆ can be efficiently computed from w, w ′ , and the simulation trapdoor td zk . SUITABILITY OF GROTH-SAHAI PROOFS. We briefly comment on the suitability of the NIZK proof system of Groth and Sahai [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. 9 The dual nature of the proof system, as well as its perfect completeness, perfect soundness, perfect extractability, perfect witness-indistinguishability and perfect zero-knowledge, are already proven in [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. (However, cf. Footnote 4.) Moreover, it is easy to verify the syntactic requirements above. To see the homomorphic property ( ), it is necessary to look at the specific structure of the commitment algorithm Com from [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]. Specifically, when committing to a single bit b, commitments from [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF] have the form Com(gpk, crs, b; r) = b • u 0 + r 1 • u 1 + • • • + r n • u n , for publicly known vectors u i of group elements (contained in crs), and random coins r i ∈ Z q for q the order of the underlying group. (Commitments to bit strings w can be generated in a bitwise fashion.) When crs is produced by HCRS, then u 0 lies in the subgroup generated by u 1 , . . . , u n . Then, a commitment to b = 1 differs from a commitment to b = 0 by an additive shift ∆ = (∆ 1 , . . . , ∆ n ) ∈ (Z q ) n of the random coins with u 0 = ∆ 1 u 1 + • • • + ∆ n u n . Note that this shift does not depend on r. This shows property ( ). THE DETERMINISTIC CIRCUIT C Add . We now comment on a necessary slight tweak to the multilinear map construction itself. Namely, in order to facilitate a later use of property ( ), we adapt the way IO generates obfuscations of C Add . Recall that we can view C Add as a deterministic circuit that takes as inputs (among other things) random coins r, and outputs (among other things) a NIZK proof π = Prove(gpk, crs, x, w; r) for a fixed witness w hardwired into C Add . For our purposes, we use a slight variation of C Add that instead generates π as Prove(gpk, crs, x, w; R), where R is a uniformly random value that is hardwired into C Add . When we want to make the choice of R explicit, we also write C R Add . Theorem 5.4 (Switching encodings using IO). Let IO be an indistinguishability obfuscator, Π an IND-CPA-secure encryption scheme, and Σ the specific dual-mode NIZK proof system of Groth and Sahai (see [START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF]). Let Γ be the MLG scheme of Section 4 obtained using these primitives. Then, for any PPT adversary A, there exist PPT adversaries B 1 , B 2 , B 3 and B 4 of essentially the same complexity as A such that for all λ ∈ N Adv κ-switch Γ,A (λ) ≤ 3 • Adv sm TD,B 1 (λ) + 7 • Adv ind IO,B 2 (λ) + 3 • Adv crs Σ,B 3 (λ) + 2 • Adv ind-cpa Π,B 4 (λ) . Proof. The proof of Theorem 5.4 proceeds like that of Theorem 5.3, except of course in those steps that use the security of the probabilistic indistinguishability obfuscator PIO. There are two types of such steps (resp. changes of C Map or C Add ): in the first type, functional equivalence is fully preserved (even when viewing C Add as a deterministic circuit).
This type of change occurs in the hop from Game 0 to Game 1 in the proof of Lemma 5.1, and in the hop from Game 5 to Game 6 in the proof of Theorem 5.3. Since the corresponding deterministic circuits are functionally equivalent (in case of C Add = C R Add : when the same value of R is used), the security of IO can be directly utilized. The second type of steps occurs in the hop from Game 2 to Game 3 in the proof of Lemma 5.1. More concretely, these games, which in the IO setting are slightly different, are as described below. Game 2 : The public parameters include, among other things, an IO obfuscation of the top circuit of Fig. 4, (albeit with the change we impose to C Add in the IO setting). This circuit generates outputs proofs (if the input encodings are valid) using witness sk 1 ,sk 2 . Game 3 : The public parameters include, among other things, an IO obfuscation of the bottom circuit of Fig. 4 (albeit with the change we impose to C Add in the IO setting). This circuit generates proofs using always witness w y . We now argue that |Pr[W 2 (λ)] -Pr[W 3 (λ)]| ≤ Adv ind IO,B 1 (λ) , where W i denotes the event that A in Game i outputs 1. Using ( ), the change results in a circuit that is functionally equivalent to the circuit from Game 3 when run with a suitably shifted random input R + ∆. By our setup of C Add , we can express this shift through the hardwired coins as C R Add,3 ≡ C R+∆ Add,2 , where C Add,i denotes the circuit C Add from Game i , ∆ = ∆((sk 1 , sk 2 ), w y ) is the appropriate shift vector from ( ), and ≡ denotes functional equivalence. Hence, our change in Game 3 can be justified with a reduction to the (deterministic) indistinguishability property of IO. Specifically, a suitable circuit sampler A 1 (as in Section 2.3) would sample circuits C 1 := C R Add,1 and C 2 := C R+∆ Add,0 for a uniform R, and a ∆ generated from the corresponding witnesses (sk 1 , sk 2 ) and w y . (We note that during this reduction, we can of course assume (sk 1 , sk 2 ) and w y to be known.) We would like to highlight that Game 3 itself still chooses a uniform R to prepare proofs. In particular, Game 3 does not explicitly compute any ∆ value as in ( ), and hence does not make use of the corresponding witness (sk 1 , sk 2 ). (The value ∆ is only explicitly computed during the reduction to the indistinguishability property of IO.) The remaining parts of the proof of Theorem 5.3 (including the proof of Lemma 5.1) apply unchanged. DDH A Γ 0 (λ): pp ←$ Setup 0 (1 λ , 1 0 ) b ←$ {0, 1} x, y, z ←$ Z N if b = 1 then z ← x • y b ←$ A(pp, [x] 0 , [y] 0 , [z] 0 ) Return (b = b ) q-SDDH A Γ 0 (λ): pp ←$ Setup 0 (1 λ , 1 0 ) q ← q(λ); b ←$ {0, 1} x, z ←$ Z N if b = 1 then z ← x q+1 b ←$ A(pp, [x] 0 , . . . , [x q ] 0 , [z] 0 ) Return (b = b ) (κ, I)-MDDH A Γ (λ): pp ←$ Setup(1 λ , 1 κ ) b ←$ {0, 1} a 1 , . . . , a T , z ←$ Z N if b = 1 then [z] T ← e([a 1 ] 1 , . . . , [a i ] i ) a T b ←$ A(pp, {[a i ] j } (i,j)∈I , [z] T ) Return (b = b ) The Multilinear DDH Problem In this section we show that natural multilinear analogues of the decisional Diffie-Hellman (DDH) problem are hard for our MLG scheme Γ from Section 4. We will establish this for two specific Setup algorithms which give rise to symmetric and asymmetric multilinear maps in groups of prime order N. (See Section 3 for the formal definition.) In the symmetric case, we will base hardness on the q-strong DDH problem [START_REF] Boneh | Short group signatures[END_REF] and in the asymmetric case on the standard DDH problem. 
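To fix ideas before formalizing the problems, the toy harness below runs the (κ, I*)-MDDH experiment over an idealized symmetric κ-linear group in which an element [a]_i is represented directly by its exponent a; such a group is of course trivially insecure, and the sketch only makes the shape of the challenge explicit. All names are illustrative.

    # Toy (kappa, I*)-MDDH experiment over an "ideal" symmetric multilinear group:
    # [a]_i is modelled by the exponent a itself, and the kappa-linear map sends
    # ([a_1]_1, ..., [a_kappa]_kappa) to [a_1 * ... * a_kappa]_T. Since exponents
    # are in the clear, the problem is trivially easy in this toy model.
    import math, random

    N = 2**61 - 1                            # a prime group order (toy choice)
    KAPPA = 4

    def e(*exponents):                       # e([a_1]_1, ..., [a_k]_k) = [prod a_i]_T
        return math.prod(exponents) % N

    def mddh_experiment(adversary):
        a = [random.randrange(N) for _ in range(KAPPA + 1)]   # a_1, ..., a_{kappa+1}
        b = random.randrange(2)
        z = (e(*a[:KAPPA]) * a[KAPPA]) % N if b else random.randrange(N)
        return adversary(a, z) == b          # a plays the role of {[a_i]_{I(i)}}

    if __name__ == "__main__":
        distinguisher = lambda a, z: int(z == math.prod(a) % N)   # works only in this toy model
        wins = sum(mddh_experiment(distinguisher) for _ in range(1000))
        print("toy success rate:", wins / 1000)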
Intractable problems elements [a i ] j for (i, j) ∈ I, so that its challenge elements may lie in any combination of the groups. The standard MDDH problem corresponds to the case where I = I * := {(1, 1), . . . , (κ, κ), (κ + 1, κ)} . The symmetric setting We describe a special variant of our general construction in Section 4 which gives rise to a symmetric MLG scheme as defined in Section 3. Recall that in the construction a matrix W was chosen uniformly at random in Z κ× N . We set := 2 and sample W = (ω 1 , . . . , ω κ ) t by setting ω i = (1, ω) for a random ω ∈ Z N . The generators and identity elements for all groups are set to be a single value generated for the first group. These modifications ensure that the scheme algorithms are independent of the index for 1 ≤ i ≤ κ and that e is invariant under all permutations of its inputs. The following lemma, which provides a mechanism to compute polynomial values "in the exponent," will be helpful in the security analysis of our constructions. Lemma 6.1 (Horner in the exponent). Let ω = (ω 0 , ω 1 , ω 2 ) ∈ Z N , and x i = (x i,0 , x i,1 , x i,2 ) ∈ Z 3 N for i = 1 . . . κ. Define z i := x i , ω . Then given only the implicit values [ω i 0 ω j 1 ω k 2 ] T , for all i, j, k such that i + j + k = κ and the explicit values x i the element [z 1 • • • z n ] T can be efficiently computed. Proof. Let P(ω 0 , ω 1 , ω 2 ) := κ i=1 (x i,0 • ω 0 + x i,1 • ω 1 + x i,2 • ω 2 ) = i+j+k=κ p ijk • ω i 0 ω j 1 ω k 2 , Clearly, if all p ijk are known then [P(ω)] T can be computed using [ω i 0 ω j 1 ω k 2 ] T with polynomially many operations. (There are O(κ 2 ) summands above.) To obtain these values we apply Horner's rule. Define P i (ω 0 , ω 1 , ω 2 ) := 1 if i = 0 ; (x i,0 • ω 0 + x i,1 • ω 1 + x i,2 • ω 2 ) • P i-1 (ω 0 , ω 1 , ω 2 ) otherwise. The coefficients of P κ are the required p ijk values. Let t i denote the number of terms in P i . It takes at most 3t i multiplications and t i -1 additions in Z N to compute the coefficients of P i from P i-1 and x i . Since t i ∈ O(κ 2 ), at most O(κ 3 ) many operations in total are performed. We note that the lemma generalizes to any (constant) with computational complexity O(κ ). We prove the following result formally in Appendix A.2 and give an overview of the proof here. Below I = I * denotes the index set with all the second components being 1. Theorem 6.2 ((κ -1)-SDDH hard =⇒ symmetric (κ, I * )-MDDH hard). Let Γ * denote scheme Γ of Section 4 constructed using base group Γ 0 and an indistinguishability obfuscator IO with modifications as described above, and let κ ∈ N. Then for any PPT adversary A there are ppt adversaries B 1 , B 2 and B 3 of essentially the same complexity as A such that for all λ ∈ N Adv (κ,I * )-mddh Γ * ,A (λ) ≤ 2 • Adv (κ-1)-sddh Γ 0 ,B 1 (λ) + Adv ind IO,B 1 (λ) + (κ + 1) • Adv κ-switch Γ * ,B 3 (λ) + κ -1 N(λ) . Proof. In our reduction, the value ω used to generate W will play the role of the implicit value in the SDDH problem instance. We therefore change the implementation of C Map to one that does not know ω in the clear and only uses the implicit values [ω i ] 0 (recall that in our construction G T is just G 0 , so these elements come from the SDDH instance). Such a circuit C * Map can be efficiently implemented using Horner's rule above. In more detail, C * Map has [ω i ] T hard-coded in, recovers x i from its inputs using sk 1 , and then applies Lemma 6.1 with (ω 0 , ω 1 , ω 2 ) := (1, ω, 0) to evaluate the multilinear map. 
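The coefficient computation in the proof of Lemma 6.1, and the final evaluation "in the exponent", can be sketched as follows; in the self-test the implicit values [ω_0^i ω_1^j ω_2^k]_T are modelled simply by exponents, which is only a toy stand-in for the target-group encodings.

    # Lemma 6.1 sketch: compute the coefficients p_{ijk} of
    #   P(w0, w1, w2) = prod_i (x_i0*w0 + x_i1*w1 + x_i2*w2)
    # iteratively (Horner-style), then evaluate P from encodings of the
    # monomials w0^i * w1^j * w2^k with i + j + k = kappa.
    from collections import defaultdict

    def coefficients(xs, N):
        poly = {(0, 0, 0): 1}                       # P_0 = 1
        for x in xs:                                # P_i = <x_i, w> * P_{i-1}
            nxt = defaultdict(int)
            for (i, j, k), c in poly.items():
                nxt[(i + 1, j, k)] = (nxt[(i + 1, j, k)] + c * x[0]) % N
                nxt[(i, j + 1, k)] = (nxt[(i, j + 1, k)] + c * x[1]) % N
                nxt[(i, j, k + 1)] = (nxt[(i, j, k + 1)] + c * x[2]) % N
            poly = dict(nxt)
        return poly                                  # every key sums to len(xs)

    def eval_in_exponent(poly, monomial_encodings, N):
        # monomial_encodings[(i, j, k)] stands for [w0^i * w1^j * w2^k]_T; in this
        # toy model an "encoding" is just the exponent itself.
        return sum(c * monomial_encodings[(i, j, k)]
                   for (i, j, k), c in poly.items()) % N

    if __name__ == "__main__":
        N, w = 101, (1, 5, 7)
        xs = [(2, 3, 0), (1, 0, 4), (6, 2, 9)]       # kappa = 3 representation vectors
        encs = {(i, j, k): pow(w[0], i, N) * pow(w[1], j, N) * pow(w[2], k, N) % N
                for i in range(4) for j in range(4) for k in range(4) if i + j + k == 3}
        expected = 1
        for x in xs:
            expected = expected * (x[0]*w[0] + x[1]*w[1] + x[2]*w[2]) % N
        assert eval_in_exponent(coefficients(xs, N), encs, N) == expected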
The proof proceeds along a sequence of κ + 6 games as follows. Game 0 : This is the κ-MDDH problem (Figure 6, right). We use x i and y i to denote the representation vectors of a i generated within the sampler Sam I(i) (a i ), where (i, I(i)) ∈ I. Game 1 -Game κ : In these games we gradually switch the representations of [a i ] 1 for i ∈ [κ] so that they are of the form (a i -ω, 1). Each hop can be bounded via the Switch game. (We have not (yet) changed the representation of [a κ+1 ] 1 .) Game κ+1 : This game introduces a conceptual change: the a i for i ∈ [κ] are generated as a i + ω. Note that the distributions of these values are still uniform and that the exponent of the MDDH challenge when b = 1 is a κ+1 • κ i=1 (a i + ω) . This game prepares us for embedding a (κ -1)-SDDH challenge and then to stepwise randomize the exponent above. Game κ+2 : This game switches C Map to C * Map as defined above. We use indistinguishability obfuscation and the fact that these circuits are functionally equivalent to bound this hop. We are now in a setting where ω is only implicitly known. Game κ+3 : This game replaces [ω κ ] 0 with a random value [τ] 0 in C * Map and the computation of the challenge exponent. This hop can be bounded via the (κ -1)-SDDH game. Note that at this point the exponent is not information-theoretically randomized as τ is used within C * Map . Game κ+4 : This game sets the representation of [a κ+1 ] 1 to (a κ+1 -ω, 1). Once again, this hop can be bounded by the Switch game. Game κ+5 : This game introduces a conceptual change analogous to that in Game κ+1 for a κ+1 . Note that a linear factor (a κ+1 + ω) is introduced in this game. This will help to fully randomize the exponent next. Game κ+6 : Analogously to Game κ+3 , this game replaces [ω κ ] 0 with a random value [σ] 0 . We bound this hop using the (κ -1)-SDDH game. In Game κ+6 , irrespective of the value of b ∈ {0, 1}, the challenge is uniformly and independently distributed as σ remains outside the view of the adversary. Hence the advantage of any (unbounded) adversary in this game is 0. This concludes the sketch proof. The asymmetric setting We describe a second variant of the construction in Section 4 that results in an asymmetric MLG scheme. We set := 2 and choose the matrix W = (ω 1 , . . . , ω κ ) t by setting ω i := (1, ω i ) for random ω i ∈ Z N . The following theorem shows that for index set I = {(i, I(i)) : 1 ≤ i ≤ κ + 1} given by an arbitrary function I : [κ + 1] -→ [κ] of range at least 3, this construction is (κ, I)-MDDH intractable under the standard DDH assumption in the base group, the security of the obfuscator, and the κ-Switch game in Section 5. We present the proof intuition here and leave the details to Appendix A.3. Theorem 6.3 (DDH hard =⇒ asymmetric (κ, I * )-MDDH hard). Let Γ * denote scheme Γ of Section 4 constructed using base group Γ 0 and an indistinguishability obfuscator IO with modifications as described above. Let κ ≥ 3 be a polynomial and I * as above. Then for any PPT adversary A there are ppt adversaries B 1 , B 2 and B 3 such that for all λ Adv (κ,I * )-mddh Γ * ,A (λ) ≤ 2 • Adv ddh Γ 0 ,B 1 (λ) + 2 • Adv ind IO,B 2 (λ) + 3 • Adv κ-switch Γ * ,B 3 (λ) + κ + 1 N(λ) . Proof. The general proof strategy is similar to that of the symmetric case, and proceeds along a sequence of 8 games as follows. Game 0 : This is the (κ, I)-MDDH problem. Without loss of generality we assume that I(i) = i for i ∈ [3]. 
Game 1 -Game 3 : In these games we gradually switch the representation vectors of [a i ] i for i = 1, 2, 3 to those of the form (a i -ω i , 1). Each of these hops can be bounded via the Switch game. Game 4 : This game introduces a conceptual change and generates a i as a i + ω i . The exponent of the MDDH challenge when b = 1 is (a 1 + ω 1 )(a 2 + ω 2 )(a 3 + ω 3 ) • κ+1 j≥4 a j . Game 5 : In this game we change the implementation of C Map to one which uses all but two of the ω i explicitly, the remaining two implicitly, and additionally [ω 1 ω 2 ] 0 , i.e., ω 1 ω 2 given implicitly in the exponent. The new circuit C * Map will be implemented using Horner's rule and is functionally equivalent to the original circuit used in the scheme. We invoke the IO security of the obfuscator to conclude the hop. This game prepares us to embed a DDH challenge next. Game 6 : In this game we replace all the occurrences of [ω 1 ω 2 ] 0 with a random [τ] 0 and the corresponding implicit values. We bound the distinguishing advantage in this hop down to the DDH game. Game 7 : Similarly to Game 5 , we change the implementation of C * Map using [τω 3 ] 0 and argue via indistinguishability of obfuscations for functionally equivalent circuits. Game 8 : Finally, using the hardness of DDH, we replace all the occurrences of [τω 3 ] 0 with a random [σ] 0 . In Game 8 , irrespective of the value of b ∈ {0, 1}, the challenge is uniformly and independently distributed as σ remains outside the view of the adversary. Hence the advantage of any (possibly unbounded) adversary in this game is 0. (κ, m, n, r 0 , r 1 )-RANK A Γ (λ): pp ←$ Setup(1 λ , 1 κ ) b ←$ {0, 1} M 0 ←$ Rk r 0 (Z m×n N ); M 1 ←$ Rk r 1 (Z m×n N ) b ←$ A(pp, [M b ]) Return (b = b ) The Rank Problem The RANK problem is a generalization of DDH-like problems to matrices and has proven to be very useful in cryptographic constructions [BHHO08, NS09, GHV12, BLMR13, EHK + 13]. Here we consider the problem in groups with non-unique encodings equipped with a multilinear map. Our main result is to show that, subject to certain restrictions, the intractability of the rank problem for our construction of an MLG scheme Γ from Section 4 follows from that of the q-SDDH problem for Γ 0 . Formalization of the problem Let pp denote the public parameters of such an MLG scheme, obtained by running Setup with input (1 λ , 1 κ ). For simplicity, we focus on the case where N is prime. Let Rk r (Z m×n N ) denote the set of m × n matrices over Z N of rank r, where necessarily r ≤ min(m, n). We use a variant of our construction in Section 4, setting := 3 and sampling W = (ω 1 , . . . , ω κ ) t ∈ Z κ×3 N where ω i = (1, ω, ω 2 ) for ω ←$ Z N . Note that this results in a symmetric pairing and henceforth we omit subscripts from source group elements. Let [M] denote a matrix whose (i, j)th entry contains an encoding of the form [m i,j ] = ([m i,j ] 0 , c i,j,1 , c i,j,2 , π i,j ), with m i,j ∈ Z N . THE (κ, m, n, r 0 , r 1 )-RANK PROBLEM. For κ, m, n, r 0 , r 0 ∈ N we say that an MLG scheme Γ is (κ, m, n, r 0 , r 1 )-RANK intractable if Adv (κ,m,n,r 0 ,r 1 )-rank Γ,A (λ) := 2 • Pr (κ, m, n, r 0 , r 1 )-RANK A Γ (λ) -1 ∈ NEGL , where game (κ, m, n, r 0 , r 1 )-RANK A Γ (λ) is shown in Figure 7. In the presence of a κ-linear map the rank problem is easy for any r 0 < r 1 < κ, since the determinants of all the r b -minors can be expressed as forms of degree at most κ, and the multilinear map can be used to distinguish their images in the target group. 
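For intuition, the attack just described is easiest to see for κ = 2 and 2 × 2 matrices given in the exponent: the determinant is a form of degree 2, so a bilinear map decides rank 1 versus rank 2. The sketch below again uses an idealized group in which an element is represented by its exponent, purely for illustration.

    # Deciding rank 1 vs rank 2 of a 2x2 matrix [M] given in the exponent with a
    # bilinear map: det(M) = m00*m11 - m01*m10 is a degree-2 form, so
    # e([m00],[m11]) - e([m01],[m10]) equals the identity of G_T iff det(M) = 0.
    # Idealized group: an element [x] is modelled by the exponent x itself.
    import random

    N = 2**61 - 1                                     # prime order (toy choice)

    def e(x, y):                                      # bilinear map on exponents
        return (x * y) % N

    def looks_singular(M):                            # M = [[m00, m01], [m10, m11]]
        det_in_target = (e(M[0][0], M[1][1]) - e(M[0][1], M[1][0])) % N
        return det_in_target == 0                     # identity element of G_T

    if __name__ == "__main__":
        a, b, s = (random.randrange(1, N) for _ in range(3))
        rank_one = [[a, b], [(s * a) % N, (s * b) % N]]   # rows are proportional
        rank_two = [[a, b], [0, 1]]                       # det = a, nonzero by choice of a
        print(looks_singular(rank_one), looks_singular(rank_two))   # True False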
However, this does not invalidate the plausibility of the rank problem for κ ≤ r 0 < r 1 ; indeed there are known reductions to the DDH, the decision linear and the 2-MDDH problems [BHHO08, NS09, GHV12]. We show that for our construction in Section 4, with the modification introduced above, the rank problem is indeed hard provided κ ≤ r 0 < r 1 . A standard hybrid argument shows that it is sufficient to establish this for r 1 := r 0 + 1, with a polynomial loss in the security. Our main result is stated below. The full proof can be found in Appendix A.4. Theorem 7.1 (SDDH =⇒ RANK). Let Γ denote scheme Γ of Section 3 with := 3 and with respect to the base group Γ 0 and an indistinguishability obfuscator IO. Let κ, m, n, r be integers with r ≥ κ. Then for any PPT adversary A there are ppt adversaries B 1 , B 2 and B 3 of essentially the same complexity as A such that for all λ ∈ N Adv (κ,m,n,r,r+1)-RANK Γ,A (λ) ≤ 2κ-1 q=1 Adv q-sddh Γ 0 ,B 1 (λ) + Adv ind IO,B 2 (λ) + (mn) • Adv κ-switch Γ,B 3 (λ) + 1 N(λ) . Proof intuition The main difficulty comes in generating consistent encodings of a rank r challenge matrix [M] throughout its gradual transformation into a rank r + 1 challenge matrix. Contrast this with the MDDH reduction of Section 6, where the challenge that is transformed lives in the target group -a group with unique encodings. As we will see below, having encodings that are represented also with respect to ω 2 will help to overcome this problem and embed a 1-SDDH tuple. EMBEDDING THE SDDH CHALLENGE. To reduce the rank problem to 1-SDDH, consider the following matrix [W] 0 = [1] 0 [ω] 0 [ω] 0 [τ] 0 , which is formed from an 1-SDDH challenge. We will exploit the fact that if τ = ω 2 then W has rank 2, and if τ is uniform then it has rank 2 with overwhelming probability in λ. LIFTING.To obtain an m × n matrix M of rank r ≥ κ or r + 1 we can use the standard trick of embedding the identity matrix I r-1 in the diagonal: M =   S I r-1 0   , where 0 denotes padding with zeroes from Z N to bring the matrix up to the required size. Moreover, via the random self-reducibility of the rank problem the structure in M can be removed. An important point worth mentioning is that after the randomization we are still able to generate an encoded matrix [M] even when ω and τ are only known in the exponent. BREAKING CORRELATION WITH C Map . We follow a similar strategy to break the dependency between C Map and ω. Using the powers [h] 0 = ([1] 0 , [ω] 0 , . . . , [ω 2κ ] 0 ) we build circuit functionally equivalent to C Map , indeed a circuit that outputs κ i (x i,0 + x i,1 ω + x i,2 ω 2 ) T via Lemma 6.1 (recall that G T = G 0 ), and invoke the security of the obfuscator. We then use the q-SDDH assumptions for 2 ≤ q ≤ 2κ -1 in G 0 to gradually transform where TD is a language where is hard to decide membership. Game 6 : As Game 5 , but now the challenger constructs a different circuit C Map with the second encryption secret key hard-coded. Thus, the extracted vector is set to y i ← Dec(c i,1 , sk 2 ). We claim that |Pr[W 5 (λ)] -Pr[W 6 (λ)]| ≤ Adv ind PIO,B 1 (λ). The variants of the C Map circuit described in the games extract (possibly different) encoding vectors x * i , y * i , respectively, for any adversarial input x * = (x * 1 , . . . , x * k ). Observe that the i-th argument x * i = (i, [z i ] 0 , c i,1 , c i,2 , π i ) has a non-rejecting proof π i iff ([z i ] 0 , c i,1 , c i,2 ) passes relation R 1 . (In other words, the ciphertexts encrypt representation vectors of the same [z i ] 0 .) 
It follows, from the perfect completeness and perfect soundness of the proof system with a binding CRS, that these variants behave identically on any (possibly malformed) input x * . Therefore the variants are functionally equivalent and hence trivially drawn by an X-IND sampler, so that their PIO obfuscations are indistinguishable. Game 7 : As Game 6 but now the public parameters pp are changed so that include a yes-instance y ∈ TD. We have that |Pr[W 6 (λ)] -Pr[W 7 (λ)]| ≤ Adv sm TD,B 1 (λ), where TD is a language where is hard to decide membership. (Technically, the circuit C Map in game Switch is a κ-linear map, and at this point of the transformation C * Map is not necessarily linear. However, a closer look to the proof of Theorem 5.3 shows that no linearity assumption is made on the C Map circuit.) Game κ+5 : The last source exponent is sampled as a κ+1 = a κ+1 + ω for a randomly chosen a κ+1 . This means that, for a known τ in Z N , the challenge d in Equation (3) can be written as where the coefficients p of polynomial P are computed as explained in Game κ+2 , and polynomial Q τ is given by coefficients q = (0, p 0 + τp κ , p 1 , . . . , p κ-1 ). Since Here σ is a random fresh value in Z N . A (κ -1)-SDDH challenge ([ω i ] 0 ) i≤κ-1 , [σ] 0 ) can be used to emulate the challenger in Game κ+4 if σ = ω κ , or in Game κ+5 if σ is random. The Figure 1 : 1 Figure 1: Left: IND-CPA security of a (homomorphic) PKE scheme. Middle: Indistinguishability security of an obfuscator. Right: Static-input (aka. selective) X-IND property of A := (A 1 , A 2 ). Proof structure : Proofs π output by Prove or Sim have the form π = (π com , π open ). We call π com the commitment part and π open the opening part of π. Commitment parts output by Prove : The commitment part π com of a proof generated by Prove is a (probabilistic) commitment to the respective witness w ∈ {0, 1} . (Recall that we can restrict ourselves to witnesses that are bit strings of a length = p(|x|) for a fixed and public polynomial p.) That is, we can write π com = Com(gpk, crs, w; r) for a fixed commitment algorithm Com and Prove's random coins r. Furthermore, Prove uses no random coins beyond those used for Com. Homomorphic property of commitment parts : The commitment algorithm Com used by Prove and Sim is homomorphic in the following sense: for every gpk, every hiding crs (generated by HCRS), and all w, w ∈ {0, 1} , there exists a value ∆ = ∆(w, w ), such that ∀r : Com(gpk, crs, w; r) = Com(gpk, crs, w ; r + ∆) , ( ) Figure 6 : 6 Figure 6: Left: The DDH problem. Middle: The strong DDH problem. Right: The multilinear DDH problem, where I specifies the available group elements. By slight abuse of notation, repeated use of [a i ] i denotes the same sample. Figure 7 : 7 Figure 7: The RANK problem parameterized by integers κ, m, n, r 0 and r 1 . [h] 0 into [q] 0 = ([1] 0 , [ω] 0 , [ω 2 ] 0 , [τ 3 ] 0 , . . . , [τ 2κ ] 0) and embed a 1-SDDH tuple in the challenge matrix [M] as explained above.Game 4 : The public parameters are changed back to pp, so that include a binding crs , and a (PIO) obfuscation of circuit C Add of Fig.2(top). ( pp also include a yes-instance y ∈ TD.) Again by Lemma 5.1 we have that|Pr[W 3 (λ)] -Pr[W 4 (λ)]| ≤ 2 • Adv ind PIO,B 2 (λ) + Adv crs Σ,B 3 .Game 5 : As Game 4 but now the public parameters pp are changed back to the original one described in Section 4 so that include a no-instance y / ∈ TD. 
We have that |Pr[W 4 (λ)] -Pr[W 5 (λ)]| ≤ Adv sm TD,B 1 (λ), Game 8 : 8 The public parameters pp change so that include a hiding crs , and a (PIO) obfuscation of circuit C Add (see Fig.4(bottom)). By Lemma 5.1 we have that|Pr[W 7 (λ)] -Pr[W 8 (λ)]| ≤ 2 • Adv ind PIO,B 2 (λ) + Adv crs Σ,B 3Game 9 : As Game 8 , but, if b = 0 the challenge encoding is generated by mixing the representation vectors w.r.t public key pk 1 . Thus, on A's response (z, (x 0 , y 0 ), (x 1 , y 1 )), in this game we set c 0 ← Enc(x 1 , pk 1 ; r 1 ), and c 1 ← Enc(y 1 , pk 2 ; r 2 ). Using a similar argument as in Claim A.1 we have that |Pr[W 8 (λ)] -Pr[W 9 (λ)]| ≤ Adv ind-cpa Π,B 4 (λ). Finally, Pr[W 9 (λ)] = 1/2 because the challenge encoding is generated using the same pair of representation vectors (x 1 , y 1 ) regardless of the bit b. The proof of the theorem is concluded by collecting the terms above. Lemma 6.1 implies that (1) both circuits are functionally equivalent, and (2) C * Map is of size poly(λ). We conclude that obfuscations of these two variants are indistinguishable. Or putting it differently:|Pr[W κ+1 (λ)] -Pr[W κ+2 (λ)]| ≤ Adv ind IO,B (λ) .Game κ+3 : Here a different challenge [d] T is generated. We now regard the (κ + 1)-vector p of Equation (2) as a multivariate Z N -polynomial P in κ unknowns. The challenger samples random ω, τ in Z N and sets[d] T = a κ+1 • [P(ω, ω 2 , . . . ω κ-1 , τ)] T ,(3)where P is evaluated in the exponent. Also, the circuit C * Map has hard-coded [τ] T and [ω i ] T for 1 ≤ i ≤ κ -1. We emphasize that in this game, for challenge bit b = 1, the random variables sampling d and (an obfuscation of) C Map are not independent (in case b = 0 they are).We claim that|Pr[W κ+2 (λ)] -Pr[W κ+3 (λ)]| ≤ Adv (κ-1)-sddh Γ 0 ,B(λ) .This follows because to simulate both games it suffices to know [τ] T and [ω i ] T in the exponent. Thus, an adversary B against (κ -1)-SDDH on receiving challenge([ω i ] 0 ) i≤κ-1 , [τ] 0 ) can simulate experiment Game κ+2 if τ = ω κ , or experiment Game κ+3 if τ is random. (Recall that G T = G 0 .)Gameκ+4 : The last source encoding a κ+1 is generated with representation vectors x κ+1 = y κ+1 = (a κ+1 -ω, 1). Using a similar argument as Claim A.2 we have that |Pr[W κ+3 (λ)] -Pr[W κ+4 (λ)]| ≤ Adv κ-switch Γ * ,B (λ) . [d] T = (a κ+1 + ω) • [P(ω, ω 2 , . . . ω κ-1 , τ)] T = a κ+1 • [P(ω, ω 2 , . . . ω κ-1 , τ)] T + [Q τ (ω, ω 2 , . . . , ω κ )] T a κ+1 and d (for case b = 1) are distributed as in the previous game we have that Pr[W κ+4 (λ)] = Pr[W κ+5 (λ)]. Game κ+6 : The last game samples the challenge [d] T for case b = 1 as [d] T = a κ+1 • [P(ω, ω 2 , . . . ω κ-1 , τ)] T + [Q τ (ω, ω 2 , . . . , σ)] T . This version of the paper fixes a flaw that we found in the proof of Theorem 5.3. The construction of Section 4 has been slightly modified, but it does not make use of stronger assumptions and has comparable efficiency. This is not trivial since the new method should not lead to an exponential blow-up in κ. The authors of[START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF] state that such information can be added in their construction, but what would be needed is the obfuscation of a circuit for computing 4xyth powers. The information available for building this would be obfuscations of circuits for computing 2xth and 2yth powers, so an obfuscation of a composition of already obfuscated circuits would be required. 
Strictly speaking then, the auxiliary information associated with elements output by their pairing is of a different type to that belonging to the inputs, making it questionable whether "self-pairing" is the right description of what is constructed in[START_REF] Yamakawa | Selfbilinear map on unknown order groups from indistinguishability obfuscation and its applications[END_REF]. We note that extraction in Groth-Sahai proofs does not for all types of statements recover a witness. (Instead, for some types of statements, only g w i for a witness variable w i ∈ Zp can be recovered.) Here, however, we will only be interested in witnesses w = (w 1 , . . . , wn) ∈ {0, 1} n that are bit strings, in which case extraction always recovers w.(Specifically, extraction will recover g w i for all i, and thus all w i .) However, note that N i need not be the least integer with this property. It is also more natural to work with unbounded maps in the graded setting as otherwise we would have to provide an exponential number of inputs and hence assume "default" values rather than being able to include them all in pp. An example of such a language is the Diffie-Hellman language TD = {(g r 1 , g r 2 ) | r ∈ N} in a DDH group with generators g 1 , g 2 . In particular, a suitable trapdoor language imposes no additional computational assumption in our upcoming security proof. Typically, the obfuscated circuit will have a PRF key hardwired in and derives the required randomness by applying the PRF to the circuit inputs. With Lemma 5.1 we can invoke IND-CPA security, and via a sequence of games obtain the result stated below. The proof can be found in Appendix A.1; we will give a high-level overview In fact[START_REF] Groth | Efficient non-interactive proof systems for bilinear groups[END_REF] presents different variants of their proof system in prime-order and composite-order groups. Here we refer to the prime-order variants. We start by formalizing the hard problems that we will be relying on and those whose hardness we will be proving. We do this in a uniform way using the language of group schemes of Section 3. Informally, the DDH problem requires the indistinguishability of g xy from a random element given (g x , g y ) for random x and y, the q-SDDH problem requires this for g x q+1 given (g x , g x 2 , . . . , g x q ) and the κ-MDDH problem, whose hardness we will be establishing, generalizes the standard bilinear DDH problem (and its variants) and requires this for ga 1 •••a κ+1 Tin the presence of (g a 1 , . . . , g a κ+1 ).THE DDH PROBLEM. We say that a group scheme Γ 0 is DDH intractable ifAdv ddh Γ 0 ,A (λ) := 2 • Pr DDH A Γ 0 (λ) -1 ∈ NEGL ,where game DDH A Γ 0 (λ) is shown in Figure6(left). THE q-SDDH PROBLEM. For q ∈ N we say that a group scheme Γ 0 is q-SDDH intractable ifAdv q-sddh Γ 0 ,A (λ) := 2 • Pr q-SDDH A Γ 0 (λ) -1 ∈ NEGL ,where game q-SDDH A Γ 0 (λ) is shown in Figure6(middle). THE (κ, I)-MDDH PROBLEM. For κ ∈ N we say that an MLG scheme Γ is κ-MDDH intractable with respect to the index set I ifAdv (κ,I)-mddh Γ,A (λ) := 2 • Pr (κ, I)-MDDH A Γ (λ) -1 ∈ NEGL ,where game (κ, I)-MDDH A Γ (λ) is shown in Figure6(right). Here I is a set of ordered pairs of integers (i, j) with 1 ≤ i ≤ κ + 1, 1 ≤ j ≤ κ. The adversary is provided with challenge group Here,[d] T and C * Map are still correlated via [τ] 0 . This is why we cannot stop at Game 6 . Acknowledgements Albrecht, Larraia and Paterson were supported by EPSRC grant EP/L018543/1. 
Hofheinz was supported by DFG grants HO 4534/2-2 and HO 4534/4-1. A Full Proofs from the Main Body A.1 Proof of Theorem 5.3: Indistinguishability of encodings using PIO Proof. We consider a chain of 10 games, with Game 0 the κ-Switch game, such that in the last game the challenge encoding is drawn independently of the bit b. Below we let W i denote the event that Game i outputs 1. Game 0 : The original Switch game. Game 1 : As Game 0 but now the public parameters pp are changed so that include a yes-instance y ∈ TD. We have that |Pr[W 0 (λ)] -Pr[W 1 (λ)]| ≤ Adv sm TD,B 1 (λ), where TD is a language where is hard to decide membership. Game 2 : The public parameters pp change so that include a hiding crs , and a (PIO) obfuscation of circuit C Add (see Fig. 4 (bottom)). Recall that this circuit uses the witness w y to y ∈ TD to produce the output proofs π . Therefore the simultaneous knowledge of decryption keys sk 1 ,sk 2 is not needed anymore. By Lemma 5.1 we have that Game 3 : As Game 2 , but, if b = 0 the challenge encoding is generated by mixing the representation vectors w.r.t public key pk 2 . Thus, on A's response (z, (x 0 , y 0 ), (x 1 , y 1 )), in this game we set c 0 ← Enc(x 0 , pk 1 ; r 1 ), and c 1 ← Enc(y 1 , pk 2 ; r 2 ). Proof Claim A.1. Consider the following PPT distinguisher B 4 against the IND-CPA security of the encryption scheme Π, with respect to key pair (pk 2 , sk 2 ). The distinguisher runs experiment Game 2 using A as a subroutine with the following differences: when it receives A's vectors (x j , y j ) (in Z p for j = 0, 1) it submits (y 0 , y 1 ) to the IND-CPA challenger. It gets back c * = Enc(y r * , pk 2 ). Next, B 4 generates c 1 ← Enc(x 0 , pk 1 ), and sets c 2 = c * ; the proof π on instance x = ([z] i , c 1 , c 2 ) is generated using the simulation trapdoor of the proof system. Namely, π ←$ Sim(crs, x, td zk ). Finally, B 4 outputs what A outputs. Algorithm B 4 perfectly simulates the challenger in experiment Game 2 if r * = 0 and in experiment Game 3 if r * = 1. This follows from (1) (x, π) is a valid encoding, indeed ciphertext c * contains an encryption of y r * , such that [z] i = [ y r * , ω i ] i ; and (2) real and simulated proofs are identically distributed under (the hiding) crs included in pp. A.2 Proof of Theorem 6.2: Hardness of symmetric MDDH Proof. We show via a chain of games, starting with the symmetric κ-MDDH problem, such that the last game chooses the challenge at random and independently of the guess bit b. Below we let W i denote the event that Game i outputs 1. Game 0 : The κ-MDDH problem as shown in Figure 8. Here there is only one source group. Game s for 1 ≤ s ≤ κ : As Game s-1 , the difference is that the representation vectors (x s , y s ) of the s-th challenge encoding [a s ] are given by x s,0 = y s,0 = a s -ω and x s,1 = y s,1 = 1 . Thus, in game s ≥ s the second coordinate of the s-th encoding vectors are always fixed. Using a similar argument as in Claim A.2 (see the proof of Theorem 6.3) we have that Game κ+1 : The i-th source exponent is changed to a i = a i + ω for randomly chosen a i ∈ Z N and i ≤ κ. This means that the target exponent for b = 1 is The distribution from which the first κ exponents a i are drawn has not changed, and indeed is the uniform distribution. Therefore Game κ+2 : The differences with the previous game are two-fold. First, for case b = 1 the challenge group element [d] T is generated as in Lemma 6.1. 
Thus, we first write Equation ( 1) as where P is a degree κ polynomial whose coefficients p = (p 0 , . . . , p κ ) are computed using the iterative rule of Lemma 6.1, with (x i,0 , x i,1 , 0) = (a i , 1, 0). Then [d] T is obtained by evaluating P at point ω in the exponent using group elements The other difference is that we obfuscate a different circuit C * Map which has the powers [ω i ] T hard-coded, for 1 ≤ i ≤ κ . This new circuit extracts the encoding vectors x i from the inputs, as usual, then it computes the coefficients of P as explained above, and evaluates it at ω in the exponent. latter follows again from the fact that, if τ is given in the clear, knowing ω i , σ in the exponent suffices to generate [d] T , since P and Q τ are evaluated in the exponent. (Recall that G T = G 0 .) This shows: . To conclude, to see that Pr[W κ+6 ] ≤ 1/2 + negl(λ) it suffices to show that if the challenge bit is b = 1, the exponent target challenge d is randomly distributed. This follows because the last coefficient of Q τ , namely p κ-1 given by has no inverse in Z N with probability negl(λ) := (κ -1)/N(λ) provided N is prime. Now, for any fixed τ and (ω i ) 1≤i≤κ , the map Q τ + a κ+1 P(ω, . . . , ω κ-1 , τ) defines a bijection over Z N which acts on uniform σ. It follows that [d] T is distributed uniformly in G T . A.3 Proof of Theorem 6.3: Hardness of asymmetric MDDH Proof. Let κ ≥ 3 and I : [κ + 1] -→ [κ] be any function of range with size at least 3. Slightly abusing notation, we set I = (i, We show a chain of games, starting with the asymmetric (κ, I)-MDDH problem, such that the last game chooses the challenge encoding at random and independently of the challenge bit b. Below we let W i denote the event that Game i outputs 1. For simplicity, and without loss of generality, we assume that Game 0 : The asymmetric (κ, I)-MDDH problem as shown in Figure 9. Game s for s = 1, 2, 3 : Similar to Game s-1 with the difference that the representation vectors (x s , y s ) of the source encoding [a s ] s are given by x s,0 = y s,0 = a s -ω s and x s,1 = y s,1 = 1 . Thus, in game s ≥ s the second coordinates of the s-th encoding vectors are always fixed. Game 4 : We change the first three source exponents to a i = a i + ω i for randomly chosen a i ∈ Z N . This means that the target exponent for b = 1 is Here we use the fact that |Rng(I)| ≥ 3. The first three elements a i are drawn from the uniform distribution, and their respective representation vectors are (a i , 1) so Game 5 : The implementation of C Map is changed. Now it has hard-coded The polynomial P(w 1 , . . . , w κ ) = κ i=1 (x i,0 +x i,1 w i ) on point (ω I(1) , . . . , ω I(κ) ) can be evaluated in the exponent knowing [ω 1 ] 0 , [ω 2 ] 0 , [ω 1 ω 2 ] 0 , and explicit ω I(i) for i ≥ 3 with I(i) = 1, 2. Since the output of the original C Map is exactly [P(ω (I(1) , . . . , ω I(κ) )] T we conclude that . Game 6 : The challenge target d is set to d = (a 1 a 2 + ω 1 a 2 + ω 2 a 1 + τ)(a 3 + ω 3 )a I(4) . . . a I(κ+1) , where τ is a fresh random value in Z N , and C Map has hard-coded We note that the circuit is different to the previous one. More concretely, if x i = (x i,0 , x i,1 ) is the first representation vector of the ith input [z] i , the new C * Map outputs the evaluation of the following function at point (τ, ω I(1) , . . . , ω I(κ) ): Here it is enough to know [ω 1 ] 0 , [ω 2 ] 0 , and [τ] 0 to compute [f(τ, ω I(1) , . . . , ω I(κ) )] T . Also, note that if τ = ω 1 ω 2 then this is precisely the output of C Map in the previous game. 
Thus, a DDH challenge ([ω 1 ] 0 , [ω 2 ] 0 , [τ] 0 ) can be used to generate the pair ([d] T , C * Map ) 10 as in Game 5 if τ = ω 1 ω 2 , or as in Game 6 if τ is random. This shows: , where σ = τω 3 . Here it suffices to know [τ] 0 , [σ] 0 , and [ω 3 ] 0 to evaluate [f(τ, ω I(1) , . . . , ω I(κ) )] T , so we have Game 8 : Random σ ∈ Z N is sampled and the challenge target exponent d is set to The circuit C * Map has hard-coded the same values as in the previous game. (The functionality of the circuit changes since now σ is arbitrary, but the algorithm remains the same.) Once again, a DDH challenge To conclude, we have Pr[W 8 (λ)] ≤ 1/2 + negl(λ). To see this, we argue that d is randomly distributed in Z N for challenge bit b = 1 with overwhelming probability in λ as follows: if N is prime, then κ+1 j=1 a I(j) has an inverse in Z N , and therefore d in Equation 4 seen as a function of σ and parametrized by τ and a I(j) defines a bijection in Z N with overwhelming probability. Thus, if σ is uniform so is d. A.4 Proof of Now, we give a sequence of games starting with the RANK game, and finishing with a game sampling matrices of rank r + 1 with overwhelming probability in λ, independently of the guess bit b. Below we let W i the event that Game i outputs 1. Game 0 : The original (κ, m, n, r, r+1)-RANK problem for r ≥ κ, the game is as shown in Figure 10. Game s for 1 ≤ s ≤ mn : To ease notation let us index these games with (i, j), where 1 ≤ i ≤ m, and 1 ≤ j ≤ n. The (i, j)th game, if b = 0, encodes a random matrix [M] of rank r with a variant of Sam algorithm that uses the following representation vectors Here ω is the random Z N -value defining the public matrix [W] 0 of pp. What changes in these games is that the representation vectors are sampled with the last two coordinates set to 0 or 1. Using a similar argument as in Claim A.2 (see the proof of Theorem 6.2) we have that We have that To see this is enough to show that M is randomly distributed over the set of m × n matrices of rank r. To this end we first argue that M has rank r. This follows from two observations: (1) matrix S has rank 1 applying Equation 5 on matrices S and W = ((1, ω), (ω, ω 2 )) t ; observe that det(W) = 0, and (2) by construction, M has rank r if S has rank 1. Next we use the random self-reducibility of the rank problem. Concretely, [M] is distributed as in Game mn because the left (resp., right) action of invertible matrices GL m (Z N ) (resp., GL n (Z N )) is transitive in the set of Z N -matrices of dimension m × n and rank r. where τ 2κ-s+j are fresh random values in Z N . Observe that now C * Map does not implement a κ-linear map. An attacker against (2κ -s)-SDDH can embed a challenge tuple [q] 0 = ([1] 0 , [ω] 0 , . . . , [ω 2κ-s ] 0 , [τ 2κ-s+1 ] 0 ) in the first 2κ -s + 1 positions of [h s ] 0 . Then, if τ 2κ-s+1 = ω 2κ-s+1 this simulates Game mn+s+1 , otherwise it simulates Game mn+s+2 . This shows To conclude, to see that Pr[W mn+2κ+1 (λ)] ≤ 1/2+negl(λ) we apply Equation 5 again to matrices S and W = ((1, ω), (ω, τ 2 )) to argue that minor S has rank 2 with overwhelming probability over the choice of τ 2 (concretely τ = ω 2 with probability negl(λ) := 1/N(λ)). Therefore M has rank r + 1 for guess bit b = 0 also with overwhelming probability in λ.
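As an informal numerical sanity check of the lifting-and-randomization step used in the games above (our illustration, not part of the paper's argument: the prime p = 101 stands in for the scheme modulus N, the dimensions m, n, r and all helper names are arbitrary choices), the following Python sketch confirms that embedding the 2 x 2 block built from (1, ω, ω, τ) together with I_{r-1}, and then multiplying by random invertible matrices, gives rank r when τ = ω^2 and rank r + 1 with high probability otherwise.

```python
import random

p = 101                                           # toy prime standing in for the modulus N

def rank_mod_p(M):
    """Rank of an integer matrix over Z/pZ, by Gaussian elimination."""
    A = [[x % p for x in row] for row in M]
    m, n = len(A), len(A[0])
    rank = 0
    for col in range(n):
        piv = next((i for i in range(rank, m) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][col], -1, p)            # modular inverse (Python >= 3.8)
        A[rank] = [(x * inv) % p for x in A[rank]]
        for i in range(m):
            if i != rank and A[i][col] != 0:
                c = A[i][col]
                A[i] = [(a - c * b) % p for a, b in zip(A[i], A[rank])]
        rank += 1
    return rank

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)] for row in A]

def random_invertible(k):
    while True:
        U = [[random.randrange(p) for _ in range(k)] for _ in range(k)]
        if rank_mod_p(U) == k:
            return U

m, n, r = 5, 6, 3                                 # toy sizes; we need m, n >= r + 1
omega = random.randrange(1, p)
for tau, label in [(omega * omega % p, "tau = omega^2"), (random.randrange(p), "tau random   ")]:
    M = [[0] * n for _ in range(m)]
    M[0][0], M[0][1], M[1][0], M[1][1] = 1, omega, omega, tau   # the 2x2 block from the SDDH tuple
    for i in range(r - 1):                        # embed I_{r-1} on the next diagonal entries
        M[2 + i][2 + i] = 1
    Mrand = matmul(matmul(random_invertible(m), M), random_invertible(n))  # random self-reduction
    print(label, "-> rank", rank_mod_p(Mrand))    # expected: r = 3, resp. r + 1 = 4 (w.h.p.)
```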
104,343
[ "1002045" ]
[ "300800", "59704", "301734", "300800", "300800" ]
01081416
en
[ "math" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01081416v4/file/MultipleRoots-Finale.pdf
Francesco Amoroso email: [email protected] AND Martín Sombra email: [email protected] Umberto Zannier email: [email protected] UNLIKELY INTERSECTIONS AND MULTIPLE ROOTS OF SPARSE POLYNOMIALS Keywords: . 2010 Mathematics Subject Classification. Primary 11C08; Secondary 11G50 Sparse polynomial, multiple roots, unlikely intersections We present a structure theorem for the multiple non-cyclotomic irreducible factors appearing in the family of all univariate polynomials with a given set of coefficients and varying exponents. Roughly speaking, this result shows that the multiple non-cyclotomic irreducible factors of a sparse polynomial, are also sparse. To prove this, we give a variant of a theorem of Bombieri and Zannier on the intersection of a fixed subvariety of codimension 2 of the multiplicative group with all the torsion curves, with bounds having an explicit dependence on the height of the subvariety. We also use this latter result to give some evidence on a conjecture of Bolognesi and Pirola. Introduction This text is motivated by the following question: let f ∈ Q[t ±1 ] be a sparse Laurent polynomial, that is, a polynomial of high degree but relatively few nonzero terms. When does f have a multiple root in Q × ? In more precise terms, we consider sparse Laurent polynomials given by the restriction of a fixed regular function on G N m , namely a multivariate Laurent polynomial, to a varying 1-parameter subgroup. Let N ≥ 1 and γ = (γ 0 , γ 1 , . . . , γ N ) ∈ Q N +1 . For a = (a 1 , . . . , a N ) ∈ Z N set f a = γ 0 + γ 1 t a 1 + • • • + γ N t a N ∈ Q[t ±1 ]. This Laurent polynomial the restriction of the affine multivariate polynomial L = γ 0 + γ 1 x 1 + • • • + γ N x N ∈ Q[x 1 , . . . , x N ] to the subgroup of the multiplicative group G N m = (Q × ) N parameterized by the monomial map t → (t a 1 , . . . , t a N ). The occurrence of many Laurent polynomials of the form f a with a multiple root certainly happens in the following situation. Let 1 ≤ k ≤ N -1 be an integer, b 1 , . . . , b N ∈ Z N -k and y = (y 1 , . . . , y N -k ) be a group of N -k variables. Consider the Laurent polynomial (1.1) F = γ 0 + γ 1 y b 1 + • • • + γ N y b N ∈ Q[y ±1 1 , . . . , y ±1 N -k ] with y b i = y b i,1 1 • • • y b i,N -k N -k . Suppose that F has a multiple nontrivial factor P . Let θ ∈ Z N -k such that P (t θ 1 , . . . , t θ N ) is not a monomial. Then, for a i = b i , θ , we have f a = F (t θ 1 , . . . , t θ N ) and every root of P (t θ 1 , . . . , t θ N ) is a multiple root of f a . Indeed, our main result (Theorem 1.1) shows that there is a finite family of multivariate Laurent polynomials as in (1.1) such that all multiple non-cyclotomic roots occurring in the family of polynomials f a , a ∈ Z N , come by restricting the multiple factors in this finite family to a 1-parameter subgroup, as explained above. In particular, the multiple non-cyclotomic irreducible factors of the f a 's are also sparse, in the sense that they are the restriction of a fixed Laurent polynomial to a varying 1-parameter subgroup of G N m . The following is a precise statement of this result. For a ∈ Z N , we denote by |a| the maximum of the absolute values of the coordinates of this vector. We also denote by µ ∞ the subgroup of Q × of roots of unity. Theorem 1.1. Let N ≥ 1 and γ = (γ 0 , γ 1 , . . . , γ N ) ∈ Q N +1 . There exists an effectively computable constant C depending only on N and γ such that the following holds. Let a = (a 1 , . . . 
, a N ) ∈ Z N such that the Laurent polynomial f a = γ 0 + γ 1 t a 1 + • • • + γ N t a N ∈ Q[t ±1 ] is nonzero and has a multiple root ξ ∈ Q × \ µ ∞ . Then there exist 1 ≤ k ≤ N -1 and b 1 , . . . , b N , θ ∈ Z N -k such that (1) |b i | ≤ C, i = 1, . . . , N , and |θ| ≤ C|a|; (2) the matrix B = (b i,j ) i,j ∈ Z N ×(N -k) is primitive, in the sense that it can be completed to a matrix in SL N (Z), and a = B • θ; (3) the Laurent polynomial F = γ 0 + γ 1 y b 1 + • • • + γ N y b N ∈ Q[y ±1 1 , . . . , y ±1 N -k ] has a multiple factor P such that ξ is a root of P (t θ 1 , . . . , t θ N ). The situation is different for multiple cyclotomic roots. The following example shows that the hypothesis that the root ξ is not cyclotomic is necessary for the conclusion of this result to hold. Example 1.2. Let a = (a 1 , a 2 , a 3 ) ∈ Z 3 coprime with 0 < a 1 < a 2 , a 3 = a 1 + a 2 and a 3 0. Consider the polynomial f a = 1 -t a 1 -t a 2 + t a 1 +a 2 ∈ Q[t ±1 ], which has ξ = 1 as a double root. In the notation in Theorem 1.1, we have N = 3 and k = 1, 2. The case k = 2 is easily discarded since then, by the conditions in (2), the polynomial F coincides with f , and so its degree cannot be bounded above independently of a. Hence we only have to consider the case k = 1. Let b 1 , b 2 , b 3 , θ ∈ Z 2 with b i bounded above and such that (1.2) b 1 , θ = a 1 , b 2 , θ = a 2 , b 3 , θ = a 1 + a 2 . Write F = 1 -y b 1 -y b 2 + y b 3 ∈ Q[y ±1 1 , y ±1 2 ] . By (1.2), we have that θ 1 and θ 2 are coprime and b 3 -b 1 -b 2 = λ(θ 2 , -θ 1 ) with λ ∈ Z. Since b i 's are bounded above, b 3 = b 1 + b 2 and so F = 1 -y b 1 -y b 2 + y b 1 +b 2 = (1 -y b 1 )(1 -y b 2 ). By the conditions in (2), b 1 and b 2 are linearly independent, and so F has no multiple factor. Hence, the presence of the double root ξ = 1 cannot be explained in this example as coming from a multiple factor of a multivariate Laurent polynomial of low degree restricted to a 1-parameter subgroup. Theorem 1.1 restricts the possible exponents a ∈ Z N whose associate polynomial has a multiple non-cyclotomic root, to a finite union of proper linear subspaces of Z N . Corollary 1.3. Let N ≥ 1 and γ = (γ 0 , γ 1 , . . . , γ N ) ∈ Q N +1 . Then the set of vectors a = (a 1 , . . . , a N ) ∈ Z N such that the Laurent polynomial γ 0 + γ 1 t a 1 + • • • + γ N t a N ∈ Q[t ±1 ] is nonzero and has a multiple non-cyclotomic root, is contained in a finite union of proper linear subspaces of Z N . To prove Theorem 1.1, we give a version of a theorem of Bombieri and Zannier on the intersection of a subvariety of codimension 2 of the multiplicative group with all the torsion curves, with bounds having an explicit dependence on the height of the subvariety (Theorem 2.3). This allows us to prove a general result concerning the greatest common divisor of two sparse polynomials with coefficients of low height (Theorem 2.6). These two theorems are presented in § 2 and proved in § 3 and § 4, respectively. Theorem 1.1 is an easy consequence of the latter result, as shown in § 5. Theorem 2.6 is also used in § 6 to prove Theorem 6.1, giving some evidence on a conjecture of Bolognesi and Pirola [START_REF] Bolognesi | Osculating spaces and Diophantine equations (with an appendix by Pietro Corvaja and Umberto Zannier)[END_REF]. Acknowledgments. Part of this work was done while the authors met at the Scuola Normale Superiore (Pisa), the Universitat de Barcelona, and the Université de Caen. We thank these institutions for their hospitality. 
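Before turning to the technical tools, a toy instance may help fix the shape of Theorem 1.1 (the numbers below are our own illustration, not taken from the theorem): for γ = (4, -4, 1) and a = (d, 2d) one has f_a = 4 - 4t^d + t^{2d} = (t^d - 2)^2, whose multiple root 2^{1/d} is not a root of unity, and this is explained as in the theorem with N = 2, k = 1, b_1 = 1, b_2 = 2, θ = d and the multiple factor P = y - 2 of F = 4 - 4y + y^2. A quick check with sympy, for d = 5:

```python
import sympy as sp

t, y = sp.symbols('t y')
d = 5                                   # theta = d, with b_1 = 1 and b_2 = 2, so a = (d, 2d)
f = 4 - 4*t**d + t**(2*d)               # f_a for gamma = (4, -4, 1)
F = 4 - 4*y + y**2                      # F = gamma_0 + gamma_1*y + gamma_2*y**2

print(sp.factor(F))                     # (y - 2)**2  -> multiple factor P = y - 2
print(sp.factor(f))                     # (t**5 - 2)**2
print(sp.gcd(f, sp.diff(f, t)))         # t**5 - 2 = P(t**d), carrying the multiple non-cyclotomic roots
```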
Intersections of subvarieties with torsion curves and gcd of sparse polynomials of low height We first recall some definitions and basic facts. Boldface letters denote finite sets or sequences of objects, whose the type and number should be clear from the context: for instance, x might denote the group of variables (x 1 , . . . , x n ), so that Q [x ±1 ] denotes the ring of Laurent polynomials Q[x ±1 1 , . . . , x ±1 n ]. Given a vector a = (a 1 , . . . , a N ) ∈ Z N we set |a| = max j |a j |. Given a group homomorphism ϕ : G n m → G N m , there exist unique vectors b 1 , . . . , b N ∈ Z n such that ϕ(x) = (x b 1 , . . . , x b N ) for all x ∈ G n m . We define the size of ϕ as size(ϕ) = max j |b j | We also denote by ϕ # : Q[y ±1 1 , . . . , y ±1 N ] -→ Q[x ±1 1 , . . . , x ±1 n ], y i -→ x b i the associated morphism of algebras. If ψ : G N m → G M m is a further homomorphism, then (ψ • ϕ) # = ϕ # • ψ # . Let D ≥ 1 and f 1 , f 2 ∈ Z[t] polynomials of degree ≤ D with fixed coefficients and fixed number of nonzero terms. Filaseta, Granville and Schinzel have shown that, if either f 1 or f 2 do not vanish at any root of unity, then the greatest common divisor gcd(f 1 , f 2 ) can be computed in time polynomial in log(D) [START_REF] Filaseta | Irreducibility and greatest common divisor algorithms for sparse polynomials, Number theory and polynomials[END_REF]. More recently, Amoroso, Leroux and Sombra gave an improved version of this result [START_REF] Amoroso | Overdetermined systems of sparse polynomial equations[END_REF]. The following is its precise statement. Theorem 2.1 ([ALS15], Theorem 4.3). There is an algorithm that, given a number field K and polynomials f 1 , f 2 ∈ K[t], computes a polynomial p ∈ K[t] dividing gcd(f 1 , f 2 ) and such that gcd(f 1 , f 2 )/p is a product of cyclotomic polynomials. If both f 1 and f 2 have degree bounded by D, height bounded by h 0 and number of nonzero coefficients bounded by N , this computation is done with O K,N,h 0 (log(D)) bit operations. In more detail, write f i = γ i,0 + γ i,1 t a 1 + • • • + γ i,N t a N ∈ K[t], i = 1, 2, with a j ∈ Z and γ i,j ∈ K. Denote by ϕ : G m → G N m the homomorphism given by ϕ(t) = (t a 1 , . . . , t a N ) and set L i = γ i,0 + γ i,1 x 1 + • • • + γ i,N x N , i = 1, 2, so that f i = ϕ # (F i ). Then, the algorithm underlying Theorem 2.1 computes an integer 0 ≤ k ≤ N -1 and two homomorphisms ψ : G N -k m → G N m and ϕ 1 : G m → G N -k m with ψ injective, such that ψ • ϕ 1 = ϕ and p = ϕ # 1 (gcd(ψ # (L 1 ), ψ # (L 2 )) ). Moreover, the size of ψ and ϕ 1 is respectively bounded by B and BD, where B is a constant depending only on K, N and h 0 . This algorithm relies heavily on a former conjecture of Schinzel on the intersection of a subvariety of the multiplicative group with 1-parameter subgroups. This conjecture was proved by Bombieri and Zannier in [START_REF] Schinzel | Polynomials with special regard to reducibility[END_REF]Appendix]. For the reader's convenience, we recall an improved version of this result. Theorem 2.2 ([BMZ07], Theorem 4.1). Let N ≥ 1 and P, Q ∈ Q[x 1 , . . . , x N ] coprime polynomials. Then there exists an effectively computable constant B depending only on P and Q with the following property. Let a j ∈ Z, j = 1, . . . , N , ζ j ∈ µ ∞ and ξ ∈ C × with P (ζ 1 ξ a 1 , ..., ζ N ξ a N ) = Q(ζ 1 ξ a 1 , ..., ζ N ξ a N ) = 0. Then there exist b j ∈ Z, j = 1, . . . , N , with 0 < max j |b j | ≤ B and N j=1 (ζ j ξ a j ) b j = 1. In particular, if ξ / ∈ µ ∞ , then N j=1 a j b j = 0. 
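The pullback notation ϕ# introduced above is used throughout the rest of the section; the following small sympy computation (ours, with an arbitrarily chosen pair of monomial homomorphisms and an arbitrary test polynomial G) merely confirms the contravariance rule (ψ ∘ ϕ)# = ϕ# ∘ ψ# on a concrete example.

```python
import sympy as sp

t, x1, x2, y1, y2, y3 = sp.symbols('t x1 x2 y1 y2 y3')

phi  = {x1: t**2, x2: t**3}                 # phi(t) = (t^2, t^3), so size(phi) = 3
psi  = {y1: x1, y2: x1*x2, y3: x2**2}       # psi(x1, x2) = (x1, x1*x2, x2^2)
comp = {y1: t**2, y2: t**5, y3: t**6}       # (psi o phi)(t) = (t^2, t^5, t^6)

G = 1 + 2*y1 - y2*y3                        # an arbitrary polynomial in the y variables
lhs = G.subs(comp)                          # (psi o phi)#(G)
rhs = G.subs(psi).subs(phi)                 # phi#(psi#(G))
print(sp.simplify(lhs - rhs))               # 0, i.e. (psi o phi)# = phi# o psi#
```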
We are interested in extension of Theorem 2.1 to polynomials f 1 , f 2 having low, but unbounded, height. To this end, we need first a version of Theorem 2.2 with explicit dependence on the height of the input polynomials P and Q. As already remarked by Schinzel, the constant B in this theorem cannot depend only on N , on the field of definition and on the degrees of P and Q. For instance, for the data N = 2, P (x, y) = x -2, Q(x, y) = y -2 a and (ζ 1 ξ a 1 , ζ 2 ξ a 2 ) = (2, 2 a ), one has B(P, Q) ≥ a. The following result gives, under some restrictive hypothesis, the dependence of the constant B on the height of the input polynomials. Recall that a coset of G N m is a translate of a subtorus, and that a torsion coset is a translate of a subtorus by a torsion point. A torsion curve (respectively, a torsion hypersurface) is a torsion coset of dimension 1 (respectively, of codimension 1). Following [START_REF] Bombieri | Algebraic points on subvarieties of G n m[END_REF]), given a subvariety X of G N m , we denote by X o the complement in X of the union of all cosets of positive dimension contained in X . We consider the standard compactification of the multiplicative group given by the inclusion ι : G N m -→ P N , (x 1 , . . . , x N ) -→ (1 : x 1 : • • • : x N ). We define the degree of an irreducible subvariety X of G N m , denoted by deg(X ), as the degree of the Zariski closure ι(X ) ⊂ P N , and the height of a point ξ ∈ G N m , denoted by h(ξ), as the Weil height of the projective point ι(ξ) ∈ P N . Theorem 2.3. Let X ⊂ G N m be a subvariety defined over a number field of degree δ by polynomials of degree bounded by d 0 and height bounded by h 0 . Let 0 < ε < 1. Then there exists an effectively computable constant B depending only on N , d 0 , δ and ε, with the following property. Let W be an irreducible component of X of codimension at least 2, T a torsion curve and x ∈ W o ∩ T a non-torsion point. Then either deg(T ) 1-ε N -1 ≤ B • (1 + h 0 ) or there exists a torsion hypersurface T with x ∈ T and deg(T ) ≤ B. Remark 2.4. We might restate Theorem 2.3 in a slightly different way in the case when the torsion curve T is a subtorus. Let ϕ : G m → G N m be an injective homomorphism and keep X , W and ε as in the statement of the theorem. Let ξ ∈ Q × \ µ ∞ such that ϕ(ξ) ∈ W o . In this situation, Theorem 2.3 can be reformulated to the statement that, if size(ϕ) 1-ε N -1 > B • (1 + h 0 ) , then x is contained in a subtorus T of codimension 1 and degree bounded by B. Indeed, Theorem 2.3 applied to the subtorus T = im(ϕ), shows that ϕ(ξ) ∈ T for a torsion hypersurface T of degree bounded by B. This torsion hypersurface is defined by the single equation x b = ω for some b ∈ Z N with |b| ≤ B and ω ∈ µ ∞ . Write ϕ(t) = (t a 1 , . . . , t a N ) with a i ∈ Z. Then ξ a 1 b 1 +•••+a N b N = ω. Since ξ is not torsion, j a j b j = 0 and ω = 1. Hence, T is a subtorus and im(ϕ) ⊆ T . The following variant of Schinzel's example shows that the hypothesis that x ∈ X o is necessary for the conclusion of Theorem 2.3 to hold. Example 2.5. Let 1 ≤ a ≤ b and consider the irreducible subvariety X = {(2, 2 a )} × G m ⊂ G 3 m . With notation as in Theorem 2.3, we have N = 3, d 0 = 1 and h 0 ≈ a. Since X is a coset of positive dimension, X o = ∅. Let T ⊂ G 3 m be the subtorus parameterized by t → (t, t a , t b ) and pick the point x = (2, 2 a , 2 b ) ∈ X ∩ T . 
It is easy to verify that, for any fixed 0 < ε < 1 and B > 0, if a and b/a are sufficiently large, then neither deg(T ) 1-ε 2 ≤ B • (1 + h 0 ) nor x ∈ T for any torsion hypersurface of degree bounded by B. Theorem 2.3 allows us to prove the desired extension of Theorem 2.1 to polynomials of low height. The following statement gives the quantitative aspects of this result. Theorem 2.6. Let K be a number field of degree δ. For a family of elements γ i,j ∈ K, i = 1, . . . , s, j = 1, . . . , N , and a sequence of N ≥ 1 coprime integers a 1 , . . . , a N , we consider the system of Laurent polynomials f i = γ i,0 + γ i,1 t a 1 + • • • + γ i,N t a N , i = 1, . . . , s. We assume f 1 , . . . , f s not all zeros. Set L i = γ i,0 + γ i,1 x 1 + • • • + γ i,N x N , i = 1, . . . , s, and let ϕ : G m → G N m be the homomorphism given by ϕ(t) = (t a 1 , . . . , t a N ). Put D = |a| and h 0 = max i,j h(γ i,j ). Then there exists an effectively computable constant B depending only on N and δ, with the following property. If (2.1) D 1 2(N -1) > B • (1 + h 0 ), then there exist 0 ≤ k ≤ N -1 and homomorphisms ψ : G N -k m → G N m and ϕ 1 : G m → G N -k m such that (1) ψ is injective and ψ • ϕ 1 = ϕ; (2) size(ψ) ≤ B and size(ϕ 1 ) ≤ B D; (3) Set G = gcd(ψ # (L 1 ), . . . ψ # (L s )) and g = ϕ # 1 (G). Then g | gcd(f 1 , . . . , f s ). Moreover, if ξ is a root of gcd(f 1 , . . . , f s )/g, then either ξ ∈ µ ∞ or there exists a nonempty proper subset Λ ⊂ {1, . . . , N } such that γ i,0 + j∈Λ γ i,j ξ a j = 0, i = 1, . . . , s. Similarly as for Theorem 2.1, the datum k, ψ and ϕ 1 can be effectively computed. In the present situation, this is done by the procedure described in § 4, and this computation costs O δ,N,s (log(D)) bit operations. Proof of Theorem 2.3 All irreducible components of X are defined over a number field of degree bounded by C by polynomials of degree bounded by C and height bounded by Ch 0 , for a constant C depending only on N , d 0 and δ. Using this, we reduce without loss of generality to the case when X is an irreducible subvariety of codimension at least 2. We follow closely the proof of [BMZ07, Theorem 4.1]. Since we assume that x ∈ X o ∩ T , the first reduction of the proof in loc. cit. is unnecessary in our present situation. Write T = {(ζ 1 t a 1 , . . . , ζ N t a N ) | t ∈ G m } ⊆ G N m with a 1 , . . . , a N ∈ Z coprime and ζ 1 , . . . , ζ N ∈ µ ∞ . Thus deg(T ) = |a|. As in loc. cit. we construct, using geometry of numbers, a 2-dimensional torsion coset T 2 containing T and such that (3.1) deg(T 2 ) ≤ B 1 |a| N -2 N -1 for a constant B 1 depending only on N . The proof goes on by distinguishing two cases. Suppose first that the point x is an isolated component of X ∩ T 2 . Since x ∈ X ∩ T , we can write x = (ζ 1 ξ a 1 , . . . , ζ N ξ a N ) with Q × \ µ ∞ . Let K be a field of definition of X and set E = K(ζ 1 , . . . , ζ N ), which is a field of definition for both X and T . Put D = [E(x) : E]. Using Bézout theorem and (3.1), we deduce that this degree satisfies the bound (3.2) D ≤ deg(X ∩ T 2 ) ≤ B 1 |a| N -2 N -1 deg(X ). Moreover, since a 1 , . . . , a N are coprime, [E(ξ ) : E] = D. Let 0 < ε < 1. We have that E(ξ) is an extension of degree ≤ [K : Q]D of the cyclotomic extension Q(ζ 1 , . . . , ζ N ) . By the relative Dobrowolski lower bound of [START_REF] Amoroso | A relative Dobrowolski lower bound over abelian extensions[END_REF], the height of ξ is bounded from below by (3.3) h(ξ) ≥ B 2 D -1-ε , where B 2 is an effective constant that depends only on ε and [K : Q]. 
By [Sch00, Appendix, Theorem 1], since the point x lies in X o ∩ T , its height is bounded above by a constant depending only on X . Indeed, a close inspection of the proof of this result shows that (3.4) h(x) ≤ B 3 • (1 + h 0 ). for an effectively computable B 3 that depends only on δ and N . Alternatively, this can be obtained by applying Habegger's effective version of the bounded height theorem [Hab12, Theorem 11] with the choice of parameters r = 2 and s = n -1 with respect to the notation therein, together with the arithmetic Bézout theorem in [KPS01, Corollary 2.11]. Thus (3.5) |a| h(ξ) ≤ N i=1 h(ζ i ξ a i ) ≤ N h(x). Combining (3.2), (3.3), (3.4) and (3.5), we get deg(T ) = |a| ≤ B -1 2 B 1 |a| N -2 N -1 deg(X ) 1+ε N B 3 • (1 + h 0 ). From here, we deduce that deg(T ) 1-ε N -1 ≤ B • (1 + h 0 ). with ε = (N -2)ε and where B is any constant ≥ B 4 = B -1 2 (B 1 deg(X )) 1+ε N B 3 , which shows the result in this case. Now suppose that x lies in an irreducible component of positive dimension of X ∩T 2 . Denote by Y this irreducible component, which is thus a X -anomalous subvariety. Let Y max be a a maximal X -anomalous subvariety containing Y. From the Bombieri-Masser-Zannier uniform structure theorem [BMZ07, Theorem 1.4], this subvariety Y max is contained in a coset gH whose degree is bounded in terms of X . Indeed, by the inequality (3.4) in [START_REF] Bombieri | Anomalous subvarieties -structure theorems and applications[END_REF], this degree is bounded by a constant B 5 depending only on δ and deg(X ). As explained in loc. cit., this constant is also effectively computable. The intersection T 2 ∩gH is a union of cosets associated to the same subtorus. Denote by K the unique coset in this intersection that contains Y. Its dimension is either 1 or 2. The case dim(K) = 1 is not possible since, otherwise, Y = K is a coset, which is forbidden by the hypothesis that x ∈ X o . Hence dim(K) = 2, which means that some irreducible component of T 2 lies in gH. Take a torsion point g 0 lying in this irreducible component. Then g 0 ∈ gH and gH = g 0 H is a torsion coset of degree bounded by B 5 . We can find a further constant B 6 depending only on δ and deg(X ) such that there exists a torsion hypersurface T with g 0 H ⊆ T and deg(T ) ≤ B 6 . We then choose B = max(B 4 , B 6 ), concluding the proof. Proof of Theorem 2.6 We follow the proof of [ALS15, Theorem 4.3], replacing the use of Theorem 2.2 by Theorem 2.3. We first need to prove some auxiliary lemmas. Lemma 4.1. Let ϕ : G m → G N m be a homomorphism of size D and T ⊆ G N m a subtorus of codimension 1. We can test if im(ϕ) ⊆ T and, if this is the case, we can compute two homomorphisms ψ : G N -1 m → G N m and ϕ : G m → G N -1 m such that (1) ψ is injective and ψ • ϕ = ϕ; (2) size( ψ) = O(1) and size( ϕ) = O(D). This computation can be done with O(log(D)) bit operations. All the implicit constants depend only on N and deg(T ). Proof. Let x b = 1 be an equation for T and write ϕ(x) = (x a 1 , . . . , x a N ) with a 1 , . . . , a N ∈ Z coprime. Then im(ϕ) ⊆ T if and only if j a j b j = 0. Let us assume that this is the case. We choose an automorphism τ of G N m such that τ (T ) is defined by the equation x N = 1. Let ι : G N -1 m → G N m be the standard inclusion identi- fying G N -1 m with the hyperplane of equation x N = 1, and consider the projection onto the first N -1 coordinates π : G N m → G N -1 m , π(x 1 , . . . , x N ) = (x 1 , . . . , x N -1 , 1). We then set ψ = τ -1 • ι and ϕ = π • τ • ϕ. 
We leave to the reader the verification on the correctness and the complexity of this algorithm, see [ALS15, Lemma 4.1] for further details. We now describe the algorithm underlying Theorem 2.6. G m → G N -k m . 1: Set k ← 0, ψ ← Id G N m and ϕ 1 ← ϕ; 2: while k < N do 3: let B the constant in Theorem 2.3 for the subvariety ψ -1 (X ) ⊂ G N -k m and the choice ε = 1 2 ; 4: set Φ ← {{x b = 1} | b ∈ Z N primitive such that |b| ≤ B}; 5: while Φ = ∅ do 6: choose T ∈ Φ; 7: if im(ϕ 1 ) ⊆ T then 8: compute as in Lemma 4.1 homomorphisms ψ : end while 14: end while Lemma 4.2. Let X ⊂ G N m be a subvariety defined over a number field of degree δ by polynomials of degree bounded by d 0 . Let also ϕ : G m → G N m be a homomorphism of size D. Algorithm 4.1 computes an integer k with 0 ≤ k < N -1 and two homomorphisms ψ : G N -k-1 m → G N -k m and ϕ : G m → G N -k-1 m such that ϕ 1 = ψ • ϕ; 9: set ψ ← ψ • ψ, ϕ 1 ← ϕ, k ← k + 1, Φ ← ∅; G N -k m → G N m and ϕ 1 : G m → G N -k m such that (1) ψ is injective and ψ • ϕ 1 = ϕ; (2) size(ψ) = O(1) and size(ϕ 1 ) = O(D). This computation is done with O(log D) bit operations. All the implicit constants in the O-notation depend only on N , d 0 and δ. Proof. We show by induction on k that the homomorphisms ψ and ϕ 1 constructed by the algorithm at the level k satisfy both (1) and (2). This is certainly true at the level k = 0. Indeed at this level ψ = Id G N m and ϕ 1 = ϕ. Let k be an integer with 1 ≤ k < N and assume that at the level k -1 the homomorphisms ψ and ϕ 1 satisfy (1) and (2). By Lemma 4.1, the homomorphisms ψ and ϕ at line 8 satisfy ψ • ϕ = ϕ 1 . Hence the updated values of ψ and ϕ 1 , that is ψ • ψ and ϕ, satisfy (ψ • ψ) • ϕ = ψ • ϕ 1 = ϕ. Moreover, since ψ and ψ are injective, by induction and by Lemma 4.1(1), ψ • ψ is also injective. Let B be as in line 3 of the algorithm 4.1, that is, the constant in Theorem 2.3 for the subvariety ψ -1 (X ) and the choice ε = 1 2 . Since size(ψ) = O(1) and X is linear, ψ -1 (X ) is defined over a number field of degree O(1) by polynomials of degree O(1) and height O(h 0 ), with implicit constants depending only on N and δ. In particular, B = O(1). The same is therefore true for the degree of the subtorus T at line 6. By Lemma 4.1(2), the homomorphisms ψ and ϕ at line 8 have size O(1) and O(D) respectively. Thus ψ • ψ and ϕ have also size O(1) and O(D), respectively. We left to the reader the verification on the complexity of the algorithm. We are now able to conclude the proof of Theorem 2.6. Let K and f 1 , . . . , f s be as in that theorem. Thus K is a number field of degree δ and f i = γ i,0 + γ i,1 t a 1 + • • • + γ i,N t a N , i = 1, . . . , s, are Laurent polynomials, not all zeros, with a 1 , . . . , a N coprime. Set D = |a| and assume max i,j h(γ i,j ) ≤ h 0 . We consider the homomorphism ϕ : G m → G N m given by ϕ(t) = (t a 1 , . . . , t a N ). Since a 1 , . . . , a N are coprime, deg(im(ϕ)) = D. We let L i = γ i,0 + γ i,1 x 1 + • • • + γ i,N x n , i = 1, . . . , s. Thus f i = ϕ # (L i ). We apply Algorithm 4.1 to the linear subvariety X defined in G N m by the system of equations L 1 = . . . = L s = 0. From now on, we denote by k ∈ {0, . . . , N -1}, ψ : G N -k m → G N m and ϕ 1 : G m → G N -k m the output of Algorithm 4.1 applied to this subvariety. Put for short F i = ψ # (L i ). By Lemma 4.2, ϕ # 1 (F i ) = f i . Since f 1 , . . . , f s are not all zeros, the same holds for F 1 , . . . , F s . Now set G = gcd(F 1 , . . . , F s ) and g = ϕ # 1 (G). Then g| gcd(f 1 , . . . , f s ), as in Theorem 2.6(3). 
Let B be a constant depending only on N and δ such that (4.1) D 1 2(N -1) > B • (1 + h 0 ), as in the statement of Theorem 2.6, to be fixed later on. Let Ω be the set of points ξ ∈ C × which are either a root of unity or a common root of the system of polynomials γ i,0 + j∈Λ γ i,j t a j , i = 1, . . . , s, for a nonempty proper subset Λ ⊂ {1, . . . , N }. Let ξ ∈ Ω be a common zero of f 1 , . . . , f s and W a component of ψ -1 (X ) such that ϕ 1 (ξ) ∈ W. We first remark that ϕ 1 (ξ) ∈ W o . If it is not, the point y = ϕ 1 (ξ) is in a coset gH ⊆ W ⊆ ψ -1 (X ) of positive dimension. By Lemma 4.2(2), the point x = ϕ(ξ) = ψ(y) is contained in the coset ψ(gH) ⊆ X , which is also of positive dimension since ψ is injective. The cosets included in a linear variety X have been explicitly classified in [START_REF] Schmidt | Heights of points on subvarieties of G n m , Number theory[END_REF]page 161]. By this result, there exists a nonempty proper subset Λ ⊂ {1, . . . , N } such that γ i,0 + j∈Λ γ i,j x j = 0, i = 1, . . . , s. Hence ξ is a common root of γ i,0 + j∈Λ γ i,j t a j , i = 1, . . . , s, but this is not possible because ξ / ∈ Ω. Thus ξ is not a root of unity and ϕ 1 (ξ) ∈ W o . We apply Theorem 2.3 in the simplified form of Remark 2.4, choosing N ← N -k, X ← ψ -1 (X ), ε ← 1/2 and ϕ ← ϕ 1 . Let B be as in line 3 of the algorithm 4.1. As already remarked in the proof of Lemma 4.2, ψ -1 (X ) is defined over a number field of degree O(1) by polynomials of degree O(1) and height O(h 0 ), with implicit constants depending only on N and δ. In particular, B = O(1). By the quoted Remark 2.4, one of the following assertions holds: (1) there exists a subtorus T of codimension 1 and degree bounded by B such that im(ϕ 1 ) ⊆ T ; (2) deg(im(ϕ 1 )) 1 2(N -k-1) = O(1 + h 0 ); (3) W has codimension 1. By construction, (1) is not possible because T ∈ Φ. Let us assume that (2) holds. By Lemma 4.2, D = deg(im(ϕ)) = deg(im(ψ • ϕ 1 )) = O(deg(im(ϕ 1 ))). Thus D 1 2(N -1) ≤ D 1 2(N -k-1) = O(deg(im(ϕ 1 )) 1 2(N -k-1) ) = O(1 + h 0 ). Choosing the constant B sufficiently large, this contradicts the inequality (4.1). Thus (3) must hold and W has codimension 1. This discussion implies that the ideal (F 1 , . . . , F s ) ⊂ K[y ±1 1 , . . . , y ±1 N -k ] becomes prin- cipal when restricted to a suitable neighborhood U ⊂ G N -k m of ψ -1 (X )\ϕ 1 (Ω). Hence, (F 1 , . . . , F s ) = (G) for some Laurent polynomial G on that neighborhood. We deduce that ϕ -1 1 (U ) is a neighborhood of the set of common zeros ξ ∈ Ω of f 1 , . . . , f s and (f 1 , . . . , f s ) = (g) on ϕ -1 1 (U ). This completes the proof of the theorem. Remark 4.3. For the study of multiple roots of sparse polynomials and, in particular, to prove Theorem 2.6, it is not enough to dispose of a version of Theorem 2.2 with an explicit dependence of its constant B on the height of the input polynomials. We really need the dichotomy that appears in Theorem 2.3, with a bound for the degree of T independent of the height of the equations defining X , whenever the degree of the torsion curve T is large enough. In any case, it is possible to adapt the proof of [BMZ07, Theorem 4.1] to prove such an effective version of Theorem 2.2. Proof of Theorem 1.1 Let N ≥ 1 and γ = (γ 0 , γ 1 , . . . , γ N ) ∈ Q N +1 . Consider the number field K = Q(γ) and the affine polynomial L = γ 0 + γ 1 x 1 + • • • + γ N x N ∈ K[x 1 , . . . , x N ]. Set δ = [K : Q] and h 0 = max j h(γ j ). Let a = (a 1 , . . . , a N ) ∈ Z N such that the univariate Laurent polynomial f = L(t a 1 , . . 
. , t a N ) = γ 0 + γ 1 t a 1 + • • • + γ N t a N is nonzero and has a multiple root at a point ξ ∈ Q \ µ ∞ . Set a 0 = 0 and assume for the moment that (5.1) ξ is not a multiple root of j∈Λ γ j t a j for every nonempty Λ {0, . . . , N }. We remark that (a 1 , . . . , a N ) = (0, . . . , 0), since otherwise f is a nonzero constant and cannot vanish at ξ. Set d = gcd(a 1 , . . . , a N ) and put a j = a j /d, j = 1, . . . , N . We apply Theorem 2.6 to the polynomials f 1 = γ 0 + γ 1 t a 1 + • • • + γ N t a N and f 2 = tf 1 = γ 1 a 1 t a 1 + • • • + γ N a N t a N , and the homomorphism ϕ : G m → G N m defined by ϕ(t) = (t a 1 , . . . , t a N ). Thus f = f 1 (t d ) and, in the notation of Theorem 2.6, D = |a |, L 1 = γ 0 + γ 1 x 1 + • • • + γ N x N and L 2 = γ 1 a 1 x 1 + • • • + γ N a N x N . We have h(f i ) ≤ h 0 + log(D). Let B = B (N, δ) be the constant which appears in Theorem 2.6. If the inequality (2.1) is not satisfied, we have D 1 2(N -1) ≤ B • (1 + h 0 + log(D)), which shows that D ≤ C 1 for some positive constant C 1 = C 1 (N, δ, h 0 ). In this case, we choose k = N -1, b j = a j , j = 1, . . . , N , θ 1 = d and C ≥ C 1 . Assertions (1), (2) and (3) of Theorem 1.1 are then clearly verified. We now assume that the inequality (2.1) is satisfied. Theorem 2.6 then gives a nonnegative integer k ≤ N and two morphisms ψ : G N -k m → G N m and ϕ 1 : G m → G N -k m satisfying the conditions (1), (2) and (3) of that theorem. Write ψ(y) = (y b 1 , . . . , y b N ) and ϕ 1 (t) = (t θ 1 , . . . , t θ N -k ) with b 1 , . . . , b N ∈ Z N -k of size ≤ B and θ 1 , . . . , θ N -k ∈ Z of size ≤ B D. By (1), the N × (N -k) matrix B = (b j,i ) is primitive and a = B • θ . We set F 1 = ψ # (L 1 ) = γ 0 + γ 1 y b 1 + • • • + γ N y b N , F 2 = ψ # (L 2 ) = γ 1 a 1 y b 1 + • • • + γ N a N y b N , and we consider the differential operator ∆ = θ 1 y 1 ∂ ∂y 1 + • • • + θ N -k y N -k ∂ ∂y N -k . Let b ∈ Z N -k . The monomial y b is an eigenvector of ∆ with eigenvalue the scalar product b, θ . Hence ∆F 1 = N i=1 γ i b i , θ y b i = F 2 . Set G = gcd(F 1 , F 2 ). By hypothesis, ξ d is a common non-cyclotomic root of f 1 and f 2 and, by the additional assumption (5.1), ξ d is not a multiple root of j∈Λ γ j t a j for any nonempty proper subset Λ of {0, . . . , N }. By Theorem 2.6(3), there exists an irreducible factor P of G such that π = ϕ # 1 (P ) ∈ K[t] vanishes at ξ d . We want to show that P is a multiple factor of G. Since P | F 1 and P | F 2 ∆F 1 , by standard arguments either P 2 | F 1 as we want, or ∆P = λP for a constant λ. Let us assume that this last assertion holds. Write P = b∈Z N -k c b y b and set supp(P ) = {b ∈ Z N -k | c b = 0} for the support of P . The differential equation ∆P = λP then says that the scalar product b, θ is constant over supp(P ), which in turns implies that π is a monomial. But then π cannot vanish at ξ d because the latter is nonzero, which is a contradiction. Thus P is a multiple factor of F 1 . Set θ i = dθ i , so that P (t θ 1 , . . . , t θ N ) is a multiple factor of f which vanishes at the point ξ, as required. Remark that k ≥ 1. Indeed the matrix B is primitive and the polynomial L 1 does not have multiple factors, since it is linear. Theorem 1.1 thus follows, under the additional hypothesis (5.1), by choosing C = max{C 1 , B }. We now explain how to remove the extra assumption (5.1). Let as assume that (5.1) does not hold. We decompose {0, . . . , N } as a maximal union of u ≥ 2 nonempty disjoint subsets Λ 1 , . . . 
, Λ u in such a way that ξ is a multiple root of j∈Λ i γ j t a j for i = 1, . . . , u. To simplify the notation, we assume u = 2 and Λ 1 = {0, . . . , M } with 0 ≤ M ≤ N -1. Thus ξ is a multiple root of both (5.2) γ 0 + M j=1 γ j t a j and N j=M +1 γ j t a j . Moreover, ξ is not a multiple root of j∈∆ γ j t a j for any nonempty ∆ which is a proper subset of {0, . . . , M } or of {M + 1, . . . , N }. We write γ 0 + γ 1 t a 1 + • • • + γ N t a N = (γ 0 + γ 1 t a 1 + • • • + γ M t a M ) + t a M +1 (γ M +1 + γ M +2 t a M +2 -a M +1 + • • • + γ N t a N -a M +1 ). We remark that a 1 , . . . , a M , a M +2 -a M +1 , . . . , a N -a M +1 are not all zeros, since otherwise the polynomials (5.2) are monomials vanishing at ξ, and hence they are both zero, which in turns implies that f is also zero, contrary to the assumption of Theorem 1.1. Set d = gcd(a 1 , . . . , a M , a M +2 -a M +1 , . . . , a N -a M +1 ) and put a j = a j /d, for j = 1, . . . , M, (a j -a M +1 )/d, for j = M + 3, . . . , N. Thus a 1 , . . . , a M , a M +2 , . . . , a N are coprime, pairwise distinct, nonzero integers. We apply Theorem 2.6 to the homomorphism ϕ : G m → G N -1 m defined by ϕ(t) = (t a 1 , . . . , t a M , t a M +2 , . . . , t a N ) , and for the four polynomials f 1 = γ 0 + M j=1 γ j t a j , f 2 = γ M +1 + N j=M +2 γ j t a j , f 3 = tf 1 , f 4 = tf 2 . Thus f = f 1 (t d ) + t a M +1 f 2 (t d ) and D = |a |. We argue as in the first part of the proof. We remark that h(f i ) ≤ h 0 + log(2D). Let B = B (N, δ) be the constant that appears in Theorem 2.6. If the inequality (2.1) is not satisfied, then D ≤ C 1 = C 1 (N, δ, h 0 ). In this case, we choose k = N -2, θ 1 = d, θ 2 = a M +1 and b j =      (a j , 0) for j = 1, . . . , M, (0, 1) for j = M + 1, (a j , 1) for j = M + 2, . . . , N. Thus, in the notation of Theorem 1.1(3), F = f 1 (y 1 ) + y a M +1 2 f 2 (y 1 ) ∈ Q[y ±1 1 , y ±1 2 ]. Since ξ is a multiple root of both f 1 (t d ) and f 2 (t d ), the polynomials f 1 (y 1 ) and f 2 (y 1 ) have a common multiple factor, say P (y 1 ), which vanishes at ξ d . Thus P (y 1 ) is a multiple factor of F and P (t d ) vanishes at ξ, as required. It remains to consider the case when the inequality (2.1) is satisfied. Theorem 2.6 then gives a nonnegative integer k ≤ N -1, vectors b 1 , . . . , b M , b M +2 , . . . , b N ∈ Z N -1-k of size ≤ B and θ 1 , . . . , θ N -1-k ∈ Z of size ≤ B D such that the (N - 1) × (N -1 -k) matrix (b j,i ) j,i has maximal rank N -1 -k and a j = N -1-k i=1 b j,i θ i for j = 1, . . . , M and j = M + 2, . . . , N . We set y = (y 1 , . . . , y N -1 ) and F 1 = γ 0 + M j=1 γ j y b j , F 2 = γ M +1 + N j=M +2 γ j y b j , F 3 = M j=1 γ j a j y b j , F 4 = N j=M +2 γ j a j y b j , and consider the differential operator ∆ = θ 1 y 1 ∂ ∂y 1 + • • • + θ N -1k y N -1-k ∂ ∂y N -1-k . As in the first part of the proof, we have that ∆F 1 = F 3 and ∆F 2 = F 4 . Set G = gcd(F 1 , F 2 , F 3 , F 4 ) and write f i = α∈S f i,α t α , i = 1, . . . , 4, with S = 4 i=1 supp(f i ) = {0, a 1 , . . . , a M , a M +2 , . . . , a N }. By hypothesis, ξ d is a common non-cyclotomic root of f 1 , f 2 , f 3 and f 4 . We want to deduce from Theorem 2.6(3) that ϕ # 1 (G) vanishes at ξ d . This certainly happens unless there exists a nonempty proper subset Γ of S such that ξ d is a common root of α∈Γ f i,α t α , i = 1, . . . , 4. Assume by contradiction that this is the case. Then ξ d is a multiple root of α∈Γ f i,α t α , i = 1, 2. We recall that supp(f 1 ) = {0, a 1 , . . . , a M }, supp(f 2 ) = {0, a M +2 , . . . , a N }. 
Since ξ is not a multiple root of j∈∆ γ j t a j for any nonempty ∆ which is a proper subset of {0, . . . , M } or of {M + 1, . . . , N }, we have Γ ∩ supp(f 1 ) = ∅ or Γ ∩ supp(f 1 ) = supp(f 1 ) and Γ ∩ supp(f 2 ) = ∅ or Γ ∩ supp(f 2 ) = supp(f 2 ). Since supp(f 1 ) ∩ supp(f 2 ) = ∅, we deduce that Γ = supp(f 1 ) ∪ supp(f 2 ), which contradict the previous assumption. Thus, by Theorem 2.6(3), ϕ # 1 (G) vanishes at ξ d . Let P be an irreducible factor of G such that π = ϕ # 1 (P ) ∈ K[t] vanishes at ξ d . As in the first part of the proof, P is a multiple factor of both F 1 and F 2 and thus of the polynomial F = γ 0 + γ 1 y b 1 + • • • + γ N y b N = F 1 (y 1 , . . . , y N -1 ) + y a M +1 N F 2 (y 1 , . . . , y N -1 ) with y = (y 1 , . . . , y N ). Set θ i = dθ i for i = 1, . . . , N -1 -k, θ N -k = a M +1 and b j =      (b j,1 , . . . , b j,N -1-k , 0) for j = 1, . . . , M, (0, . . . , 0, 1) for j = M + 1, (b j,1 , . . . , b j,N -1-k , 1) for j = M + 2, . . . , N. Then the N × (N -k) matrix B = (b j,i ) j,i has maximal rank and a = B • θ, so that P (t θ 1 , . . . , t θ N -1 ) is a multiple factor of f which vanishes at the point ξ. Theorem 1.1 then follows by choosing C = max{C 1 , B }. On a conjecture of Bolognesi and Pirola Let ϕ : G m → G N m be a homomorphism given by ϕ(t) = (t a 1 , . . . , t a N ) for a sequence of integers a 1 , . . . , a N such that 0 < a 1 < • • • < a N , and consider the curve U = im(ϕ). It is easy to verify that the linear subspace X ⊂ C N -1 defined by the condition rank       a 1 a 2 1 • • • a N -2 1 x 1 -1 a 2 a 2 2 • • • a N -2 2 x 2 -1 . . . . . . • • • . . . . . . a N a 2 N • • • a N -2 N x N -1       < N -1 has codimension 2, and that the restriction of its defining equations to (t a 1 , . . . , t a N ) vanish to order N -1 at t = 1. Thus, X is the osculating (N -2)-linear dimensional space of U at the point (1, . . . , 1) ∈ G N m . It is convenient to homogenize by letting a 0 = 0 and considering the (N + 1) × N matrix given by A(a, (x 0 : . . . : x N )) =       1 a 0 a 2 0 • • • a N -2 0 x 0 1 a 1 a 2 1 • • • a N -2 1 x 1 . . . . . . . . . • • • . . . . . . 1 a N a 2 N • • • a N -2 N x N       . Then we identify X with the linear subspace of P N defined by condition rank(A(a, (x 0 : . . . : x N ))) < N. For simplicity, we assume that a 1 , . . . , a N are coprime. Then L intersects U in a second point different from the osculating one if and only if there exists ξ = 1 such that rank(A(a, (1 : ξ a 1 : . . . : ξ a N ))) < N . In [START_REF] Bolognesi | Osculating spaces and Diophantine equations (with an appendix by Pietro Corvaja and Umberto Zannier)[END_REF], Bolognesi and Pirola conjecture that this can never happen. It easily seen that, to prove their conjecture, we may assume that ξ is not torsion. In the case N = 2 the conjecture is trivial. Bolognesi and Pirola proved the conjecture for N = 3. In [START_REF] Corvaja | On the rank of certain matrices[END_REF], Corvaja and Zannier proved a weak form of the conjecture for N = 4, namely that the set of exceptional pairs (a, ξ) such that the matrix A(a, (1 : ξ a 1 : . . . : ξ a N )) has rank < N is finite. As a second application of Theorem 2.6, we prove the following result. Theorem 6.1. There is a constant C depending only on N such that the following holds. Let a 1 , . . . , a N be integers such that 0 = a 0 < a 1 < a 2 < • • • < a N =: D and ξ ∈ Q × \ µ ∞ . If the matrix A(a, (1 : ξ a 1 : . . . 
: ξ a N )) has rank < N , then there exist (2) the matrix B = (b i,j ) i,j ∈ Z N ×(N -k) is primitive and a = B • θ; 1 ≤ k ≤ N - ( Proof. The proof is very similar to that of Theorem 1.1. Let 0 = a 0 < a 1 < a 2 < • • • < a N =: D and ξ ∈ Q × \ µ ∞ such that the matrix A(a, (1 : ξ a 1 : . . . : ξ a N )) has rank < N . For each subset Λ ⊂ {0, . . . , N }, we put v Λ,j = ξ a j if j ∈ Λ and v Λ,j = 0 otherwise. Then we assume that (6.1) for all nonempty Λ {0, . . . , N }, rank(A(a, (v Λ,0 : v Λ,1 : . . . : v Λ,N ))) = N. This extra assumption may be removed, proceeding as in the last part of the proof of Theorem 1.1. Let d = gcd(a 1 , . . . , a N ) and put a i = a i /d, i = 1, . . . , N . As in the proof of Theorem 1.1, we may assume, by replacing a by a , that d = 1. As already remarked, the linear space X defined by rank(A(a, (x 0 : x 1 : . . . : x N ))) < N is defined by two linear equations, say L i = γ i,0 x 0 + γ i,1 x 1 + • • • + γ i,N x n , i = 1, 2, with coefficients γ i,j bounded by N !D N 2 . We apply Theorem 2.6, choosing K 0 = Q, s = 2 and ϕ(t) = (t a 1 , . . . , t a N ). Thus f i = γ i,0 + γ i,1 t a 1 + • • • + γ i,N t a N , i = 1, 2. These two polynomials are not both zeros, since otherwise rank(A(a, (1 : t a 1 : . . . : t a N ))) < N identically, which is not possible by the assumption 0 < a 1 < a 2 < • • • < a N . Let B = B (N, 1) be the constant which appears in Theorem 2.6. If the inequality (2.1) of that theorem is not satisfied, we have that Algorithm 4. 1 1 Input: a subvariety X ⊂ G N m defined over a number field K and a homomorphism ϕ : G m → G N m . Output: an integer k with 0 ≤ k ≤ N -1 and two homomorphisms ψ : G N -k m → G N m and ϕ 1 : 1 and vectors b 1 , . . . , b N , θ ∈ Z N -k such that (1) |b i | ≤ C, i = 1, . . . , N , and |θ| ≤ CD; 3) the subvariety of G N -k m defined byV = {y ∈ G N -k m | rank(A(a, (1 : y b 1 : . . . : y b N ))) < N }has a component of codimension 1 containing the point (ξ θ 1 , . . . , ξ θ N -k ). -1) ≤ B • (1 + N 2 log D + N log N ),which shows that D ≤ C 1 for some positive constant C 1 = C 1 (N ). In this case we simply choose k = N -1, b i = a i for i = 1, . . . , N and θ 1 = 1. Assertions (1), (2) and (3) of Theorem 6.1 are clearly verified for C ≥ C 1 . This research was partially financed by the European project ERC Advanced Grant "Diophantine problems" (grant agreement n • 267273), the CNRS project PICS 6381 "Géométrie diophantienne et calcul formel", and the Spanish projects MINECO MTM2012-38122-C03-02, MINECO MTM2015-65361-P.. Thus we may assume that the inequality (2.1) is satisfied. Theorem 2.6 then gives a nonnegative integer k < N and two homomorphisms ψ : By Theorem 2.6(1), the matrix B = (b i,j ) i,j is primitive and a = B • θ. By the assumption (6.1), ξ is not a common root of j∈Λ γ i,j t a j , i = 1, 2, for any nonempty Λ {0, . . . , N }. Thus, by Theorem 2.6(3), the greatest common divisor of F 1 (y b 1 , . . . , y b N ) and F 2 (y b 1 , . . . , y b N ) must vanish at (ξ θ 1 , . . . , ξ θ N -k ). This means that V has a component of codimension 1 through the point (ξ θ 1 , . . . , ξ θ N -k ), as required. Since X is a linear space of codimension 2 and B is primitive, we must have k ≥ 1. Theorem 6.1 follows by choosing C = max{C 1 , B }. Remark 6.2. An immediate consequence of Theorem 6.1(1,2) is that the vectors a such that the matrix A(a, (1 : ξ a 1 : . . . : lie on a finite union of proper vector subspaces of Q N , which is effectively computable for every given N . 
Moreover, the condition (3) can be translated in terms of resultants, and can be checked by searching for integral points θ = (θ 1 , . . . , θ N -k ) ∈ Z N -k on a finite family of varieties depending only on N . More precisely, fix k ∈ {1, . . . , N -1} and fix one of the finitely many N × (N -k) primitive matrices B = (b i,j ) i,j with entries of size bounded by C(N ). Let R be the resultant of F 1 and F 2 with respect to, say, y N -k , and let W be the variety defined by the vanishing of the coefficients of R, viewed as a polynomial in the variables y 1 , . . . , y N -k-1 . Then V has a component of codimension 1 if and only if θ ∈ W and a = B • θ.
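As a rough illustration of this resultant-based check (not taken from the paper), the sketch below builds the restricted forms F 1 (y b 1 , . . . , y b N ) and F 2 (y b 1 , . . . , y b N ) for one fixed primitive matrix B, eliminates the last variable with a resultant, and collects the coefficients whose common vanishing cuts out W. The dimensions N = 4, k = 1, the rows of B and the symbolic coefficients are illustrative assumptions only.

```python
# Illustrative sketch only: a SymPy check of the resultant computation described
# above, for assumed toy dimensions N = 4, k = 1 and an assumed primitive B.
import sympy as sp

N, k = 4, 1
y = sp.symbols(f"y1:{N - k + 1}")            # y1, ..., y_{N-k}

# assumed exponent rows b_1, ..., b_N of B and symbolic coefficients gamma_{i,j}
B = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
gamma = [sp.symbols(f"g{i}_0:{N + 1}") for i in range(2)]

def restricted_form(i):
    """F_i(y^{b_1}, ..., y^{b_N}) = gamma_{i,0} + sum_j gamma_{i,j} * y^{b_j}."""
    expr = gamma[i][0]
    for j, b in enumerate(B, start=1):
        expr += gamma[i][j] * sp.Mul(*[v**e for v, e in zip(y, b)])
    return sp.expand(expr)

F1, F2 = restricted_form(0), restricted_form(1)

# eliminate the last variable; W is cut out by the coefficients of R viewed as a
# polynomial in the remaining variables y_1, ..., y_{N-k-1}
R = sp.resultant(F1, F2, y[-1])
W_equations = sp.Poly(R, *y[:-1]).coeffs()
print(len(W_equations), "defining equations for W")
```

In practice one would repeat this for each of the finitely many admissible matrices B and then look for integral points θ with a = B • θ on the resulting varieties.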
42,973
[ "177502" ]
[ "105", "300318", "4763" ]
01470957
en
[ "spi" ]
2024/03/04 23:41:46
2018
https://inria.hal.science/hal-01470957/file/_LTV_ParIdent_FxT_TAC_FV.pdf
Time-Varying Parameter Identification Algorithms: Finite and Fixed-Time Convergence H. Ríos † , D. Efimov ‡ * , J.A. Moreno § , W. Perruquetti ‡ and J.G. Rueda-Escobedo § Abstract-In this paper the problem of time-varying parameter identification is studied. To this aim, two identification algorithms are developed in order to identify time-varying parameters in a finite-time or prescribed time (fixed-time). The convergence proofs are based on a notion of finite-time stability over finite intervals of time, i.e. Short-finite-time stability; homogeneity for time-varying systems; and Lyapunov-based approach. The results are obtained under injectivity of the regressor term, which is related to the classical identifiability condition. The case of bounded disturbances (noise of measurements) is analyzed for both algorithms. Simulation results illustrate the feasibility of the proposed algorithms. Index Terms-Time-varying systems, Parameter identification, Finite/Fixed-time. I. INTRODUCTION T HE parameter identification problem for different kind of systems has been extensively studied during the last decades. One of the more important reasons is the need for accurate and efficient control for systems. The challenge of providing better models of physical phenomena leads to that the parameter identification problem becomes fundamental in industrial applications. System identification techniques are also used in signal processing applications (such as communications [START_REF] Huijberts | System identification in communication with chaotic systems[END_REF], geophysical engineering [START_REF] Zhongfang | Distributed parameter system identification and its application to geophysical parameter identification[END_REF] and mechanical engineering [START_REF] Sun | Idenitification of mechanical parameters of hard-coating materials with strain-dependence[END_REF]), in nontechnical fields such as biology [START_REF] Hasenauer | Parameter identification, experimental design and model falsification for biological network models using semidefinite programming[END_REF], environmental sciences and econometrics to improve the knowledge on the identified object, prediction and control. The identification theory basically deals with the problem of the efficient extraction of signal and system dynamic properties based on available data measurements. In the literature there exist many methods to identify parameters, and the most popular ones belong to the group of least squares (LS) methods; e.g. non-recursive methods of LS, recursive methods of LS, methods of weighted LS, exponential forgetting with constant forgetting factor, exponential forgetting with variable forgetting factor, etc. There exist also many modifications of the LS methods; e.g. method of generalized LS, method of extended LS, instrumental variables method, and some others like extended Kalman filter, modulating functions methods, sub-spaces methods, etc. (see, e.g. [START_REF] Ljung | System Identification: Theory for the User[END_REF] and [START_REF] Isermann | Identification of Dynamic Systems, An Introduction with Applications[END_REF]). It is worth mentioning that most of these methods were established for identifying constant parameters. For time-varying parameters, the methods of recursive LS can also be used [START_REF] Isermann | Identification of Dynamic Systems, An Introduction with Applications[END_REF]. 
For instance, in [START_REF] Chen | On-line parameter estimation for a class of time-varying continuous systems with bounded disturbances[END_REF] a non-recursive LS method is proposed for time-varying parameters. In this method, a polynomial approximation, based on Taylor expansion, with a bounded regressor vector is built and used to approximate the time-varying parameters. In [START_REF] Li | Recursive identification of time-varying systems: Self-tuning and matrix rls algorithms[END_REF] a new matrix forgetting factor recursive LS algorithm is proposed for time-varying parameters which satisfy a random walk model assumption. In the framework of adaptive estimation, in [START_REF] Zhu | Adaptive estimation of time-varying parameters in linearly parametrized systems[END_REF] a modified version of the LS algorithm is provided to estimate time-varying parameters by means of a polynomial approximation. However, most of these works are only able to follow slowly varying parameters and they can ensure at most exponential or asymptotic convergence to a neighborhood of the real value. In the context of finite-time (FT) convergence [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF], a recursive FT convergent algorithm has been presented in [START_REF] Moreno | A new recursive finite-time convergent parameter estimation algorithm[END_REF]. Such an algorithm is a non-linear recursive version of the LS algorithm, where the nonlinear injection terms provide FT convergence since they are designed based on the generalized Super-Twisting Algorithm (STA) [START_REF] Moreno | Strict lyapunov functions for the supertwisting algorithm[END_REF]. In this line of research, in [START_REF] Davila | Observation and identification of mechanical systems via second order sliding modes[END_REF] the STA has also been used for parameter identification of mechanical systems. However, the linearly filtered equivalent output injection signal of the STA is used to obtain the regressor, from which a standard LS recursive algorithm identifies the parameters asymptotically. Other parameter identification methods, using first order sliding-modes, are also based on the reconstruction of the equivalent control signals leading to asymptotic reconstruction algorithms (see, e.g. [START_REF] Xu | A VSS identification scheme for time-varying parameters[END_REF] where an identification scheme is developed for timevarying parameters). A FT and non-recursive LS algorithm is presented by [START_REF] Adetola | Finite-time parameter estimation in adaptive control of nonlinear systems[END_REF] for constant parameters. Such an algorithm is based on adaptive control, it requires to solve matrix valued ordinary differential equations and checking the convertibility of a matrix (persistence of excitation condition) online. This paper contributes to the development of two parameter identification algorithms that are able to identify time-varying parameters in a finite time and also in a prescribed time (that can be selected a priori), i.e. fixed-time (FxT) [START_REF] Polyakov | Nonlinear feedback design for fixed-time stabilization of linear control systems[END_REF]; respectively. The convergence proof of the FT identification algorithm is based on a notion of finite-time stability over finite intervals of time, i.e. 
Short-finite-time (Short-FT) stability [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF]; and homogeneity for time-varying systems [START_REF] Ríos | Homogeneity based uniform stability analysis for time-varying systems[END_REF]; a Lyapunov function approach is also given for this algorithm. On the other hand, the convergence proof corresponding to the FxT identification algorithm is also based on a Lyapunov function approach. The results are obtained under injectivity of the regressor term, which is related to the classical identifiability condition. It is worth saying that, to the best of our knowledge, an FxT algorithm for identification of time-varying parameters does not exist in the literature. Additionally, the case of bounded disturbances (noise of measurements) is analyzed for both algorithms. Simulation results illustrate the feasibility of the proposed algorithms. Structure of the paper: The problem statement is presented in the Section II. Some preliminary concepts and results are described in Section III. The FT identification algorithm is presented in Section IV based on time-varying homogeneity and Lyapunov-based approach, respectively. The main result, i.e. the FxT identification algorithm, is proposed in Section V. Some simulation results are depicted in Section VI and some concluding remarks are given in Section VII. Finally, all the proofs are postponed to the Appendix. Notation: Let q denote the Euclidean norm of a vector q ∈ R n , and 1, n a sequence of integers 1, ..., n. The induced norm for a matrix Q ∈ R m×n is given as Q := λ max (Q T Q) = σ max (Q) , where λ max (respectively, λ min ) is the maximum (respectively, the minimum) eigenvalue, and σ max is the maximum singular value. For a Lebesgue measurable function u : R ≥0 → R m define the norm u (t0,t1) := ess sup t∈(t0,t1) u(t) , then u ∞ = u (0,+∞) and the set of functions u with the property u ∞ < +∞ is denoted as L ∞ . A continuous function α : R ≥0 → R ≥0 belongs to class K if it is strictly increasing and α(0) = 0; it belongs to class K ∞ if it is also unbounded. A continuous function β : R ≥0 × R ≥0 → R ≥0 belongs to class KL if for each fixed s, β(r, s) ∈ K with respect to r, and for each fixed r, β(r, s) is decreasing to zero with respect to s. II. PROBLEM STATEMENT Consider the following time-varying system:1 dθ(t) dt = Θ(ωt), (1) y(t) = Γ T (ωt)θ(t) + ε(t), (2) where θ ∈ R n and y ∈ R m are the unknown parameter vector and the output available for measurements, respectively; while the term ε(t) ∈ R m represents some bounded disturbances (noise of measurements). It is assumed that ε is a Lebesgue measurable and essentially bounded signal, i.e. ε ∈ L ∞ . The term Γ : R → R n×m is a continuous function of time so-called regressor, and Θ : R → R n is a uniformly bounded Lebesgue measurable signal. The regressor Γ is known, and bounded, whilst Θ represents the unknown but bounded parameter dynamics and w is the frequency or rate of the time-varying part. The aim of this paper is to identify the time-varying parameter vector θ(t) in a finite and/or fixed time for the disturbance-free case and provide an ultimate bound for the disturbed case. III. PRELIMINARIES Consider a time-dependent differential equation [START_REF] Khalil | Nonlinear Systems[END_REF]: dx(t) dt = f (t, x(t)), t ≥ t0, t0 ∈ R, (3) where x(t) ∈ R n is the state vector; f : R × R n → R n is a continuous function with respect to x and piece-wise continuous with respect to t, f (t, 0) = 0 for all t ∈ R. 
The solution of the system (3) for an initial condition x 0 ∈ R n at time instant t 0 ∈ R is denoted as x(t, t 0 , x 0 ) and defined on some finite time interval [t 0 , t 0 + T ). A. Stability definitions Let Ω, Ξ be open neighborhoods of the origin in R n , 0 ∈ Ω ⊂ Ξ. Definition 1. [START_REF] Khalil | Nonlinear Systems[END_REF], [START_REF] Polyakov | Nonlinear feedback design for fixed-time stabilization of linear control systems[END_REF] At the steady state x = 0 the system (3) is said to be a) Uniformly stable (US) if for any > 0 there is δ( ) such that for any x 0 ∈ Ω, if x 0 ≤ δ( ) then x(t, t 0 , x 0 ) ≤ for all t ≥ t 0 , for any t 0 ∈ R; b) Uniformly finite-time stable (UFTS) if it is US and finitetime converging from Ω, i.e. for any x 0 ∈ Ω there exists 0 ≤ T x0 < +∞ such that x(t, t 0 , x 0 ) = 0 for all t ≥ t 0 + T x0 , for any t 0 ∈ R. The function T 0 (x 0 ) = inf{T x0 ≥ 0 : x(t, t 0 , x 0 ) = 0 ∀t ≥ t 0 + T x0 } is called the settling-time of the system (3). c) Uniformly fixed-time stable (UFxTS) if it is UFTS and the settling-time function T 0 (x 0 ) is bounded, i.e. ∃T max > 0 : T 0 (x 0 )≤ T max , for all x 0 ∈ Ω and for any t 0 ∈ R. If Ω = R n , then x = 0 is said to be globally US (GUS) / UFTS (GUFTS) / UFxTS (GUFxTS), respectively. In this work a special stability notion will be also used for a compact interval of initial times t 0 , and only on a fixed interval of time [START_REF] Dorato | Short-time stability in linear time-varying systems[END_REF], [START_REF] Weiss | On the stability of systems defined over a finite time interval[END_REF]. Definition 2. [17] At the steady state x = 0 the system (3) is said to be a) Short-time stable (Short-TS) with respect to (Ω, Ξ, T 0 , T f ) if for any x 0 ∈ Ω, x(t, t 0 , x 0 ) ∈ Ξ for all t ∈ [t 0 , T f ] for any t 0 ∈ [-T 0 , T 0 ]; b) Short-finite-time stable (Short-FTS) with respect to (Ω, Ξ, T 0 , T f ) if it is Short-TS with respect to (Ω, Ξ, T 0 , T f ) and finite-time converging from Ω with the convergence time T t0,x0 ≤ T f for all x 0 ∈ Ω and t 0 ∈ [-T 0 , T 0 ]; c) Globally short-finite-time stable (GShort-FTS) if for any bounded set Ω ⊂ R n containing the origin there exist a bounded set Ξ ⊂ R n , Ω ⊂ Ξ and T f > 0 such that the system is Short-FTS with respect to (Ω, Ξ, T 0 , T f ) for any T 0 . In [START_REF] Dorato | Short-time stability in linear time-varying systems[END_REF] and [START_REF] Weiss | On the stability of systems defined over a finite time interval[END_REF] the short-time stability is considered only for a fixed initial time instant t 0 . This notion is used here to avoid a confusion with finite-time stability from [START_REF] Roxin | On finite stability in control systems[END_REF] and [START_REF] Bhat | Geometric homogeneity with applications to finite-time stability[END_REF]; since both concepts of stability are used in this work. IV. SHORT-FINITE-TIME IDENTIFICATION ALGORITHM In this section the FT identification algorithm is presented. The convergence to zero of the parameter identification error will be proved based on homogeneity for time-varying systems and Short-FT stability; results introduced previously by [START_REF] Ríos | Homogeneity based uniform stability analysis for time-varying systems[END_REF] and [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF]. For simplicity and brevity it is assumed that t 0 = 0. 
In order to estimate the parameter vector θ, the following nonlinear algorithm can be introduced2 θ(t) = -KΓ(ωt) Γ T (ωt) θ(t) -y(t) γ , (4) where • γ := |•| γ sign(•), with |•| and sign(•) understood in the component-wise sense, and γ ∈ [0, 1); the matrix K ∈ R n×n is symmetric and positive definite, i.e. K = K T > 0. Define σ Γmin and σ Γmax as the minimum and maximum singular values of Γ(ωt) for all t ≥ 0, respectively. Then, let us introduce the following assumption. Assumption 1. The regressor term Γ(ωt) is such that σ Γmin ≥ σ > 0 for all t ≥ 0; while the term Θ(ωt) ∈ L ∞ with Θ(ωt) ∞ ≤ k(ωt) ≤ Λ for all t ≥ 0, for a known continuous function k : R → R ≥0 and a known constant Λ > 0. The assumption σ Γmin ≥ σ > 0 for all t ≥ 0 implies that m ≥ n and it is equivalent to the classic identifiability condition corresponding to the injectivity of the regressor term, i.e. rank(Γ(ωt)) = n, for each instant of time t. Let us define the error θ(t) = θ(t) -θ(t). Hence, the error dynamics is given by θ(t) = -KΓ(ωt) Γ T (ωt) θ(t) -ε(t) γ -Θ(ωt). (5) In the following, the Short-FT stability statements given by Lemma 3 and Corollary 1 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF] will be applied, separately; to prove that error dynamics [START_REF] Ljung | System Identification: Theory for the User[END_REF], for the disturbance-free case, is GShort-FTS for γ = 0, and globally ultimate bounded for γ ∈ (0, 1); while for the disturbed case an ultimate bound is given for any γ ∈ [0, 1). All the proofs are described in the Appendix. A. Homogeneity-based approach: Disturbance-free case Let us consider ε = 0. Then, the following result is established. Theorem 1. Let Assumption 1 be satisfied. If Θ(0) = 0 and ε = 0; then, for any ρ > 0 and T 0 > 0 there exist ω 0 > 0, ϑ ≥ 1 and T f > T 0 ; such that system [START_REF] Ljung | System Identification: Theory for the User[END_REF] with ω ∈ [-ω 0 , ω 0 ], for γ = 0 and K = K T > 0, is Short-Finite-Time Stable with respect to (B ρ , B ϑρ , T 0 , T f ). Remark 1. According to Theorem 1, the Short-FT stability is preserved for a frequency spectrum sufficiently close to zero (see Lemma 3 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF]). B. Lyapunov-based approach: Disturbance-free case Let us consider ε = 0. Thus, based on the statements given by Corollary 1 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF], the following result is given 3 . Theorem 2. Let Assumption 1 be satisfied. If there exist ω 0 > 0 and δ > 0 such that k is a periodic continuous function with k ∈ L 1 ,δ , and (s) = sup |t|≤s k(ω 0 t); then, the system (5), for γ = 0 and K = K T > 0, is Globally Short-Finite-Time Stable. Remark 2. From the proof of Theorem 2, if the matrix K is such that λmin(K) > (λ 1/2 max(K )Λ/σΓ min ) 2/3 , then system (5) is UFTS. Let us consider the case in which γ ∈ (0, 1). Then, the following result is established. Corollary 1. Let Assumption 1 be satisfied. Then, the system (5), for γ ∈ (0, 1) and K = K T > 0, is globally ultimate bounded, and its trajectories satisfy the following bound θ(t) ≤ λmax(K) λmin(K) µ, ∀t ≥ T ( θ(0)), (6) with µ = Λ λmin(K)σ γ+1 Γ min δ 1 γ , T ( θ(0)) ≤ max   0, 2 θ(0) 1-γ σ γ+1 Γ min (1 -δ)λmin(K)(1 -γ)   , δ ∈ (0, 1), and θ(0) ∈ R n . 
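As a quick numerical illustration of the estimator (4) and of the bound of Corollary 1, the following sketch integrates the estimator with a forward-Euler step on an invented two-parameter example. The regressor Γ, the parameter trajectories, the gain K = 3I and the sample time are arbitrary choices, and the code is not the implementation used for the simulations reported later in the paper.

```python
# A rough forward-Euler sketch of the estimator (4) on an invented example; it
# is not the implementation used for the simulations reported in the paper.
import numpy as np

def signed_power(z, p):
    """Component-wise [z]^p = |z|^p * sign(z), as used in (4)."""
    return np.abs(z) ** p * np.sign(z)

dt, T, gamma = 1e-3, 10.0, 0.0          # gamma = 0: Short-FT case of Theorem 2
K = 3.0 * np.eye(2)                     # K = K^T > 0
t = np.arange(0.0, T, dt)

theta = np.column_stack([np.cos(0.2 * t), np.sin(0.1 * t)])   # assumed theta(t)
theta_hat = np.zeros(2)

for i, ti in enumerate(t):
    # assumed regressor with sigma_min(Gamma) bounded away from zero
    Gamma = np.array([[1.0 + 0.5 * np.sin(ti), 0.2],
                      [0.1, 1.0 + 0.5 * np.cos(ti)]])
    y = Gamma.T @ theta[i]                                    # noise-free output (2)
    theta_hat += dt * (-K @ Gamma @ signed_power(Gamma.T @ theta_hat - y, gamma))

print("final identification error:", np.linalg.norm(theta_hat - theta[-1]))
```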
3 The same result has been previously obtained in [START_REF] Rueda-Escobedo | Discontinuous gradient algorithm for finite-time estimation of time-varying parameters[END_REF] for the discontinuous algorithm. Remark 3. The solutions of system (5) enter into the bound (6) at most in a finite time T ( θ(0)). According to Definition 2, system (5) is GShort-FTS with respect to the set { θ ∈ R n : θ ≤ λmax(K)/λmin(K)µ}. Remark 4. Corollary 1 shows that the parameter identification error may be reduced according to the choice of the gain K and the parameter γ since the size of µ depends on the value of both of them. C. Lyapunov-based approach: Disturbed case Let us consider ε = 0 and introduce the following assumption. Assumption 2. The disturbance term ε ∈ L ∞ with ε ∞ ≤ ε and a known constant ε > 0. Thus, based on the statements given by Corollary 1, the following result is established. Corollary 2. Let Assumptions 1 and 2 be satisfied. Then, the system (5), for γ ∈ [0, 1) and K = K T > 0, is globally ultimate bounded, and its trajectories satisfy the following bound θ(t) ≤ λmax(K) λmin(K) µ, ∀t ≥ T ( θ(0)), (7) with µ = max   ε σΓ min , σΓ max m 2-γ 4 ε γ λmin(K) + Λ λmin(K)σ γ+1 Γ min δ 1 γ   , T ( θ(0)) ≤ max   0, 2 θ(0) 1-γ σ γ+1 Γ min (1 -δ)λmin(K)(1 -γ)   , δ ∈ (0, 1), m ∈ N the dimension of y, and θ(0) ∈ R n . Remark 5. The solutions of system (5) enter into the bound (7) at most in a finite time T ( θ(0)). Additionally, system (5) is GShort-FTS with respect to the set { θ ∈ R n : θ ≤ λmax(K)/λmin(K)µ}. Remark 6. Corollary 2 shows that the parameter identification error converges to a neighborhood of the origin that depends on the magnitude of the noise, i.e. ε, the choice of the gain K and the parameter γ. V. FIXED-TIME IDENTIFICATION ALGORITHM Let us introduce a modification of the nonlinear algorithm (4), i.e. θ(t) = -KΓ(ωt) Γ T (ωt) θ(t) -y(t) γ 1 + Γ T (ωt) θ(t) -y(t) γ 2 , (8) where γ 1 ∈ [0, 1), γ 2 > 1 and K = K T > 0. The error dynamics is given as follows θ(t) = -KΓ(ωt) Γ T (ωt) θ(t) -ε(t) γ 1 + Γ T (ωt) θ(t) -ε(t) γ 2 -Θ(ωt). (9) Note that, since γ 1 ∈ [0, 1) and γ 2 > 1, (9) is not homogeneous. Therefore, only the Lyapunov-based approach is used to prove the FxT stability. In the following, the FxT stability statements given by Lemma 1 in [START_REF] Polyakov | Nonlinear feedback design for fixed-time stabilization of linear control systems[END_REF] will be applied to prove that error dynamics [START_REF] Zhu | Adaptive estimation of time-varying parameters in linearly parametrized systems[END_REF], for the disturbance-free case, is GFxTS for γ 1 = 0 and γ 2 > 1, and globally ultimate bounded for γ 1 ∈ (0, 1) and γ 2 > 1; while for the disturbed case an ultimate bound is given for any γ 1 ∈ (0, 1) and γ 2 > 1. All the proofs are described in the Appendix. A. Lyapunov-based approach: Disturbance-free case Let us consider ε = 0. Based on the statements given by Lemma 1 in [START_REF] Polyakov | Nonlinear feedback design for fixed-time stabilization of linear control systems[END_REF], the following result is established. Theorem 3. Let Assumption 1 be satisfied. 
If the following conditions hold: 1) λmin(K) > max(λ1, λ2), with λ 1 =   n γ 2 -1 2(γ 2 +1) λmax(K)Λ 2 γ 2 2 σ γ 2 +1 Γ min   2 γ 2 +3 , λ 2 = λmax(K)Λ σ Γ min 2 3 ; 2) γ 1 = 0 and γ 2 > 1; then, the system (9) is Globally Fixed-Time Stable with settling time T ≤ 2(α(γ2 + 1) + β) αβ(γ2 + 1) , for all θ(0) ∈ R n and α = σ γ 2 +1 Γ min (2λmin(K)) γ 2 +1 2 n γ 2 -1 2(γ 2 +1) - 2λmax(K) λmin(K) Λ, β = σΓ min (2λmin(K)) 1 2 - 2λmax(K) λmin(K) Λ. Remark 7. The solutions of the error dynamics (9), for γ1 = 0 and γ2 > 1, go to zero at most in a fixed time T that is independent of θ(0). Let us consider the case in which γ 1 ∈ (0, 1) and γ 2 > 1. Then, based on the statements given by Corollary 1 and Theorem 3, the following result is established. 2) γ 1 ∈ (0, 1) and γ 2 > 1; then, the system (9) is globally ultimate bounded, and its trajectories satisfy the following bound θ(t) ≤ λmax(K) λ min (K) min 2λmax(K), µ f , ∀t ≥ T ( θ(0)), (10) with µ f = Λ λmin(K)σ γ 1 +1 Γ min δ 1 γ 1 , T ≤ max 0, 2 α(γ2 + 1) , δ ∈ (0, 1), and all θ(0) ∈ R n . Remark 8. The solutions of (9), for γ1 ∈ (0, 1) and γ2 > 1, enter into the bound (10) at most in a fixed time T that is independent of θ(0). In this sense, the Algorithm described by (8) may possess a faster rate of convergence to the bound (10) than the Algorithm given by (4). Remark 9. Corollary 3 shows that the parameter identification error could be adjusted according to the choice of the gain K and the parameter γ1. B. Lyapunov-based approach: Disturbed case Let us consider ε = 0 . Thus, based on the statements given by Corollary 3, the following result is established. Corollary 4. Let Assumptions 1 and 2 be satisfied. If the following conditions hold: 1) λmax(K) < λ3, with λ 3 =      2 2-γ 2 2 σ γ 2 +1 Γ min n γ 2 -1 2(γ 2 +1) (σ Γmax (m 2-γ 1 4 ε γ 1 + m 2-γ 2 4 ε γ 2 )λ min (K) + Λ)      1 2 ; 2) γ 1 ∈ [0, 1), and γ 2 > 1; then, the system (9) is globally ultimate bounded, and its trajectories satisfy the following bound θ(t) ≤ λmax(K) λ min (K) min 2λmax(K), µ f , ∀t ≥ T, (11) with µ f = max       ε σ Γ min ,     σ Γmax (m 2-γ 1 4 ε γ 1 + m 2-γ 2 4 ε γ 2 )λ min (K) + Λ λ min (K)σ γ 1 +1 Γ min δ     1 γ 1       , T ≤ max 0, 2 ᾱ(γ 2 + 1) , δ ∈ (0, 1), m ∈ N the dimension of y, all θ(0) ∈ R n , and ᾱ = 2 3-γ 2 2 σ γ 2 +1 Γ min (λmin(K)) γ 2 +1 2 n γ 2 -1 2(γ 2 +1) - 2λmax(K) λmin(K) Λ. Remark 10. The solutions of (9) enter into the bound (11) at most in a fixed time T . Remark 11. Corollary 4 shows that the parameter identification error converges to a neighborhood of the origin that depends on the magnitude of the noise, i.e. ε, the choice of the gain K and the parameters γ1 and γ2; in a fixed time T . VI. SIMULATION RESULTS A. Automatic throttle valve actuator Consider the behavior of an automatic throttle valve actuator [START_REF] Isermann | Identification of Dynamic Systems, An Introduction with Applications[END_REF]. The DC motor is described by U (t) = R(t)i(t) + ψ(t)ω(t), M el (t) = ψ(t)i(t), where U is the armature voltage, M el is an electrical time variable, R and ψ are the unknown time-varying armature resistance and magnetic flux linkage, respectively; while i and ω are the armature phase current and motor angular speed, respectively. 
The mechanical part is modeled as J ω(t) = ψ(t)i(t) - 1 v csϕ(t) + fp + fc ω(t) 0 + fs ω(t) v , φ(t) = ω(t) v , ω(0) = 0, ϕ(0) = 0, where J is the inertia, v is the gear ratio, c s and f p are the spring constant and pretension, respectively; f c is the Coulomb friction torque, f s is the viscous friction torque, and ϕ is the angular throttle position. The model for the parameter identification is then given as y(t) = Γ T (t)θ(t) + ε(t), where y(t ) := [U (t), M el (t)] T is the measured output, θ(t) := [R(t), ψ(t)] T is the unknown time-varying parameter vector, ε(t) represents the disturbances, and Γ(t) = i(t) 0 ω(t) i(t) . The parameters of the model are given in Table I. It is easy to show that the given example satisfies Assumption 1 with σ Γmin = 0.5147 and Λ = 0.3. The simulations have been done in Matlab Simulink with the Euler discretization method and sample time equal to 0.001. The FT and FxT algorithms, i.e. (4) and [START_REF] Li | Recursive identification of time-varying systems: Self-tuning and matrix rls algorithms[END_REF], respectively; are implemented for the disturbance-free case, i.e. ε(t) = 0, with γ = 0 for the FT algorithm, and γ 1 = 0, γ 2 = 1.5 for the FxT algorithm; both of them with K = 3I, and different initial conditions. Note that this value of K satisfies the conditions of Theorem 2, i.e. K = K T > 0, and also the conditions of Theorem 3, i.e. λ min (K) > max(λ 1 , λ 2 ) with λ 1 = 0.0845 and λ 2 = 0.7481. The results are depicted by Figs. 1 and2. The results illustrate the statements given by Theorem 2 and 3, i.e. FT and FxT convergence, respectively. For the disturbed case, the FT and FxT algorithms are designed in the same way as in the previous simulation taking into account that the disturbance ε(t) := [ε 1 (t), ε 1 (t)] T is given by a bounded continuous signal such that ε 1 ∞ ≤ ε = 2. Note that the value K = 3I satisfies the conditions of Corollary 2, i.e. K = K T > 0, and also the conditions of Corollary 4, i.e. λ max (K) < λ 3 = 3.5230. The results are depicted by Fig. 3. The results illustrate the statements given by Corollary 2 and 4, i.e. global ultimate boundedness. Now, the algorithms given in ( 4) and ( 8), are implemented with K = 3I and different values of γ, γ 1 ∈ (0, 1), respectively; and γ 2 ∈ [1.5, 3.0] for the algorithm given by ( 8) and the disturbance case. The parameter identification error for both algorithms is depicted by Fig. 4. The results illustrate the statements given by Remarks 6 and 11, respectively. 4)), the bottom graph for the case γ 1 = 0 and γ 2 ∈ [1.5, 3.0], while the middle graph for the case γ 1 ∈ (0, 1) and γ 2 = 1.5 (algorithm (8)), with θ(0) = (0, 1) T and θ(0) = (10, 10) T . θ(t) vs θ(t) θ1(t) θ1(t) θ2(t) θ2(t) B. Example with relaxed Assumption 1 Let us consider another example, i.e. θ1(t) = sin(3.5t) + cos(2t), θ1(0) = 2, θ2(t) = sin(0.1t) + cos(3t), θ2(0) = 2, with the following structure for the regressor Γ(t) = 0.2 cos(50t) -0.1 cos(50t) 0.2 cos(50t) 0.1 -0.05 0.1 . For this example rank(Γ(t)) = 1 for all t, i.e. Assumption 1 is not satisfied, and the injectivity condition of the regressor term does not hold for each instant of time. The FxT algorithm (8) is implemented with γ 1 = 0, γ 2 = 1.5, K = 15I, and initial conditions θ(0) = (1, 1) T ; and for comparison purpose the Pseudo-inverse solution (least-square) is also implemented, i.e. θ(t) = (Γ(t)Γ T (t)) -1 Γ(t)y. The results are depicted by Fig. 5 for the disturbance-free case, i.e. ε(t) = 0. 
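A minimal sketch of this last experiment is given below, in Python rather than the Matlab/Simulink setup quoted above. It Euler-integrates the parameter dynamics of Section VI-B and the fixed-time estimator (8) with γ1 = 0, γ2 = 1.5, K = 15I and step 0.001; the simulation length and the way θ(t) is integrated are assumptions made for illustration only.

```python
# A rough Python sketch of this experiment (the paper uses Matlab/Simulink):
# Euler integration with step 0.001 of the parameter dynamics of Section VI-B
# and of the fixed-time estimator (8) with gamma_1 = 0, gamma_2 = 1.5, K = 15I.
import numpy as np

def signed_power(z, p):
    return np.abs(z) ** p * np.sign(z)

dt, T = 1e-3, 20.0                         # simulation length T is an assumption
g1, g2 = 0.0, 1.5
K = 15.0 * np.eye(2)
t = np.arange(0.0, T, dt)

# theta_dot as given in the example, Euler-integrated from theta(0) = (2, 2)
theta = np.empty((len(t), 2))
theta[0] = [2.0, 2.0]
for i in range(1, len(t)):
    theta[i] = theta[i - 1] + dt * np.array(
        [np.sin(3.5 * t[i - 1]) + np.cos(2.0 * t[i - 1]),
         np.sin(0.1 * t[i - 1]) + np.cos(3.0 * t[i - 1])])

theta_hat = np.array([1.0, 1.0])           # initial estimate as in the text
for i, ti in enumerate(t):
    Gamma = np.array([[0.2 * np.cos(50 * ti), -0.1 * np.cos(50 * ti), 0.2 * np.cos(50 * ti)],
                      [0.1, -0.05, 0.1]])  # rank(Gamma) = 1 for all t
    e = Gamma.T @ theta_hat - Gamma.T @ theta[i]        # disturbance-free case
    theta_hat = theta_hat + dt * (-K @ Gamma @ (signed_power(e, g1) + signed_power(e, g2)))

print("final estimation error:", np.linalg.norm(theta_hat - theta[-1]))
```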
θ(t) vs θ(t) θ1(t) θ1(t) θ2(t) θ2(t) θ(t) vs θ(t) θ1(t) θ1(t) θ2(t) θ2(t) Figure 5. FxT Algorithm vs Pseudo-inverse Solution Disturbance-Free. The top graph shows the parameter identification for the FxT algorithm while the bottom one depicts the solution given by the least-square method. Despite that Assumption 1 does not hold, the proposed FxT Algorithm works while the conventional inversion algorithm fails to provide an estimation in this case. The explanations of this phenomenon lies in the persistence of excitation condition which is a subject of future research. VII. CONCLUSIONS Two identification algorithms, i.e. FT and FxT algorithms, are proposed that are able to identify time-varying parameters in a finite-time and also in a prescribed time, respectively. The convergence proof of the FT identification algorithm is based on Short-FT stability and homogeneity for time-varying systems; and also a Lyapunov-based approach is given for this algorithm. On the other hand, the convergence proof of the FxT algorithm is based on a Lyapunov-based approach. The results are obtained under injectivity of the regressor term, which is related to the classical identifiability condition. It is worth saying that, to the best of our knowledge, an FxT algorithm to identify time-varying parameters does not exist in the literature. Additionally, the case of bounded disturbances (noise of measurements) is analyzed for both algorithms. Simulation results depict the feasibility of the proposed algorithms. The persistence of excitation properties are in the scope of future research. APPENDIX Let us introduce the following class of functions for ∈ K and δ > 0: L m ,δ = {d : R → R m : d(s) ≤ (s) ∀s ≥ 0; ∃τ > 0 : d(s) = 0, ∀|s| ≥ τ ; max{ d 1, d ∞} ≤ δ} , where d 1 = +∞ -∞ d(t) dt, d ∞ = sup t∈R d(t) . Proof of Theorem 1: Let us apply the statements given in Lemma 3 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF] in order to prove that the system ( 5) is Short-FTS: 1. System ( 5) is r-homogeneous with degree ν = -1 for (r1, r2, . . . , rn) = (1, 1, . . . , 1) and γ = 0. 2. Let us prove that system (5) is GAS for ω = 0. Assume that Θ(0) = 0 and define Γ0 = Γ(0). Then, let us consider the following candidate Lyapunov function V (t) = 1 2 θT K -1 θ. ( 12 ) The time derivative along the trajectories of system ( 5) is given as follows V (t) = θT K -1 -KΓ0 Γ T 0 θ γ = -θΓ0 Γ T 0 θ γ . Note that θT Γ0 Γ T 0 θ γ = m i=1 |( θT Γ0)i| γ+1 = θT Γ0 γ+1 γ+1 , and since θT Γ0 ≤ θT Γ0 γ+1 holds for all 2 > γ + 1 > 0, V may be bounded as follows V (t) ≤ -θT Γ0 γ+1 γ+1 ≤ -θT Γ0 γ+1 , ≤ -θT Γ0Γ T 0 θ γ+1 2 ≤ -σ γ+1 Γ min θ γ+1 . Hence, V is negative definite and thus, GAS is concluded for ω = 0. 3. Since Γ is a continuous function of time and Θ is a uniformly bounded Lebesgue measurable signal, Assumption 1 in in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF] is satisfied for all γ ∈ [0, 1). Therefore, based on Lemma 3 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF], there exist ω0 > 0, ϑ ≥ 1 and T f > T 0 ; such that system [START_REF] Ljung | System Identification: Theory for the User[END_REF] with ω ∈ [-ω0, ω0], γ = 0, ε = 0 and K = K T > 0 is Short-FTS with respect to (Bρ, B ϑρ , T 0 , T f ). 
Proof of Theorem 2: Let us consider the Lyapunov function [START_REF] Moreno | Strict lyapunov functions for the supertwisting algorithm[END_REF] which satisfies the following inequalities c -1 1 θ 2 ≤ V ≤ c -1 2 θ 2 , (13) c -γ+1 2 1 θ γ+1 ≤ V γ+1 2 ≤ c -γ+1 2 2 θ γ+1 , (14) where c1 = 2λmax(K) and c2 = 2λmin(K). The function V is positive definite, radially unbounded, and continuously differentiable with its time derivative satisfying V (t) = θT K -1 -KΓ Γ T θ γ -Θ(ωt) , ≤ -θT Γ Γ T θ γ + θ Θ(ωt) ∞ λmin(K) . Then, recalling that θT Γ Γ T θ γ = θT Γ γ+1 γ+1 , and since θT Γ ≤ θT Γ γ+1 holds for all 2 > γ + 1 > 0, and Θ(ωt) ∞ ≤ k(ωt), V may be bounded as follows V (t) ≤ -θT Γ γ+1 + k(ωt) λmin(K) θ , ≤ -σ γ+1 Γ min θ γ+1 + k(ωt) λmin(K) θ , ≤ -σ γ+1 Γ min c γ+1 2 2 V γ+1 2 (t) + 2 √ c1 c2 k(ωt)V 1 2 (t). (15) Let us assume that γ = 0. Therefore, from ( 15) and ( 13), it follows that V (t) ≤ -σΓ min √ c2V 1 2 (t) + 2 √ c1 c2 k(ωt)V 1 2 (t). Applying Corollary 1 in [START_REF] Ríos | Homogeneous time-varying systems: Robustness analysis[END_REF], with α = σΓ min √ c2, η = 0.5, and assuming that k is periodic, and such that k ∈ L 1 ,δ , with (s) = sup |t|≤s k(ω0t), one can conclude that the error ( 5) is GShort-FTS. Proof of Corollary 1: From ( 15) and the fact that Θ(ωt) ∞ ≤ Λ, for all t ∈ R, it follows that V (t) ≤ -σ γ+1 Γ min θ γ+1 + Λ λmin(K) θ , ≤ -σ γ+1 Γ min (1 -δ) θ γ+1 -σ γ+1 Γ min δ θ γ+1 + Λ λmin(K) θ , where δ ∈ (0, 1). Then, V (t) ≤ -σ γ+1 Γ min (1 -δ) θ γ+1 , ∀ θ ≥ µ. (16) From ( 16), it follows that V (t) ≤ -σ γ+1 Γ min (1 -δ)c γ+1 2 2 V γ+1 2 (t), and by the comparison principle (see, e.g. [START_REF] Khalil | Nonlinear Systems[END_REF]), one obtains V (t) ≤   V 1-γ 2 (0) - σ γ+1 Γ min (1 -δ)c γ+1 2 2 (1 -γ) 2 t   2 1-γ , then, the last inequality ensures that θ satisfies the following bound θ(t) ≤ √ c1×   c γ-1 2 2 θ(0) 1-γ - σ γ+1 Γ min (1 -δ)c γ+1 2 2 (1 -γ) 2 t   1 1-γ , (17) for all t < T ( θ(0)); while for all t≥T ( θ(0)), from [START_REF] Davila | Observation and identification of mechanical systems via second order sliding modes[END_REF], it is obtained that θ is bounded as in (6), i.e. θ ≤ λmax(K)/λmin(K)µ. Note that ( 6) and ( 17) hold for any θ(0) ∈ R n , with no restriction on how large µ is. Hence, it is concluded that the solutions of system (5) are globally ultimate bounded with its trajectories satisfying the bound given by [START_REF] Isermann | Identification of Dynamic Systems, An Introduction with Applications[END_REF]. Proof of Corollary 2: Let us consider the Lyapunov function [START_REF] Moreno | Strict lyapunov functions for the supertwisting algorithm[END_REF] which satisfies the inequalities ( 13) and [START_REF] Xu | A VSS identification scheme for time-varying parameters[END_REF]. The time derivative of V satisfies V (t) = θT K -1 -KΓ Γ T θ -ε γ -Θ(ωt) , ≤ -θT Γ Γ T θ -ε γ + θ Λ λmin(K) . Consider, in the component-wise sense, that |(Γ T θ)i| ≥ |εi|, for all i = 1, m. Therefore, sign(Γ T θ -ε) = sign(Γ T θ) is implied. Then, for any x, x ∈ R and γ ∈ [0, 1), the inequality |x+ x| γ ≤ |x| γ +|x| γ holds [START_REF] Mitrinović | Grundlehren der mathematischen Wissenschaften[END_REF]. Hence, defining x = (Γ T θ)i -εi and x = εi, it follows that for all i = 1, n |(Γ T θ)i| γ = |(Γ T θ)i -εi + εi| γ ≤ |(Γ T θ)i -εi| γ + |εi| γ , ⇒ |(Γ T θ)i| γ -|εi| γ ≤ |(Γ T θ)i -εi| γ , and then, in the component-wise sense, one gets that -|Γ T θ -ε| γ ≤ -|Γ T θ| γ + |ε| γ , (18) holds for any γ ∈ [0, 1). 
Applying [START_REF] Ríos | Homogeneity based uniform stability analysis for time-varying systems[END_REF], one obtains that V is upper bounded as follows V (t) ≤ -θT Γ Γ T θ γ -|ε| γ sign(Γ T θ) + θ Λ λmin(K) . Recall that • γ , | • | and sign(•) are understood in the componentwise sense, i.e. Γ T θ γ =       |(Γ T θ) 1 | γ sign((Γ T θ) 1 ) |(Γ T θ) 2 | γ sign((Γ T θ) 2 ) . . . |(Γ T θ)m| γ sign((Γ T θ)m)       , |ε| γ sign(Γ T θ) =       |ε 1 | γ sign((Γ T θ) 1 ) |ε 2 | γ sign((Γ T θ) 2 ) . . . |εm| γ sign((Γ T θ)m)       . Then, it follows that ε holds for all 2 > 2γ > 0, it is given that V (t) ≤ -σ γ+1 V (t) ≤ -σ γ+1 Γ min (1 -δ) θ γ+1 -σ γ+1 Γ min δ θ γ+1 + σΓ max m 2-γ 4 ε γ λmin(K) + Λ λmin(K) θ , and then, V (t) ≤ -σ γ+1 Γ min (1 -δ) θ γ+1 , ∀ θ ≥ µ. (19) The rest of the proof follows the same steps as the proof of Corollary 1 providing that the solutions of system [START_REF] Ljung | System Identification: Theory for the User[END_REF], with ε = 0, are globally ultimate bounded with its trajectories satisfying the bound given by [START_REF] Chen | On-line parameter estimation for a class of time-varying continuous systems with bounded disturbances[END_REF]. Proof of Theorem 3: Consider the Lyapunov function [START_REF] Moreno | Strict lyapunov functions for the supertwisting algorithm[END_REF] satisfying inequalities ( 13)-( 14) with c1 = 2λmax(K) and c2 = 2λmin(K). Its time derivative along the trajectories of the error dynamics ( 9) is given by V (t) = -θT Γ Γ T θ γ 1 + Γ T θ γ 2 -θT K -1 Θ(ωt), ≤ -θT Γ γ 1 +1 γ 1 +1 + θT Γ γ 2 +1 γ 2 +1 + Λ λmin(K) θ . Then, since θT Γ ≤ θT Γ γ 1 +1 and, by Hölder's inequality, θT Γ ≤ n γ 2 -1 2(γ 2 +1) θT Γ γ 2 +1 hold for all γ2 + 1 > 2 > γ1 + 1 > 0, V is bounded as follows V (t) ≤ - θT Γ γ 1 +1 + n 1-γ 2 2(γ 2 +1) θT Γ γ 2 +1 + Λ λmin(K) θ , ≤ -σ γ 1 +1 Γ min c γ 1 +1 2 2 V γ 1 +1 2 (t) - σ γ 2 +1 Γ min c γ 2 +1 2 2 n γ 2 -1 2(γ 2 +1) V γ 2 +1 2 (t) + 2 √ c1 c2 ΛV 1 2 (t). (20) Let us introduce the following inequalities V γ 2 +1 ≤ V γ 1 +1 ≤ V, ∀V ≤ 1, (21) V γ 2 +1 > V γ 1 +1 > V, ∀V > 1. (22) Hence, from ( 14), [START_REF] Dorato | Short-time stability in linear time-varying systems[END_REF] and the previous inequalities, when V > 1 it is obtained that V (t) ≤ -   σ γ 2 +1 Γ min (2λ min (K)) γ 2 +1 2 n γ 2 -1 2(γ 2 +1) - 2λmax(K) λ min (K) Λ   V γ 2 +1 2 (t), = -αV γ 2 +1 2 (t), θ(0) 1-γ 1 - σ γ 1 +1 Γ min (1 -δ)c γ 1 +1 2 2 (1 -γ 1 ) 2 t    1 1-γ 1 , (25) for all t < T ( θ(0)); while for all t≥T ( θ(0)), from (13), it is obtained that θ is bounded as in [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF], i.e. θ ≤ λmax(K)/λmin(K) min(c 1/2 1 , µ f ). Therefore, it is concluded that the solutions of system (9) are globally ultimate bounded with its trajectories satisfying the bound given by [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF]. Proof of Corollary 4: Assume, in the component-wise sense, that |(Γ T θ)i| ≥ |εi|, for all i = 1, n. Then, it implies that sign(Γ T θε) = sign(Γ T θ). Consider that for any x, x ∈ R and γ2 > 1 , the inequality |x+ x| γ 2 ≤ 2 γ 2 -1 (|x| γ 2 +|x| γ 2 ) holds [START_REF] Mitrinović | Grundlehren der mathematischen Wissenschaften[END_REF]. 
Thus, defining x = (Γ T θ)i -εi and x = εi, it follows that |(Γ T θ)i| γ 2 = |(Γ T θ)i -εi + εi| γ 2 , ≤ 2 γ 2 -1 |(Γ T θ)i -εi| γ 2 + |εi| γ 2 , ⇒ |(Γ T θ)i| γ 2 -2 γ 2 -1 |εi| γ 2 ≤ 2 γ 2 -1 |(Γ T θ)i -εi| γ 2 , for all i = 1, n, and then component-wisely -|Γ T θ -ε| γ 2 ≤ -2 1-γ 2 |Γ T θ| γ 2 + |ε| γ 2 , (26) holds for all γ2 > 1. Taking into account the previous inequality, this proof follows the same steps as the proof of Theorem 3, Corollary 2 and Corollary 3. Corollary 3 . 3 Let Assumption 1 be satisfied. If the following conditions hold: 1) λmin(K) > λ1; Figure 1 . 2 FT 12 Figure 1. Parameter Identification Noise-Free. The top graph shows the parameter identification for the case γ = 0 (FT algorithm) while the bottom graph for the case γ 1 = 0 and γ 2 = 1.5 (FxT algorithm), with θ(0) = (0, 1) T and θ(0) = (2, 3) T . Figure 2 . 2 Figure 2. Parameter Identification Error Disturbance-Free.The graph shows that for the FT algorithm (solid lines) the convergence time increases when the initial conditions of the error dynamics also increases, while the FxT algorithm depicts in addition to a faster rate of convergence a bounded settling time T 0 ( θ(0)) for any initial condition θ(0) ∈ R 2 . 2 FTFigure 3 . 23 Figure 3. Parameter Identification Error -Disturbance Case Figure 4 . 4 Figure 4. Parameter Identification Error Disturbance-Case. The top graph shows the parameter identification error for different values of γ ∈ (0, 1) (algorithm (4)), the bottom graph for the case γ 1 = 0 and γ 2 ∈ [1.5, 3.0], while the middle graph for the case γ 1 ∈ (0, 1) and γ 2 = 1.5 (algorithm (8)), with θ(0) = (0, 1) T and θ(0) = (10, 10) T . Γ min θ γ+1 + θT Γ |ε| γ sign(Γ T θ) σΓ max |ε| γ λmin(K) + Λ λmin(K) θ , ≤ -σ γ+1 Γ min (1 -δ) θ γ+1 -σ γ+1 Γ min δ θ γ+1 + σΓ max |ε| γ λmin(K) + Λ λmin(K) θ .where δ ∈ (0, 1). Since |ε| γ = m i=1 |εi| 2γ = ε γ 2γ , and by Hölder's inequality ε 2γ ≤ m 2-γ 4γ Table I PARAMETERS I OF THE AUTOMATIC THROTTLE VALVE ACTUATOR. Parameter Value i(t) 0.5 sin(2πf t)[A] f 60[Hz] R(t) cos(0.2t)[Ω] ψ(t) sin(0.1t)[V s] J 0.011[kgm 2 ] v 16.42 cs 0.01[N/m] fp 0.7[N m] fc 1.0[N m] fs 0.0037[N m] Note that even when a static relation between the measured output and parameters is considered, such a problem is related with identification of dynamical systems. A similar algorithm was previously presented in[START_REF] Ríos | Homogeneity based uniform stability analysis for time-varying systems[END_REF] for the adaptive state estimation problem; and the discontinuous case, i.e. γ = 0, for time-varying parameter identification without disturbances in[START_REF] Rueda-Escobedo | Discontinuous gradient algorithm for finite-time estimation of time-varying parameters[END_REF]. H. Ríos gratefully acknowledge the financial support from CONACyT 270504. This work was also supported in part by HoTSMoCE Inria associate team program, by the Government of Russian Federation (Grant 074-U01) and the Ministry of Education and Science of Russian Federation (Project 14.Z50.31.0031). that is negative definite for all λmin(K) > (n Γ min ) 2/(γ 2 +3) = λ1. Thus, for any θ(t) such that V ( θ(0)) > 1, the last inequality ensures V ( θ(t)) ≤ 1 for t ≥ T1 = 2/α(γ2 + 1). For the case when V ≤ 1, it follows that Let us assume that γ1 = 0. Hence, it follows that which is negative definite for all λmin(K) > ( λmax(K)Λ/σΓ min ) 2/3 = λ2. Then, for any θ(t) such that Therefore, V ( θ(t)) = 0, for all t ≥ 2(α(γ2 + 1) + β)/αβ(γ2 + 1), and all θ(t). 
Thus, based on Lemma 1 in [START_REF] Polyakov | Nonlinear feedback design for fixed-time stabilization of linear control systems[END_REF], the system (9) is GFxTS. Proof of Corollary 3: This proof follows the same steps as the previous proof except for the case when V ≤ 1. Let us consider [START_REF] Dorato | Short-time stability in linear time-varying systems[END_REF], i.e.
41,209
[ "20438", "739707" ]
[ "300693", "525219", "250023", "120930", "525219", "250023" ]
01470994
en
[ "spi", "info" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01470994/file/manuscrit-review-unmarked.pdf
Cyril Voyant email: [email protected] Gilles Notton Christophe Paoli Alexis Fouilloy Fabrice Motte Christophe Darras Uncertainties in global radiation time series forecasting using machine learning: The multilayer perceptron case Keywords: Time Series Forecasting, Processing, Artificial Neural Networks, interval, Energy Prediction, Stationarity *Correspondingauthor: Cyril Voyant come Introduction Solar radiation is one of the principal energy sources for physical, biological and chemical processes, occupying the most important role in many engineering applications [START_REF] Hoff | Modeling PV fleet output variability[END_REF]. The process of converting sunlight to electricity without combustion allows to create power without pollution. The major problem of such energy source is its intermittence and its stochastic character which make difficult their management into an electrical network [START_REF] Voyant | Multi-horizon solar radiation forecasting for Mediterranean locations using time series models[END_REF].Thereby, the development of forecasting models is necessary to ideally use this technology [START_REF] Lauret | A benchmarking of machine learning techniques for solar radiation forecasting in an insular context[END_REF]. By considering their effectiveness, it will be possible for example to identify the most optimal locations for developing a solar power project or to maintain the grid stability and security of a power management system [START_REF] Mellit | Artificial intelligence techniques for sizing photovoltaic systems: A review[END_REF]. Thus the solar energy forecasting is a process used to predict the amount of solar energy available for various time horizons [START_REF] Lorenz | Benchmarking of different approaches to forecast solar irradiance[END_REF]. Several methods have been developed by experts around the world and the mathematical formalism of Times Series (TS [START_REF] Join | Solar energy production: Short-term forecasting and risk management[END_REF]) has been often used for the short term forecasting (among 6 hours ahead) [START_REF] Lorenz | Benchmarking of different approaches to forecast solar irradiance[END_REF][START_REF] Wolff | Statistical Learning for Short-Term Photovoltaic Power Predictions[END_REF]. TS is a set of ordered numbers that measures some activities over time [START_REF] Gooijer | 25 years of time series forecasting[END_REF]. It is the historical record of global horizontal irradiance with measurements taken at equally spaced intervals with a consistency in the activity and the method of measurement. Some of the best predictors found in literature are Autoregressive and moving average [START_REF] Abrahart | Neural Network vs. 
ARMA Modelling: constructing benchmark case studies of river flow prediction[END_REF][START_REF] Mora-López | Multiplicative ARMA models to generate hourly series of global irradiation[END_REF][START_REF] Voyant | Numerical weather prediction (NWP) and hybrid ARMA/ANN model to predict global radiation[END_REF], Bayesian inferences [START_REF] Lauret | Bayesian neural network approach to short time load forecasting[END_REF][START_REF] Pole | Applied Bayesian forecasting and time series analysis[END_REF], Markov chains [START_REF] Logofet | The mathematics of Markov models: what Markov chains can really predict in forest successions[END_REF][START_REF] Kumar | Time series models (Grey-Markov, Grey Model with rolling mechanism and singular spectrum analysis) to forecast energy consumption in India[END_REF], k-Nearest-Neighbors predictors [START_REF] Paoli | Forecasting of preprocessed daily solar radiation time series using neural networks[END_REF], support vector machine [START_REF] Lauret | A benchmarking of machine learning techniques for solar radiation forecasting in an insular context[END_REF][START_REF] Kr̈omer | Support Vector Regression of multiple predictive models of downward short-wave radiation[END_REF], regression tree [START_REF] Tso | Predicting electricity energy consumption: A comparison of regression analysis, decision tree and neural networks[END_REF][START_REF] Lahouar | Day-ahead load forecast using random forest and expert input selection[END_REF], orartificial neural network (ANN) [START_REF] Mellit | Artificial intelligence techniques for sizing photovoltaic systems: A review[END_REF][START_REF] Sm | An ANN-based approach for predicting global radiation in locations with no direct measurement instrumentation[END_REF]. All these approaches are related to the machine learning application [START_REF] Aler | A Study of Machine Learning Techniques for Daily Solar Energy Forecasting Using Numerical Weather Models[END_REF]. The most often used is the last presented method: the artificial neural network and particularly the multilayer perceptron (MLP [START_REF] Costa | Improving generalization of MLPs with sliding mode control and the Levenberg-Marquardt algorithm[END_REF]). In the present study, we focus on this prediction method, the goal being to detail the uncertainties related to the global radiation prediction [START_REF] Ahlburg | Error measures and the choice of a forecast method[END_REF]. The paper is organized as follow: Section 2 describes the data and material needed to conduct our experiments. In the section 3, we propose to define the different component of the errorgenerated throughthe MLP used. These uncertainties can be decomposed into several components that will be explained and developed. In section 4,the results of the error decomposition will be exposed for fivemeteorological sites in order to quantify the reliability of the predictions. The last section will allow to draw conclusions about the present study. Data and material In this work, measured hourly horizontal global radiation data from meteorological ground stations are used to forecast global horizontal solar irradiation (GHI) for a specific horizon [START_REF] Voyant | Numerical weather prediction (NWP) and hybrid ARMA/ANN model to predict global radiation[END_REF]. 
All the measurements used are obtained from the French Meteorological Organization (Météo-France) data base and from measurement realized in the frame of the Tilos H2020 project (http://www.tiloshorizon.eu).Four sites are studied: Ajaccio, Bastia, Montpellier and Marseille in France.As for all experimental acquisitions, missing values are observed, here, this represents less than 2% of the data. A classical cleaning approach is then operated in order to identify and remove this data [START_REF] Paoli | Forecasting of preprocessed daily solar radiation time series using neural networks[END_REF]. Data In Corsica Island, the data used to build the models are GHI measured in the meteorological stations of Ajaccio (41°55'N, 8°44'E, 4m asl) and Bastia (42°42'N, 9°27'E, 10m asl). They are located near the Mediterranean Sea and nearby mountains (1000 m altitude at 40km from the sites). The data representing the global horizontal solar radiation were measured on an hourly basis from 1998 to 1999 (exactly two years). The two last studied stations are Montpellier (43.6°N and 3.9°E, 2 masl) and Marseille (43.4°N and 5.2°E, 5 masl) concerning the years 2008 and 2009. All these stations are equipped with pyranometers (CM 11 from Kipp&Zonen). The choice of these particular places is explained by their closed geographical and orographical configurations. These stations are located near the Mediterranean Sea and mountains. This specific geographical configuration of the four French meteorological stations makes cloudness difficult to forecast. Mediterranean climate is characterized by hot summers with abundant sunshine and mild, dry and clear winters. Irradiance nighttime values are not being used, the first morning data forecast are operated with the day before evening data. Prediction methodology We chose to develop error propagation in the GHI prediction for the most common used predictor: the MLP. The base of this model is the time series approach (TS). A TS x(t) can be defined by a linear or non-linear model called f n (see Equation 1where t = n,n-1,…,p+1,p with n, the number of observations and p the number of parameters of the model ; n ≫ p; h is the horizon of prediction and 𝜖 𝑡+ the committed error) [START_REF] Voyant | Optimization of an artificial neural network dedicated to the multivariate forecasting of daily global radiation[END_REF]. 𝑥(𝑡 + ) = 𝑓 𝑛 (𝑥(𝑡), 𝑥(𝑡 -1) … . , 𝑥(𝑡 -𝑝 + 1)) + 𝜖 𝑡+ Eq1 To estimate the 𝑓 𝑛 model, a stationarity hypothesis is often necessary. This condition usually implies a stable process [START_REF] Hornik | Multilayer feedforward networks are universal approximators[END_REF][START_REF] Cybenko | Approximation by superpositions of a sigmoidal function[END_REF]. This notion is directly linked to the fact that whether certain feature such as mean or variance change over time or remain constant. Previous studies [START_REF] Lauret | A benchmarking of machine learning techniques for solar radiation forecasting in an insular context[END_REF][START_REF] Voyant | Predictability of PV power grid performance on insular sites without weather stations : use of artificial neural networks[END_REF][START_REF] Voyant | Hybrid methodology for hourly global radiation forecasting in Mediterranean area[END_REF]show that the use of clear sky index (CSI) allows to make stationary the time series and so to correctly use the MLP forecasting. 
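As a small illustration of the time-series setup of Equation 1, the sketch below turns a series into pairs of p past values and the value to be forecast one step ahead; the synthetic random series stands in for the hourly clear sky index, and p = 4 and h = 1 are arbitrary choices, not the values selected by mutual information in the study.

```python
# A small synthetic sketch of the setup in Equation 1: each row of X contains
# the p most recent values of the series and y holds the value to be forecast
# at horizon h = 1.  The random series stands in for the hourly clear sky index.
import numpy as np

def lag_embedding(x, p, h=1):
    """Rows X[k] = (x(t), x(t-1), ..., x(t-p+1)); targets y[k] = x(t+h)."""
    X = np.column_stack([x[p - 1 - j: len(x) - h - j] for j in range(p)])
    y = x[p - 1 + h:]
    return X, y

rng = np.random.default_rng(0)
csi = 0.7 + 0.2 * rng.standard_normal(1000)     # synthetic stand-in for CSI(t)
X, y = lag_embedding(csi, p=4, h=1)
print(X.shape, y.shape)                          # (996, 4) (996,)
```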
Stationary process In previous studies [START_REF] Paoli | Solar Radiation Forecasting Using Ad-Hoc Time Series Preprocessing and Neural Networks[END_REF][START_REF] Paoli | Use of Exogenous Data to Improve Artificial Networks Dedicated to Daily Global Radiation Forecasting[END_REF], it was demonstrated that the clear sky index calculated with the simplified Solis model [START_REF] Ineichen | A broadband simplified version of the Solis clear sky model[END_REF] is the most reliable for our locations. The Solis model generates a clear sky hourly irradiation (CS) expressed by Eq. ( 2), the use of this model requires fitting parameter (g), extraterrestrial radiation (I 0 ), solar elevation (h) and total measured atmospheric optical depth (): 𝐶𝑆 𝑡 = 𝐼 0 𝑡 . 𝑒𝑥𝑝 -𝜏 𝑠𝑖𝑛 𝑔 𝑡 . 𝑠𝑖𝑛 ( 𝑡 ) Eq2 The simplified "Solis clear sky" model is based on radiative transfer calculations and the Lambert-Beer relation [START_REF] Ineichen | A broadband simplified version of the Solis clear sky model[END_REF] . The expression of the atmospheric transmittance is valid with polychromatic radiations, however when dealing with global radiation, the Lambert-Beer relation is only an approximation because of the back scattering effects. According to [START_REF] Mueller | Rethinking satellite-based solar irradiance modelling: The SOLIS clear-sky module[END_REF] this model remains a good fitting function of the global horizontal radiation. The new computed time series (CSI) can be directly used with the MLP forecasting and is described by the equation 3: 𝐶𝑆𝐼(𝑡) = 𝐺𝐻𝐼(𝑡)/𝐶𝑆 𝑡 Eq 3 MLPprediction Although a large range of different architectures of ANN is available [START_REF] Benghanem | ANN-based modelling and estimation of daily global solar radiation data: A case study[END_REF], MultiLayer Perceptron (MLP) remains the most popular [START_REF] Kalogirou | Artificial neural networks in renewable energy systems applications: a review[END_REF]. In particular, feed-forward MLP networks with two layers (one hidden layer and one output layer) are often used for modeling and forecasting time series. Several studies [START_REF] Mellit | Artificial intelligence techniques for sizing photovoltaic systems: A review[END_REF][START_REF] Diazrobles | A hybrid ARIMA and artificial neural networks model to forecast particulate matter in urban areas: The case of Temuco, Chile[END_REF][START_REF] Zhang | Forecasting with artificial neural networks: The state of the art[END_REF]validated this approach based on ANN for the non-linear modeling of time series. To forecast the time series, a fixed number p of past values are set as inputs of the MLP, the output is the prediction of a future value [START_REF] Crone | Stepwise Selection of Artificial Neural Networks Models for Time Series Prediction[END_REF]. Considering the initial time series equation (Equation 1), this equation can be adapted to the non-linear case of one hidden layer MLP with b related to the biases, f and g to the activation function of the output and hidden layer, and to the weights. The number of hidden nodes (H) and the number of the input node (In) allow to detail this transformation. The number of layer 1 and 2 is given in superscript. (Equation 4): 𝐶𝑆𝐼 (𝑡 + 1) = 𝑓( 𝑦 𝑖 𝐻 𝑖=1 𝜔 𝑖 2 + 𝑏 2 )with𝑦 𝑖 = 𝑔( 𝐶𝑆𝐼(𝑡 -𝑗 + 1) 𝐼𝑛 𝑗 =1 𝜔 𝑖𝑗 1 + 𝑏 𝑖 1 ) Eq 4 In the presented study, the MLP has been computed with the Matlab© software and its Neural Network toolbox. 
The characteristics chosen and related to previous work are the following: one hidden layer, the activation functions are the continuously and differentiable hyperbolic tangent (hidden) and linear (output), the Levenberg-Marquardt learning algorithm with a max fail parameter before stopping training equal to 5 (early stopping tool allowing the stop the learning when the error increases consecutively 5 times). This algorithm is an approximation to the Newton's method. The prediction of the GHI is obtained using the equation: 𝐺𝐻𝐼 𝑡 + 1 = 𝐶𝑆𝐼 𝑡 + 1 . 𝐶𝑆(𝑡 + 1) Eq 5 To customize the input layer of the MLP we choose the use of the mutual information to determine In as described in [START_REF] Lauret | A benchmarking of machine learning techniques for solar radiation forecasting in an insular context[END_REF][START_REF] Huang | Effective feature selection scheme using mutual information[END_REF][START_REF] Jiang | Mutual information algorithms[END_REF]. According the results obtained in these papers, we use H equal to In for all the experiments conducted in this study.Furthermore in order to improve the learning of the MLP, it is a common practice to filter out the data removing night hours. Indeed we consider only periods between sunrise and sunset [START_REF] Badescu | Modeling solar radiation at the earth's surface: recent advances[END_REF][START_REF] Paulescu | Weather Modeling and Forecasting of PV Systems Operation[END_REF]. We have chosen to apply a selection criterion based on the solar zenith angle (SZA): solar radiation data for which the solar zenith angle is greater than 80° have been removed [START_REF] Lauret | A benchmarking of machine learning techniques for solar radiation forecasting in an insular context[END_REF]. This transformation is equivalent to a filtering related to the solar elevation angle lower than 10°. All the simulations are related to the Matlab software and NNtoolbox use. Error decomposition In these section, we propose to decompose the error considering four kinds of uncertainties: the error due to the measurement, the error due to the variability of the time series, the error related to the machine learning uncertainty and the error related to the horizon. Error due to the measurement (𝝈 𝒎𝒆𝒂𝒔 ) In experimental sciences, there is no perfect measure. Experiments can only be marred with significant errors more or less depending on the selected protocol or the quality measuring instruments [START_REF] Ahlburg | Error measures and the choice of a forecast method[END_REF]. Assess the uncertainty measurement is a complex task that is the subject of a complete branch called metrology. The uncertainty associated with a measurement result allows to provide a quantitative indication of the quality of this result [START_REF] Hopson | Assessing the Ensemble Spread-Error Relationship[END_REF]. In thissection, we will show that it is possible to quantify the impact of a measurement error (or precision) on the MLP output. A MLP with 2 inputs and 2 hidden neurons (H=2 and In=2) is considered here in order to understand the methodology [START_REF] Voyant | Multi-horizon solar radiation forecasting for Mediterranean locations using time series models[END_REF]. 
A MLP with two inputs and two hidden neurons (H = 2 and In = 2) is considered here in order to explain the methodology [Voyant et al.]. The output of this MLP can be written as:

$\widehat{CSI}(t+1) = \sum_{i=1}^{H} g\!\left(\sum_{j=1}^{In} CSI(t-j+1)\,\omega_{ij}^{1} + b_i^{1}\right)\omega_i^{2} + b^{2} = \omega_1^{2}\tanh\!\big(CSI(t)\,\omega_{11}^{1} + CSI(t-1)\,\omega_{12}^{1} + b_1^{1}\big) + \omega_2^{2}\tanh\!\big(CSI(t)\,\omega_{21}^{1} + CSI(t-1)\,\omega_{22}^{1} + b_2^{1}\big) + b^{2}$   (Eq. 6)

In order to calculate the propagation of the measurement uncertainty through the MLP, we use two methods: (i) the classical differentiation of the variables, and (ii) the differentiation of $\log(\widehat{CSI}(t+1))$.

With the classical differentiation of the variables, the measurement error reads:

$\sigma_{meas1}^{2}\big(\widehat{CSI}(t+1)\big) = \left(\frac{\partial \widehat{CSI}(t+1)}{\partial CSI(t)}\right)^{2}\sigma^{2}\big(CSI(t)\big) + \left(\frac{\partial \widehat{CSI}(t+1)}{\partial CSI(t-1)}\right)^{2}\sigma^{2}\big(CSI(t-1)\big) = u + v$   (Eq. 7)

where u and v denote the two terms of this sum (Eqs. 8 and 9). From these two parameters, and considering that $\sigma^{2}(CSI(t-1))$ and $\sigma^{2}(CSI(t))$ are equivalent and equal to the pyranometer uncertainty $\sigma^{2}(CSI)$, the global error is:

$\sigma_{meas1}^{2}\big(\widehat{CSI}(t+1)\big) = \sigma^{2}(CSI)\Big[\big(\omega_1^{2}\omega_{11}^{1}\Im_1 + \omega_2^{2}\omega_{21}^{1}\Im_2\big)^{2} + \big(\omega_1^{2}\omega_{12}^{1}\Im_1 + \omega_2^{2}\omega_{22}^{1}\Im_2\big)^{2}\Big]$   (Eq. 10)

with

$\Im_n = \tanh\!\big(CSI(t)\,\omega_{n1}^{1} + CSI(t-1)\,\omega_{n2}^{1} + b_n^{1}\big)^{2} - 1, \quad n \in \{1,\dots,H\}$   (Eq. 11)

It follows that the uncertainty of the MLP related to the measurement error is:

$\sigma_{meas1}\big(\widehat{CSI}(t+1)\big) = \sigma(CSI)\Big[\big(\omega_1^{2}\omega_{11}^{1}\Im_1 + \omega_2^{2}\omega_{21}^{1}\Im_2\big)^{2} + \big(\omega_1^{2}\omega_{12}^{1}\Im_1 + \omega_2^{2}\omega_{22}^{1}\Im_2\big)^{2}\Big]^{1/2}$   (Eq. 12)

The maximum value is reached when the hidden-node terms satisfy $\Im_i = 1\ \forall i$; in this case we obtain:

$\sigma_{meas1,max}\big(\widehat{CSI}(t+1)\big) = \sigma(CSI)\Big[\big(\omega_1^{2}\omega_{11}^{1} + \omega_2^{2}\omega_{21}^{1}\big)^{2} + \big(\omega_1^{2}\omega_{12}^{1} + \omega_2^{2}\omega_{22}^{1}\big)^{2}\Big]^{1/2}$   (Eq. 13)

Note that this formalism holds only under the hypothesis of orthogonal (independent) inputs; without this approximation the computation is intractable. The generalization to H hidden neurons and In input nodes gives:

$0 \le \sigma_{meas1}\big(\widehat{CSI}(t+1)\big) = \sigma(CSI)\left[\sum_{j=1}^{In}\left(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\Im_i\right)^{2}\right]^{1/2} \le \sigma(CSI)\left[\sum_{j=1}^{In}\left(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\right)^{2}\right]^{1/2}$   (Eq. 14)

The second method is based on the differentiation of $\log(\widehat{CSI}(t+1))$. It is simpler but less accurate, because it does not take error compensation into account. In this case the uncertainty $\sigma_{meas2}$ is obtained from:

$d\log\big(\widehat{CSI}(t+1)\big) = \frac{\sigma_{meas2}\big(\widehat{CSI}(t+1)\big)}{\widehat{CSI}(t+1)} = \frac{\sigma(CSI)\big(\omega_1^{2}\Im_1(\omega_{11}^{1}+\omega_{12}^{1}) + \omega_2^{2}\Im_2(\omega_{21}^{1}+\omega_{22}^{1})\big)}{\widehat{CSI}(t+1)}$   (Eq. 15)

thus:

$\sigma_{meas2}\big(\widehat{CSI}(t+1)\big) = \sigma(CSI)\big(\omega_1^{2}\Im_1(\omega_{11}^{1}+\omega_{12}^{1}) + \omega_2^{2}\Im_2(\omega_{21}^{1}+\omega_{22}^{1})\big) = \sigma(CSI)\big(\omega_1^{2}\omega_{11}^{1}\Im_1 + \omega_2^{2}\omega_{21}^{1}\Im_2 + \omega_1^{2}\omega_{12}^{1}\Im_1 + \omega_2^{2}\omega_{22}^{1}\Im_2\big)$   (Eq. 16)

For H hidden neurons and In input nodes, the generalization is:

$\sigma_{meas2}\big(\widehat{CSI}(t+1)\big) = \sigma(CSI)\sum_{j=1}^{In}\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\Im_i$   (Eq. 17)

Comparing Eqs. (14) and (17), we see that $\sigma_{meas1}(\widehat{CSI}(t+1)) \le \sigma_{meas2}(\widehat{CSI}(t+1))$. In order to account for the error compensation, which only appears in the $\sigma_{meas1}$ computation, we consider the first form preferable; in the following we use the measurement uncertainty given by Eq. (18) (with H the number of hidden neurons and In the number of input nodes):

$0 \le \sigma_{meas}\big(GHI(t+1)\big) = \sigma(GHI)\left[\sum_{j=1}^{In}\left(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\Im_i\right)^{2}\right]^{1/2} \le \sigma(GHI)\left[\sum_{j=1}^{In}\left(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\right)^{2}\right]^{1/2}$   (Eq. 18)

Error due to the quick fluctuations of the time series
We now define the error due to the variability of the time series. The quick fluctuations of the series are very difficult to predict and generate prediction errors. Two ways of quantifying this kind of uncertainty are considered: the first will be called the inherent error, the second the variability error.
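The upper bound of Eq. (18) only involves the trained weights and the sensor accuracy, so it can be evaluated in a few lines. The sketch below assumes weight matrices stored in scikit-learn's convention (the study used Matlab) and returns the uncertainty as a fraction of the measured value.

```python
import numpy as np

def sigma_meas_bound(w_hidden, w_out, sigma_rel=0.01):
    """Upper bound of Eq. (18), obtained with |I_i| = 1:
    sigma_meas <= sigma * sqrt( sum_j ( sum_i w_out[i] * w_hidden[j, i] )**2 ).
    w_hidden: (In, H) input-to-hidden weights (scikit-learn coefs_[0]);
    w_out:    (H,)   hidden-to-output weights (coefs_[1].ravel());
    sigma_rel: relative pyranometer accuracy (1 % in the text)."""
    per_input = w_hidden @ w_out            # sum over hidden units, one value per input
    return sigma_rel * float(np.sqrt(np.sum(per_input ** 2)))

# With the MLP of the first sketch:
#   rel_uncertainty = sigma_meas_bound(mlp.coefs_[0], mlp.coefs_[1].ravel())
#   sigma_meas = rel_uncertainty * ghi_forecast
```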
Inherent error (σ_inh)
The Cartier-Perrin theorem [Join et al.], stated in the language of nonstandard analysis, provides a rigorous basis for the existence of trends in time series [Chen et al.]. The time series GHI(t) may then be decomposed as a sum, where $GHI_{trend}(t)$ is the trend and $GHI_{fluc}(t)$ is a "quickly fluctuating" function around 0, also called the inherent noise of the time series:

$GHI(t) = GHI_{trend}(t) + GHI_{fluc}(t)$   (Eq. 19)

The nature of these quick fluctuations is left unknown, and nothing prevents us from assuming that $GHI_{fluc}(t)$ is random and/or fractal. Forecasting the trend is possible over a "short" time interval under the assumption of no abrupt changes, whereas forecasting the fluctuation term at a given time instant is meaningless and should be abandoned. Based on this definition of the time series, an ideal prediction can be obtained from a trend estimate $GHI_{trend}(t)$. In our case, we compute it with a classical non-linear fit based on cubic-spline interpolation, which leads to a tridiagonal linear system [Voyant et al.]; this system is solved for the coefficients of the cubic polynomials that make up the interpolating spline. In view of the Cartier-Perrin theorem, the perfect predictor describes the trend while the quick fluctuations remain unmodelled, and the related error is the lowest error a predictor can achieve. The inherent error is therefore computed with Eq. (20):

$\sigma_{inh} = GHI(t+1)\cdot nRMSE\big(GHI_{trend}(t) - GHI(t)\big)$, with $nRMSE = \sqrt{E\big[(\widehat{GHI}-GHI)^{2}\big]}\,\big/\,E[GHI]$   (Eq. 20)

Variability error (σ_var)
The knowledge of the volatility of the series at time t provides a priori information about the variability and, hence, about the expected error of machine learning predictions [Gneiting]. A first form of volatility [Wilks] can be defined as $Vol1(t) = |CSI(t) - CSI(t-1)|$. Another form, Vol2(t), involves the logarithm in order to define the log return, which has the convenient property of log-normality [Divya et al.]. These parameters are constructed to capture the intermittency of the CSI series, represented by $GHI_{fluc}$ in Eq. (19):

$Vol2(t) = \big|\log CSI(t) - \log CSI(t-1)\big|$   (Eq. 21)

In the following, Vol2 is used to compute the volatility. Note that, in order to account for the daily seasonality of the series, which would otherwise bias the results, the volatility is computed on the CSI and not on the GHI. In previous studies [Join et al.; Divya et al.], it has been shown that the volatility is linked to the prediction error ($nRMSE = f(Vol2(t))$). With this argument, the variability error (σ_var) is defined by Eq. (22), where g is a non-linear function depending on the considered site:

$\sigma_{var} = GHI(t+1)\cdot g\big(Vol2(t)\big)$   (Eq. 22)
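A minimal NumPy/SciPy sketch of the two quantities defined above: the trend is obtained here with a cubic spline fitted on sub-sampled knots (the knot spacing is an arbitrary smoothing choice, since the exact trend-extraction settings are not given in the text), and the volatility is the absolute log return of the clear-sky index.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def inherent_error_coeff(ghi, knot_step=6):
    """Dimensionless nRMSE between a cubic-spline trend and the series (Eq. 20);
    multiply by GHI(t+1) to obtain sigma_inh. knot_step controls the smoothing."""
    ghi = np.asarray(ghi, dtype=float)
    t = np.arange(len(ghi), dtype=float)
    trend = CubicSpline(t[::knot_step], ghi[::knot_step])(t)
    return float(np.sqrt(np.mean((trend - ghi) ** 2)) / np.mean(ghi))

def log_return_volatility(csi):
    """Vol2(t) = |log CSI(t) - log CSI(t-1)| (Eq. 21), computed on the clear-sky index."""
    csi = np.clip(np.asarray(csi, dtype=float), 1e-6, None)
    return np.abs(np.diff(np.log(csi)))

# Example on a synthetic irradiance-like profile
rng = np.random.default_rng(2)
ghi = 600 * np.abs(np.sin(np.linspace(0, 12 * np.pi, 600))) + 30 * rng.random(600)
print(inherent_error_coeff(ghi))
# sigma_var would then be ghi_forecast * g(vol2), with g a site-specific fit (Eq. 22).
```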
Error related to the machine learning uncertainty
Another type of error generated during the prediction is related to the machine learning approach itself [Join et al.]. In supervised learning and statistical learning theory, the out-of-sample error measures how accurately an algorithm predicts outcome values for new data. Because learning algorithms are evaluated on finite samples [Andersson et al.], the evaluation of a learning algorithm may be sensitive to sampling error [McAleer et al.]. As a result, measurements of the prediction error on the current data may not provide much information about the predictive ability on new data. The generalization error can be minimized by avoiding overfitting in the learning algorithm. In the next subsections, the various kinds of uncertainty generated by a classical MLP prediction of GHI are described.

Sampling error (σ_samp)
The MLP parameters are determined using the pairs of input and output examples contained in the training data; this holds for any machine learning method. Once the model is fitted, it can be evaluated on a test data set. In our context, $\mathcal{D} = \{\mathbf{x}_i, y_i\}_{i=1}^{n}$ represents the training data set. The vector $\mathbf{x}_i$ contains the p past values of the clear sky index (taken as inputs of the model) for training sample i, and $y_i$ is the corresponding value of the clear sky index for the considered horizon h (for a one-hour horizon, $y_i = CSI(t_i+1)$). The input vectors of all n training cases can be aggregated in the so-called n×p design matrix X, and the corresponding model outputs (or targets) are collected in the vector y, so we can write D = {X, y}. To overcome the sampling problem, a k-fold methodology is often used [Divya et al.]. In k-fold cross-validation, the original sample is randomly partitioned into k equally sized subsamples [Wiens et al.]; this procedure should be employed to estimate the accuracy of the induced model, because the accuracy computed on the training data is generally too optimistic [Wong]. The cross-validation process is repeated k times (the folds), so that each of the k subsamples is used exactly once as validation data. The k results from the folds can then be averaged (or otherwise combined) to produce a single estimate, and the standard deviation (σ_samp) of the results can be computed.

Initialization learning error (σ_ini)
With MLPs, many methods exist to initialize the weights before back-propagation training. The most common is certainly random initialization, repeated many times, keeping only the initialization that minimizes the prediction error on a test sample. The problem is that a global minimum may still not be reached.
Gradient-based minimization of the cost function during the learning phase is relatively fast, but for complex problems the training may find local minima of the error function that are far from the global minimum [Ding et al.]. To take this phenomenon into account, we define σ_ini as the standard deviation of the outputs of 50 trainings performed with 50 random initializations (an arbitrary choice), as described by Eq. (24) (for l = 50 random initializations). The underlying assumption is that the larger the number of local minima, the harder it is to reach the global minimum.

Horizon error (σ_hor)
Clearly, the longer the prediction horizon, the less accurate the prediction: the predictability of GHI is markedly stronger at short horizons than at long ones. Anyone plotting the autocorrelation of a GHI time series observes that the link between GHI(t) and GHI(t+i) (with i > 1) decreases once GHI is made stationary and deseasonalized. This loss of correlation is more generally explained by the Hurst exponent or the Lyapunov horizon [Ding et al.]. To take this effect into account, we compute σ_hor(h) from the error observed for each horizon on a test sample, as described by Eq. (25).

Global error of prediction (σ_tot)
Under our assumption, all the previous σ terms are independent random variables that are normally distributed (and therefore also jointly so); their sum is then also normally distributed, and the global form of the standard deviation σ_tot(t+1) becomes (here the quick fluctuations are taken into account through σ_var, but σ_inh can also be used):

$\sigma_{tot}(t+1) = \sqrt{\sigma_{meas}^{2} + \sigma_{samp}(t+1)^{2} + \sigma_{ini}(t+1)^{2} + \sigma_{var}(t+1)^{2}}$   (Eq. 26)

Considering that the variability persists over a short horizon, $\sigma_{var}(t) = \sigma_{var}(t+1)$, thus:

$\sigma_{tot}(t+1) = \sqrt{\sigma_{meas}^{2} + \sigma_{samp}(t+1)^{2} + \sigma_{ini}(t+1)^{2} + \sigma_{var}(t)^{2}}$   (Eq. 27)

with $\sigma_{meas} = \sigma(CSI)\left[\sum_{j=1}^{In}\left(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1}\right)^{2}\right]^{1/2}$, σ_samp and σ_ini computed respectively with the k-fold procedure and the 50 random initializations, and $\sigma_{var} = GHI(t+1)\cdot g(Vol2(t))$. For an easier computation, σ_var(t) can also be replaced by σ_inh (Eq. 20); the result is less robust, but it no longer depends on the instant of the prediction:

$\sigma_{tot}(t+1) = \sqrt{\sigma_{meas}^{2} + \sigma_{samp}(t+1)^{2} + \sigma_{ini}(t+1)^{2} + \sigma_{inh}^{2}}$   (Eq. 28)
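σ_samp and σ_ini are purely empirical spreads, so they can be estimated by re-fitting the model on resampled folds and with different random initializations. The sketch below does this with scikit-learn (10 initializations instead of the 50 used in the study, to keep the example fast) and then combines the terms in quadrature as in Eqs. (27)-(28).

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def _mlp(seed, hidden):
    return MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                        solver="lbfgs", max_iter=2000, random_state=seed)

def ml_uncertainty(X, y, x_new, n_splits=5, n_init=10, hidden=8):
    """Standard deviation of the forecasts for one input x_new, over k folds
    (sigma_samp) and over repeated random initializations (sigma_ini)."""
    fold_preds, init_preds = [], []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        m = _mlp(0, hidden).fit(X[train_idx], y[train_idx])
        fold_preds.append(m.predict(x_new)[0])
    for seed in range(n_init):
        m = _mlp(seed, hidden).fit(X, y)
        init_preds.append(m.predict(x_new)[0])
    return float(np.std(fold_preds)), float(np.std(init_preds))

def sigma_total(s_meas, s_samp, s_ini, s_ts):
    """Quadratic combination of the independent error terms (Eqs. 27-28);
    s_ts is either sigma_var(t) or sigma_inh."""
    return float(np.sqrt(s_meas**2 + s_samp**2 + s_ini**2 + s_ts**2))

# Usage with the (X, y) pairs of the first sketch:
#   s_samp, s_ini = ml_uncertainty(X, y, X[-1:])
#   s_tot = sigma_total(s_meas=0.01, s_samp=s_samp, s_ini=s_ini, s_ts=0.12)
```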
It is then possible to define a prediction band that takes all the uncertainties into account (Eq. 29):

$GHI(t+1) = \widehat{GHI}_{MLP}(t+1) \pm \sigma_{tot}(t+1)$   (Eq. 29)

Such prediction intervals have often been proposed in the literature [Join et al.; David et al.; Trapero et al.]; they rely either on the machine learning uncertainty $\sigma_{ML}(t+1)^{2} = \sigma_{samp}(t+1)^{2} + \sigma_{ini}(t+1)^{2}$ [David et al.; Trapero et al.] or on the volatility σ_var(t) [Join et al.], but rarely on both kinds of uncertainty, and never on σ_meas. Note that with other machine learning methods the term σ_ini can be equal to zero (e.g. support vector regression, regression trees, etc.). The ideal case would be to systematically propose a prediction interval related to the three sorts of uncertainty (with $\sigma_{TS} = \sigma_{var}(t)$ or $\sigma_{inh}$, depending on the desired reliability):

$\sigma_{tot}(t+1) = \sqrt{\sigma_{meas}^{2} + \sigma_{ML}(t+1)^{2} + \sigma_{TS}(t)^{2}}$   (Eq. 30)

Now, taking the prediction horizon into account, we define the new global uncertainty with Eq. (31), where $\sigma_{hor}(h) = GHI(t+h)\cdot\alpha(h)$:

$\sigma_{tot}(t+h) = \sqrt{\sigma_{meas}^{2} + \sigma_{ML}(t+1)^{2} + \sigma_{TS}^{2} + \sigma_{hor}(h)^{2}}$   (Eq. 31)

Results
All the previous uncertainty estimations are computed for the four sites: Ajaccio, Bastia, Montpellier and Marseille. At the end of this section we are able to propose a global prediction interval for all predictions and locations.

Computing of σ_meas
Performance ratio or performance index calculations are more relevant when they are based on accurate, independent data from a pyranometer than when they are based on a reference cell with lower accuracy and the same inherent flaws as the panel itself. The pyranometers used for the four meteorological stations measure the global horizontal solar irradiance with 1% accuracy; in Eq. (12), σ(GHI) can therefore be taken as 1%·GHI(t+1). The weight term $\big[\sum_{j=1}^{In}(\sum_{i=1}^{H}\omega_i^{2}\omega_{ij}^{1})^{2}\big]^{1/2}$, computed for the four sites and for 50 simulations (i.e. 50 MLP configurations per site), is 1.02 ± 0.3 (average value with a 95% confidence interval). For each site the results are very close and contribute little to the global uncertainty. Note that the number of inputs or of hidden nodes does not (or only weakly) modify the result concerning σ_meas(GHI(t+1)). The uncertainty related to the measurement error is calculated using Eq. (18), and we obtain:

$\sigma_{meas}\big(GHI(t+1)\big) = 1.02\%\cdot GHI(t+1) \approx 1\%\cdot GHI(t+1)$   (Eq. 32)

Since the value of σ_meas is consistent for all the sites, in the following we only consider Eq. (31) for each location.

Computing of σ_var and σ_inh
In a previous study concerning three insular sites (Corsica, Guadeloupe and Réunion) [Divya et al.], it was shown that the absolute log return is correlated with the forecasting error obtained with an MLP prediction in Ajaccio, as described in Figure 1.
σ_inh estimates the uncertainty related to the studied time series; based on the definition of $GHI_{trend}(t)$ (Eq. 20), the values of $nRMSE(GHI_{trend}(t) - GHI(t))$ obtained for each station are reported in Table 1. The values are different for each site and must be considered separately: σ_inh could be approximated for all locations by $\sigma_{inh} = 12.2\%\cdot GHI(t+1)$, but in order to customize the prediction band it is necessary to use the value related to each site.

Computing of σ_ML (σ_samp and σ_ini)
The error components σ_samp and σ_ini are computed together. For each site, the weight-learning phase is run 50 times and 50 different training sets are generated (see the k-fold methodology in Section 3.4); Table 2 reports $\sqrt{\sigma_{samp}^{2}+\sigma_{ini}^{2}}$ for all the studied sites (in Wh/m²). Figure 2 shows the profile of the GHI together with $\sqrt{\sigma_{samp}^{2}+\sigma_{ini}^{2}}$: when there is no cloud occurrence this parameter is very low, but it can reach 350 Wh/m² under other conditions. In Figure 3, the maximum and minimum outputs predicted by the 50 trained MLPs in summer for Ajaccio are reported and compared with the measurements.

Computing of σ_hor
The estimation of this error component depends on the chosen time horizon, as described in Section 3. The linear fit of Figure 4 (R² ≈ 0.95) gives:

$\sigma_{hor}(h) = GHI(t+h)\cdot(0.0439\,h + 0.9561)$   (Eq. 33)

In the following, for each site, α(h) is computed from the data of the studied site.

Computing of σ_tot
The previous components allow the global uncertainty to be calculated and two prediction bands to be proposed: UB for the upper band and LB for the lower band [McAleer et al.]. Thus, the quality of the prediction can be defined by the triplet $\{\widehat{GHI}(t+h);\, LB;\, UB\}$ [Join et al.]. We can also estimate the reliability of the prediction, considering that the prediction is efficient when UB−LB is much lower than $\widehat{GHI}(t+h)$ and inefficient when UB−LB is equal to or larger than it. From this hypothesis, the reliability is related to the ratio $(UB(t+1)-LB(t+1))/\widehat{GHI}(t+h)$: the lower this parameter, the more efficient the prediction. We construct a reliability index r(t) between 0 and 1, with r(t) = 0 whenever $(UB(t+1)-LB(t+1))/\widehat{GHI}(t+h) > 1$, i.e. when the prediction is not trustworthy. The final prediction becomes:

$LB = \sqrt{\sigma_{meas}^{2} + \big(\widehat{GHI}_{min}(t+h) - \widehat{GHI}(t+h)\big)^{2} + \sigma_{inh}^{2} + \sigma_{hor}(h)^{2}}$   (Eq. 34)

$UB = \sqrt{\sigma_{meas}^{2} + \big(\widehat{GHI}_{max}(t+h) - \widehat{GHI}(t+h)\big)^{2} + \sigma_{inh}^{2} + \sigma_{hor}(h)^{2}}$   (Eq. 35)

with $\sigma_{meas}(GHI(t+1)) = 1\%\cdot GHI(t+1)$, $\sigma_{inh} = GHI(t+1)\cdot nRMSE(GHI_{trend}(t)-GHI(t))$, $\sigma_{hor}(h) = GHI(t+h)\cdot\alpha(h)$, and $\widehat{GHI}_{min/max}(t+h)$ the minimum and maximum values of the 50 predictions generated with the 50 simulations.

Figure 5 shows, for Ajaccio, an example of the prediction bands considering all the kinds of uncertainty for the horizon h = 1 hour (the solid line represents the measurement and the dashed lines the upper and lower bands associated with each kind of uncertainty). We can see that σ_meas is the least significant parameter for the band construction, and that the coupling of σ_inh and σ_ML (related to σ_samp and σ_ini) must be considered for a good definition of the prediction interval. For the other sites the curves are similar and bring no additional information. In Figure 6, the top plot (same prediction configuration as previously) compares the average prediction $\widehat{GHI}(t+h)$ (marks) with the GHI measurement (line); the bottom plot shows the associated reliability index r(t). When the variability is low (the first two days, samples 3711 to 3726), the reliability is high (close to 70%); when clouds occur, the value is much lower and can reach 0%.

Table 1. Values of the σ_inh uncertainty coefficient (σ_inh in %, per site).
Table 2. Value of $\sqrt{\sigma_{samp}^{2}+\sigma_{ini}^{2}}$ for all the studied sites (in Wh/m²).
Figure 1. Link between volatility (absolute log return) and prediction error (nRMSE) for Ajaccio.
Figure 2. GHI measured and uncertainties related to initialization and sampling for Ajaccio.
Figure 3. Representation of the sampling and initialization uncertainties on the GHI prediction for Ajaccio.
Figure 4. Correlation between horizon (in hours) and α(h).
Figure 5. Uncertainty in the GHI predictions for the horizon h = 1 for Ajaccio.
Figure 6. Comparison, for the horizon h = 1 at Ajaccio, of GHI predictions (marks) and GHI measurements (line) on top, and the associated reliability index at the bottom.

Conclusions
In this paper we have shown that it is possible to compute a prediction band in the context of global radiation time series forecasting using machine learning. For a popular machine learning technique, the multilayer perceptron, we have defined four kinds of uncertainty: the error due to the measurement, the variability of the time series, the machine learning uncertainty (initialization and sampling) and the error related to the horizon. In the literature, these kinds of uncertainty are rarely studied together, and σ_meas never is. We have also defined a reliability index, which could be very useful for a grid manager in order to assess the validity of the predictions. The described method has been successfully applied to four meteorological stations in the Mediterranean area. In practice, it is certainly not necessary to take all the proposed components into account; the ranking of the different uncertainty terms is σ_meas (~1%) < σ_ML (~5%) < σ_inh (~10%) < σ_hor (5–20%). For a one-time-step horizon, considering only σ_ML and σ_inh seems sufficient; for a larger horizon (>3 h) it is essential to add the horizon component σ_hor. Note that for predictors such as SVM, ARMA, Gaussian processes or regression trees (and derived methods), σ_ML is easily obtained by computing σ_samp only. We are confident that the approach can be generalized to other sites and other machine learning tools; in future work, we will apply the methodology to other time granularities and to predictors such as SVM, regression trees or random forests.
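Putting the pieces together, the band of Eqs. (34)-(35) and the reliability index can be evaluated as below. The linear clipping used for r(t) is an assumption consistent with the limiting behaviours stated in the text (r = 0 when the band is wider than the forecast), and the numbers are only illustrative orders of magnitude, not measured values.

```python
import numpy as np

def prediction_band(ghi_hat, ghi_min, ghi_max, s_meas, s_inh, s_hor):
    """Lower/upper bounds around the mean forecast, using the half-widths of Eqs. (34)-(35)."""
    lb = np.sqrt(s_meas**2 + (ghi_min - ghi_hat)**2 + s_inh**2 + s_hor**2)
    ub = np.sqrt(s_meas**2 + (ghi_max - ghi_hat)**2 + s_inh**2 + s_hor**2)
    return ghi_hat - lb, ghi_hat + ub

def reliability_index(lower, upper, ghi_hat):
    """r(t) in [0, 1]; r = 0 when (UB - LB)/GHI_hat > 1 (assumed linear in between)."""
    ratio = (upper - lower) / np.maximum(ghi_hat, 1e-6)
    return float(np.clip(1.0 - ratio, 0.0, 1.0))

# Illustrative one-step example (values in Wh/m2, spread of the 50 runs assumed)
ghi_hat, ghi_min, ghi_max = 550.0, 520.0, 585.0
s_meas, s_inh, s_hor = 0.01 * ghi_hat, 0.122 * ghi_hat, 0.0   # horizon term dropped at h = 1
low, up = prediction_band(ghi_hat, ghi_min, ghi_max, s_meas, s_inh, s_hor)
print(round(low), round(up), reliability_index(low, up, ghi_hat))
```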
In order to increase the interest of the LB and UB methodology, it would also be possible to bound the bands physically using a high-quality clear-sky model: the clear-sky global solar irradiation would give the maximum value of the band, and the diffuse part of the solar irradiation its minimum value.

Acknowledgment
This work is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 646529 through the TILOS project (Technology Innovation for the Local Scale, Optimum Integration of Battery Energy Storage). The data sets were provided by the CNRS UMR SPE 6134 laboratory.
40,420
[ "9456", "316" ]
[ "107805", "843", "843", "843", "843", "843", "843" ]
01004932
en
[ "spi" ]
2024/03/04 23:41:46
2006
https://hal.science/hal-01004932/file/SGCGZ.pdf
F. Sánchez, J. A. García, Francisco Chinesta, Ll. Gascón, C. Zhang, Z. Liang, Biao Wang
A process performance index based on gate-distance and incubation time for the optimization of gate locations in liquid composite molding processes
Keywords: C. Computational modelling, E. Resin transfer moulding (RTM), E. Resin flow, E. Cure

Introduction
Liquid Molding Processes and, particularly, Resin Transfer Molding (RTM) are being widely used in the manufacturing of fiber-reinforced composite materials. These processes are based on the impregnation of the preform reinforcement with a low-viscosity resin, from which the composite part is obtained after the curing reaction (see Fig. 1). Most applications of the process require a short manufacturing cycle time and low production costs. An appropriate design of the process can be carried out with virtual manufacturing tools such as computer simulations. Many of the final qualities of the piece are determined during the injection and curing stages. An accurate prediction of the in-mold flow pattern, pressure distribution, temperature and curing profile of the resin using simulation techniques allows one to optimize the process and, hence, to improve the final properties of the manufactured part. The design variables (process inputs), such as injection pressure, injection flow rate, gate and vent locations, mold and resin temperatures, etc., are introduced in the optimization models. Based on the manufacturing objectives, the optimization algorithms determine iteratively the optimal values of the process parameters. Nevertheless, due to the increasing complexity of manufactured parts and to the time required to simulate many of the cases, such as the three-dimensional non-isothermal filling, the simulation times involved in optimization become excessive. Hence, the introduction of efficient and accurate simulation tools within the optimization algorithms is a key issue in composites manufacturing.

RTM mold filling simulation
The resin flow through a porous medium can be modelled by Darcy's law:

$\mathbf{v} = -\dfrac{\mathbf{K}}{\mu}\,\nabla p$   (1)

where $\mathbf{v}$ is the velocity, $\mathbf{K}$ is the preform permeability tensor, μ is the fluid viscosity and p is the pressure. The fluid flow problem is defined in a volume Ω:

$\Omega = \Omega_f(t) \cup \Omega_e(t)$   (2)

where the fluid at time t occupies the volume $\Omega_f(t)$ and $\Omega_e(t)$ defines, at that time, the empty part of the mold. Assuming constant permeability and viscosity, an orthotropic preform and fluid incompressibility, the variational formulation related to the Darcy flow results in:

$\int_{\Omega_f(t)} \nabla p^{*} \cdot \big(\mathbf{K}\,\nabla p\big)\, d\Omega = 0$   (3)

where p* denotes the usual weighting function. The conditions prescribed on the boundary of $\Omega_f(t)$ are:
- The pressure gradient in the normal direction to the mold walls is zero.
- The pressure or the flow rate is specified at the injection nozzle.
- Zero pressure is applied on the flow front.
The flow kinematics are computed by means of a conforming finite element Galerkin technique applied to the variational formulation extended to the whole domain Ω [García et al.], imposing a null pressure at the nodes not connected to at least one completely filled element. An important concern in the mold filling simulation is the numerical treatment of the moving boundary defined by the flow front of the liquid resin. The domain occupied by the fluid, where the governing equations have to be integrated, changes continuously, so it has to be redefined at each time step of the simulation. The evolution of the fluid domain is obtained by solving the hyperbolic transport equation that governs the update of the fluid presence function I:

$\dfrac{dI}{dt} = \dfrac{\partial I}{\partial t} + \mathbf{v}\cdot\nabla I = 0$   (4)

with I defined by:

$I(\mathbf{x},t) = \begin{cases} 1 & \mathbf{x} \in \Omega_f(t) \\ 0 & \mathbf{x} \notin \Omega_f(t) \end{cases}$   (5)

The numerical resolution of hyperbolic equations has been extensively treated by different authors, owing to the difficulty of obtaining schemes that reproduce the solution accurately both in smooth regions and in the presence of discontinuities; see for instance [Hirsch; LeVeque]. First-order upwind techniques have been used to approximate solutions of conservation laws; these methods exhibit a strong numerical dissipation in the neighbourhood of discontinuities and a low accuracy in the smooth regions of the solution, which in RTM translates into a diffuse definition of the flow front. In order to avoid this problem, second-order finite difference schemes have been proposed. The Total Variation Diminishing (TVD) schemes were introduced by Harten [Harten]. A particular version of these techniques is the flux limiters introduced by Sweby [Sweby], based on the definition of hybrid schemes that use second-order approximations of the solution in the smooth regions and limit the numerical fluxes to first order in the vicinity of the discontinuities, thus avoiding spurious oscillations. Their implementation applied to RTM was presented in [García et al.], and a deeper analysis was carried out in [Sánchez].

The incubation time as a curing-related simulation variable
Another important issue in the numerical simulation is the resin curing stage. In LCM, the goal is to saturate the fiber preform completely with the resin before the curing reaction reaches the threshold value at which the viscosity starts to increase abruptly, also known as the gelling point. The chemical reaction begins when the pre-polymers are mixed, just before the injection into the mold cavity (see Fig. 2).
In many cases, the resin needs to be heated to initiate the cure; nevertheless, the use of inhibitors in the resin formulation makes it possible to control and delay the curing [Ramis et al.; Rouison et al.], and hence an isothermal mold filling is possible. Also, if a multi-component injection system is used, one can change the initial concentration of the resin catalyst in the mixing head with a controlled system (see Fig. 3). The catalyst concentration may then be different for each region of the filled mold, yielding different rates of cure. In any case, it is important to prevent early gelation and avoid non-homogeneous curing. A mold-filling constraint can be defined by enforcing that the resin gelling time must be larger than the mold filling time, that is:

$t_{gel} - t_{fill} > 0$   (6)

Moreover, the resin gelling time can be delayed and controlled by using different inhibitor concentrations in the polymerization agents as the fluid flows; see for instance [Cardona et al.]. In that work, Comas-Cardona et al. used the following correlation between the gelling time and the concentrations of the chemical agents:

$\ln(t_{gel}) = a + b\,C_{hardener} + c\,C_{accelerator} + d\,C_{inhibitor}$   (7)

where a, b, c, d are constants to be determined for each resin and C refers to the concentration of the different chemical agents. On the other hand, the curing reaction proceeds from the moment the pre-polymers are mixed, and its evolution depends mainly on temperature and time. During the mold filling stage, the curing conversion must not reach the gelling value $\alpha_{gel}$, so that a complete filling remains possible. Once the filling is finished, the curing continues and the part can be demolded when it has reached enough strength. For engineering purposes such as process control and optimization, the cure kinetics can be modeled as a function of time through the temperature history [Dunkers et al.]:

$\alpha(t) = \sum_{i=1}^{N} a_i \exp\!\left(-\int_0^t k_i(\tau)\, d\tau\right)$   (8)

where $a_i$ and $k_i$ depend on the resin system and α is the resin curing conversion factor. The time to gel and the curing conversion depend directly on the resin formulation, as described in Eqs. (7) and (8), but this topic is not addressed in the present work. In all cases, the curing reaction depends on the time elapsed since the reactive mixture was prepared just before the injection, that is, on the incubation time. Moreover, if an isothermal filling is considered, thanks to the use of inhibitors, the incubation time is directly related to the resin curing conversion, as illustrated later. Therefore, the incubation time can be used as a curing-related variable for optimization and control purposes. The main advantage of this approach is its generality (it applies to any chemical system), even though the incubation time is not the most appropriate way to describe the real curing evolution. Thus, by solving for the incubation time one can detect improper processes: for example, high gradients in the incubation time imply high gradients in the curing rates and, consequently, local thermomechanical stresses.
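The filling constraint of Eqs. (6)-(7) is easy to script once the resin-specific coefficients are known. In the sketch below the coefficients and concentrations are purely illustrative placeholders, not values identified for any of the resins cited above, and the time units are arbitrary.

```python
import numpy as np

def gel_time(c_hardener, c_accelerator, c_inhibitor, coeffs):
    """Eq. (7): ln(t_gel) = a + b*C_hardener + c*C_accelerator + d*C_inhibitor."""
    a, b, c, d = coeffs
    return float(np.exp(a + b * c_hardener + c * c_accelerator + d * c_inhibitor))

def filling_is_safe(t_fill, c_hardener, c_accelerator, c_inhibitor, coeffs):
    """Mold-filling constraint of Eq. (6): t_gel - t_fill > 0."""
    return gel_time(c_hardener, c_accelerator, c_inhibitor, coeffs) > t_fill

# Hypothetical coefficients, to be identified experimentally for a given resin system
coeffs = (5.0, -0.8, -1.2, 2.5)
print(gel_time(1.5, 0.5, 0.2, coeffs),
      filling_is_safe(300.0, 1.5, 0.5, 0.2, coeffs))
```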
In this work, we solve for the incubation time in different systems, without considering their chemical particularities. The incubation time depends on the path traveled by each fluid particle throughout the mold since its injection and, therefore, is not always easily predicted. In many particular cases, such as multi-component injection systems, areas with different fiber permeabilities, complex geometries involving obstacles, etc., the incubation time of the resin located at the flow front is not simply the injection time. The calculation of the incubation time distribution E throughout the mold is achieved by solving the linear advection equation:

$\dfrac{dE}{dt} = \dfrac{\partial E}{\partial t} + \mathbf{v}\cdot\nabla E = 1$   (9)

The value of the incubation time E is zero at the injection nozzle (or a fictitious time accounting for the presence of inhibitors, as described later, in controlled processes) and varies throughout the filled part of the mold, but it is not defined in the empty one:

$E(\mathbf{x},t)$ is defined for $\mathbf{x} \in \Omega_f(t)$ and not defined for $\mathbf{x} \notin \Omega_f(t)$   (10)

The incubation time simulation depends directly on the previous resolution of the volume fraction, which locates the flow front of the injected resin. The accurate resolution of both coupled advection problems has been implemented (see [Gascón et al.; Sánchez et al.] for details) using a similar strategy for the two equations, based on the definition and implementation of a second-order scheme with flux limiters applied to the integration of a general linear advection equation:

$\dfrac{dJ}{dt} = \dfrac{\partial J}{\partial t} + \mathbf{v}\cdot\nabla J = S$   (11)

where J = I and S = 0 for the volume fraction update, whereas J = EI and S = I, with the initial conditions

$(EI)(\mathbf{x},t) = \begin{cases} E(\mathbf{x},t) & \mathbf{x} \in \Omega_f(t) \\ 0 & \mathbf{x} \notin \Omega_f(t) \end{cases}$   (12)

for the integration of the incubation time. This last strategy, proposed by Chinesta et al. [Chinesta et al.], circumvents the particular difficulty that the incubation time is not defined in the empty part of the mold. In order to illustrate the use of the incubation time as a resin-curing design parameter, we can analyze some RTM simulations obtained in a recent work [Sánchez] for the mold depicted in Fig. 4. In this case, a mold of 1000 mm × 500 mm, with a thickness of 5 mm and a permeability of 10⁻⁷ m² with 50% fiber volume, is considered, together with a resin viscosity of 1 Pa·s and a constant injection flow rate of 5 cm³/s. The circular obstacles induce velocity variations as well as the welding of different flow fronts. The injection gate is located in the lower left corner and the vent near the upper right one. Fig. 5 shows the flow pattern, i.e. the time at which the resin reaches each position, and Fig. 6 represents the incubation time distribution.
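Before discussing the results further, here is a simplified illustration of the advection step described above: a one-dimensional, first-order upwind integration of Eq. (11) for J = I (S = 0) and J = EI (S = I). It is only a toy stand-in for the two-dimensional, finite-element, flux-limited scheme actually used in the work; the geometry, velocity and discretization are invented for the example.

```python
import numpy as np

def fill_and_incubation_1d(nx=200, length=1.0, v=0.5, t_end=1.5, cfl=0.9):
    """1-D first-order upwind integration of dJ/dt + v dJ/dx = S for
    J = I (S = 0, fill factor) and J = E*I (S = I, incubation time x fill factor)."""
    dx = length / nx
    dt = cfl * dx / v                    # CFL condition for stability
    I = np.zeros(nx)                     # fluid presence function
    EI = np.zeros(nx)                    # product E*I (Eq. 12)
    t = 0.0
    while t < t_end:
        I[0], EI[0] = 1.0, 0.0           # gate node: filled, incubation time set to zero
        adv = v * dt / dx
        I_new, EI_new = I.copy(), EI.copy()
        I_new[1:] = I[1:] - adv * (I[1:] - I[:-1])
        EI_new[1:] = EI[1:] - adv * (EI[1:] - EI[:-1]) + dt * I[1:]
        I, EI = I_new, EI_new
        t += dt
    # E is only defined where the cell is (essentially) filled
    E = np.where(I > 0.05, EI / np.maximum(I, 1e-9), np.nan)
    return I, E

I, E = fill_and_incubation_1d()
# Behind the front, E approaches x / v, i.e. the travel time of a fluid particle from the gate.
```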
High values of the time can be noticed not only in the flow front position but also in the region with very small velocities like downstream the mold obstacles. Both results have been calculated with the simulation code FLOWSIM developed in our research group in [START_REF] Garcı ´a | A fixed mesh numerical method for modelling the flow in liquid composites moulding processes using a volume of fluid technique[END_REF][START_REF] Sa ´nchez | Propuesta de un esquema nume ´rico eficiente para el tratamiento de los problemas de transporte en la simulacio ´n del moldeo por transferencia de resina[END_REF][START_REF] Garcı ´a | Mould filling simulation in RTM processes[END_REF]. Fig. 7 depicts the curing conversion predicted with the commercial code CMOLD for the same process conditions. Assuming a curing conversion a gel Z0.2 for the resin system, it is guaranteed the total mould filling before the viscosity buildup. The knowledge of the resin flow front evolution and incubation time requires only a two-dimensional isothermal simulation. The main advantage in considering the isothermal case is not only the lower CPU time required, but also the fact that the number of material parameters involved in the model is reduced with respect to those required in non-isothermal three-dimensional simulations. Several computer simulations, including some which address the thermal and curing problems, have been developed in RTM process in [START_REF] Bruschke | A numerical approach to model nonisothermal viscous flow through fibrous media with free surface[END_REF][START_REF] Lin | Non-isothermal mold filling and curing simulation in thin cavities with preplaced fiber mats[END_REF][START_REF] Trochu | Numerical analysis of the resin transfer moulding process by the finite element method[END_REF][START_REF] Young | Three dimensional nonisothermal mold filling simulations in RTM[END_REF][START_REF] Antonucci | A simulation of the nonisothermal resin transfer molding process[END_REF]. Related works in RTM process optimization Many research studies have been focused on the use of simulations in the RTM process optimization. We can find different optimization models in which the maximum temperature differences have been considered [START_REF] Young | Three dimensional nonisothermal mold filling simulations in RTM[END_REF][START_REF] Rudd | Process modelling and design for resin transfer molding[END_REF]. The use of neural networks and genetic algorithms has been also applied with success in [START_REF] Jiang | Optimum arrangement of gate and vent locations for RTM process design using a mesh distance-based approach[END_REF][START_REF] Jiang | A process performance index and its application to optimization of the RTM process[END_REF][START_REF] Mathur | Use of genetic algorithms to optimize gate and vent locations for the resin transfer molding process[END_REF][START_REF] Spoerre | Integrated product and process design for resin transfer molded parts[END_REF][START_REF] Luo | Optimum tooling design for resin transfer molding with virtual manufacturing and artificial intelligence[END_REF]. With these techniques, iterative stochastic search algorithms are required and then, the computation involved is very important. In other recent work [START_REF] Gokce | Branch and bound search to optimize injection gate locations in liquid composite molding processes[END_REF], the optimization problem is treated by using the branch and bound search method. 
In all these cases, the process optimization objective functions, such as minimum filling time or dry-spot prevention, have to be defined quantitatively. In this work, a new process performance index is presented. The use of the incubation time as a curing-related variable allows the LCM process to be optimized, locating the gates and vents properly in order to reduce the cycle time, avoid dry spots and obtain a spatially homogeneous curing. The effectiveness of the approach is illustrated through a number of examples that involve race tracking, areas with different permeabilities, different filling conditions, etc.

Definition of the resin flow index based on gate distance and incubation time
The success of the filling and curing stages in liquid composite molding (LCM) depends on many variables, such as the locations of gates and vents, temperature distribution, flow rate, injection pressure, etc. Traditionally, the selection of gate and vent locations in mold design has been based on experience and trial-and-error attempts. Recent research studies have been conducted to reduce the cycle time by using computer simulation and optimization. Zhang et al. employed in [Jiang et al.; Mathur et al.; Spoerre et al.; Luo et al.] a process performance index based on the gate distance of the resin located on the flow front at different time steps. A good process should have a short filling time and a vent-oriented flow with a desired resin flow pattern. At a given time step, the distances from the nodes located on the resin flow front to the outlet are associated with the quality of the filling process (Fig. 8). The standard deviation of those distances is used to evaluate the shape of the flow front (the smaller, the better). In addition to the shape of the flow pattern, the total mold filling time is included to guarantee the overall performance of the mold filling process. The following process performance index was proposed:

$Q = T \cdot \dfrac{\sum_{k=1}^{m} q_k}{m}$, with $q_k = \sqrt{\dfrac{\sum_{i_k=1}^{n_k}\big(d_{ik} - \bar{d}_k\big)^{2}}{n_k - 1}}$   (13)

where:
- Q: overall process performance index (the lower, the better)
- T: total mold filling time
- q_k: intermediate flow front index
- m: number of flow fronts considered
- n_k: number of nodes defining the kth flow front
- d_ik: distance from node i located on the kth flow front to the outlet
- $\bar{d}_k$: average distance from the nodes defining the kth flow front to the outlet

The filling flow pattern is divided into m time steps, and an intermediate flow front index q_k (a standard deviation) can be calculated for every flow front considered, associated with time step k. The overall process performance index Q is then calculated from the intermediate values q_k as illustrated in Eq. (13). This index has been extended in this work in order to include the incubation time as a variable related to an optimal curing performance. The new index is defined as follows:
$QD = T \cdot D \cdot \dfrac{\sum_{k=1}^{m} q_k}{m}$, with $D = \dfrac{\sum_{i=1}^{n_f} D_i}{n_f}$ and $D_i = \max_{j=1,\dots,nn_i}(E_j) - \min_{j=1,\dots,nn_i}(E_j)$   (14)

where:
- QD: overall process performance index (the lower, the better)
- T: total mold filling time
- q_k: intermediate flow front index
- m: number of flow fronts considered
- D: resin incubation time dispersion index, defined over all filled nodes at the end of mold filling
- D_i: dispersion index related to node i, which is assumed filled
- E_j: incubation time at node j
- nn_i: number of nodes connected with node i
- n_f: number of filled nodes

Thus, the overall process performance index QD includes the consideration of the flow front shapes at the different time steps through q_k, the total mold filling time T, and the differences in the incubation time values of all the nodes impregnated by the resin through the resin incubation time dispersion index D. For every filled node i, its dispersion index D_i is calculated as the maximum difference between the incubation time values E_j of all the nodes j connected to node i (see Fig. 9). Note that the dispersion index D_i accounts for the local differences in the incubation time and, therefore, in the curing conversion. A spatially homogeneous curing of the whole resin-filled part of the mold is thus conditioned by a small value of the resin incubation time dispersion index D. According to the previous definition, a good RTM process should have a desirable flow front shape, a short filling time and a small resin incubation time dispersion index.

Numerical simulations
In order to validate the resin flow index just presented, different numerical simulation experiments have been carried out to check how the index represents the behavior of the RTM process. In all cases, the injection process is defined in a mold of 140 mm × 100 mm, with a thickness of 6.4 mm, a permeability of 2.714 × 10⁻¹⁰ m² with 50% fiber volume, a resin viscosity of 0.1 Pa·s, and a constant flow rate of 0.5 cm³/s.

Case with induced variations in the incubation time
Case 1 refers to a simple injection example with one inlet and one vent in a mold with uniform permeability (see Fig. 10). In case 1a, the resin is injected with a constant inhibitor concentration during the filling process, with the injection system represented in Fig. 2. Fig. 11 shows the flow pattern, the evolution of the intermediate flow front index q_k, the incubation time distribution E and the dispersion index D_i profile throughout the mold. We observe that the regions where the resin velocity is smaller (the upper and lower regions of the mold) correspond to the highest values of the dispersion index D_i, due to the differences in the incubation time of the resin located nearby. In that case, the process performance index defined by Eq. (13) takes the value Q = 1.09, and the new process performance index defined by Eq. (14) gives QD = 18.85. In the situation depicted in Fig. 10, case 1b, three control points located in the mold are used in the simulations to change the incubation time of the resin located at the injection nozzle as the fluid fills the cavity. One could expect that an appropriate change in the inhibitor concentration (according to the chemical kinetics) would be equivalent to adjusting the incubation time of the fluid entering the mold. Thus, the gate incubation time is set to the incubation time existing at the control point when the fluid flow front reaches this point. A similar sensor-based control technique was implemented with satisfactory results by Comas-Cardona et al.
in [START_REF] Cardona | Etude et contro ˆle de la polime ´risation homoge `ne dans les proce ´de ´s LCM[END_REF]. It can be observed in Fig. 12 that the index defined in Eq. ( 14) QDZ5.00 is better in this case as was expected, and Q does not account for this improvement. The uniform color map of the dispersion index indicates an expected homogeneous curing. Fig. 13 shows the evolution of the resin incubation time dispersion index D for both cases. The three peaks in the controlled case correspond to the instants that the incubation time at the injection nozzle is changed. Case of non-homogeneous preform permeability Non-uniform preform permeability induces important variations in the flow pattern during the filling process. Three cases with different permeability conditions (10, 50, 0.1 and 0.02 times the uniform permeability previously used) as shown in Fig. 14 have been analyzed. The associated results are shown in Fig. 15. It can be noticed that without changes in the incubation time (that is equivalent to use the same resin formulation) at the gate during filling, both index Q and QD agree that case 2a represents the best filling conditions, and case 3b the worst one according with the results obtained for similar conditions in [START_REF] Jiang | A process performance index and its application to optimization of the RTM process[END_REF]. Application in multi-component injection systems Case 4 depicted in Fig. 16 analyzes the effect of a high permeability area in the lower part of the mold, which induces a disturbance in the flow pattern and produces a dry spot in the manufactured part due to a wrong location of the vent as depicted in Fig. 17. To avoid that problem other injection gate can be considered as it is depicted in Fig. 18 which opens only when a sensor previously located detects this anomalous behavior. Many research studies have been done showing the application of control strategies in RTM manufacturing [START_REF] Bickerton | Design and application of actively controlled injection schemes for resin-transfer molding[END_REF][START_REF] Lawrence | An approach to couple mold design and online control to manufacture complex composite parts by resin transfer molding[END_REF]. In this example, the location of this auxiliary gate has been obtained from [START_REF] Gokce | Simultaneous gate and vent location optimization in liquid composite molding processes[END_REF]. The lower race tracking originates a dry spot in the upper right mold corner, which could be corrected with the auxiliary gate. That gate injects resin when a control point located appropriately detects the unbalanced flow front (see Fig. 19) avoiding the mold-filling defect. However, in the case depicted in Fig. 19, a problem arises during the welding of the resin flow fronts with different incubation times. Fig. 20 shows that the resin incubation time and the dispersion index D i vary strongly throughout the mold with the use of multiple injection gates where the inhibitor concentration is taken as constant (that is the incubation time is the same on all the gates during the mold filling). We observe areas with a high gradient in the incubation time and in the dispersion index. In that case, the process performance index takes a value of QDZ36.67. Fig. 21 depicts a better solution characterized by QDZ19.26, obtained by using different incubation times at the gates during the filling using the same strategy as before. 
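For readers who want to experiment with the index, here is a small, self-contained sketch of how Q's building block q_k and the extended index QD of Eqs. (13)-(14) could be evaluated from simulation output. The node numbering, the connectivity convention (the node itself is included in its neighbourhood) and the toy values are assumptions made for the example, not data from the cases above.

```python
import numpy as np

def flow_front_index(front_distances):
    """q_k: standard deviation of the distances from the k-th front nodes to the vent (Eq. 13)."""
    return float(np.asarray(front_distances, dtype=float).std(ddof=1))

def qd_index(fill_time, fronts, incubation, neighbours):
    """Eq. (14): QD = T * D * mean_k(q_k), with D the mean of the per-node incubation
    time ranges. `fronts` is a list of per-front distance arrays, `incubation` maps
    node -> E, `neighbours` maps node -> iterable of connected node ids."""
    q_mean = float(np.mean([flow_front_index(d) for d in fronts]))
    ranges = []
    for i, conn in neighbours.items():
        e = [incubation[j] for j in set(conn) | {i}]   # node i included by convention here
        ranges.append(max(e) - min(e))
    return fill_time * float(np.mean(ranges)) * q_mean

# Tiny synthetic example: three flow fronts and four filled nodes
fronts = [np.array([0.9, 1.0, 1.1]), np.array([0.5, 0.6, 0.7]), np.array([0.1, 0.2, 0.2])]
incubation = {0: 0.0, 1: 10.0, 2: 25.0, 3: 60.0}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(qd_index(fill_time=120.0, fronts=fronts, incubation=incubation, neighbours=neighbours))
```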
We can notice that the incubation time distribution varies smoothly throughout the mold, therefore predicting lower variations in the curing conversion rates.

Conclusions
The index proposed in this paper allows both resin flow and curing issues to be considered in process design optimization. Another advantage of using this index is that the computation involved is very efficient, since only a two-dimensional calculation is required instead of more complex models based on the simulation of coupled flow-thermal problems involving 3D or 2.5D computations. This index appears to be very useful in LCM process design optimization, where costly iterative stochastic search algorithms such as genetic algorithms are used, and it could easily be adapted to a specific resin system using its rheokinetic behaviour; this last topic has not been addressed in the present work. Moreover, the comparison between numerical predictions and experiments is a work in progress.

Fig. 1. Illustration of the stages in resin transfer molding processes.
Fig. 2. Schematic resin injection system with constant resin formulation.
Fig. 3. Schematic resin injection system with PC-based control providing automatic mixing of resin components at the injection gate.
Fig. 4. Finite element mold discretization and location of the injection gate and vent.
Fig. 6. Incubation time distribution at the end of the mold filling.
Fig. 8. Desired flow front shape (left) and distances between flow front nodes and vent (right).
Fig. 9. Dispersion index D_i definition.
Fig. 10. Case 1. Uniform permeability and induced variations in the incubation time at the injection nozzle.
Fig. 11. Simulation results for case 1a.
Fig. 13. Time evolution of the resin incubation time dispersion index D for case 1a (uncontrolled) and case 1b (controlled).
Fig. 15. Results for different non-homogeneous preforms.
Fig. 17. Flow pattern without auxiliary gate showing the incomplete mold filling.
Fig. 18. Case 4. Race tracking and auxiliary gate.
Fig. 19. Flow pattern modified with the inclusion of an auxiliary gate.
Fig. 20. Incubation time (left) and dispersion index D_i at the nodes (right) without inhibitor concentration variation at the injection gates.

Acknowledgements
This research work is supported by a grant from the Spanish Ministerio de Ciencia y Tecnología (MCYT), project DPI2004-03152, and by project PRUCH-03/08 of the University Cardenal Herrera-CEU.
30,174
[ "15516" ]
[ "326095", "300772", "59131", "300772", "485957", "485957", "485957" ]
01004901
en
[ "spi" ]
2024/03/04 23:41:46
2011
https://hal.science/hal-01004901/file/LFJA.pdf
E Lacoste S Fréour F Jacquemin A multi-scale study of residual stresses created during the cure process of a composite tooling material Keywords: Composite materials, cure process, residual stresses, scale transition model . The Mori-Tanaka (MT) and Eshelby-Kröner self-consistent (EKSC) models are used in order to achieve a two-steps scale transition procedure, relating the microscopic properties of the material to their macroscopic counterparts. This procedure enables estimating the multi-scale mechanical states experienced by the material, i.e. the local (microscopic) stresses due to thermal and chemical shrinkage of the resin, along a typical, macroscopic stress-free, cure process. The influence of the chosen scale transition model on both the calculated effective properties of the material and its local stress states, is investigated. These results are a first step for investigating the service life fatigue of the material, as well as its failure behaviour. Introduction Context. In the last decade, composite materials based on carbon (or glass) fibers and thermoset resins have been more and more involved in the design of mechanical parts. However, the engineering of composite parts yields several scientific challenges. One of them is the computation of the microscopic internal stresses, due to the properties mismatch between the fibers and the matrix. As curing is considered, the strong thermo-chemical shrinkage of the matrix is counteracted by the fibers, and yield self-compensated stresses in the constituents. These stresses can significantly affect the macroscopic stress-strain, fatigue and failure behaviour, and can result in fiber-matrix debonding or in increased fibers waviness (microbuckling) [2,3]. Review of predictive models. Various experimental approaches were developed for measuring these local stresses and strains; an extensive summary of these methods is given by Parlevliet et al. [4]. Meanwhile, scale transition models proved to be a relevant predictive approach. Tsaï-Hahn's rules of mixtures [START_REF] Tsaï | Introduction to composite materials[END_REF] first gave an easily-understandable estimate of the properties of unidirectional composite plies, taking into account the anisotropy, but failed to predict the local stress states. Another class of models, the so-called "self-consistent" models, became trendier in the last decade. Among them, Mori-Tanaka (MT) [START_REF] Mori | [END_REF] and Eshelby-Kröner Self-Consistent (EKSC) estimates [7,[START_REF] Kocks | Texture and anisotropy[END_REF] are the most used. These models can take into account the morphologies of the heterogeneities (inclusions) constituting the material [START_REF] Fréour | [END_REF]10]. They can also systematically predict the distribution of the mechanical states in the very constituents of the materials. It was recently taken advantage of this capability in order to realistically model realistically the various features of the multi-scale hygro-mechanical coupled phenomena occurring in composite structures submitted to humid air [11,12]. Purpose of the study. The study is focused on a composite tooling material with a non-trivial microstructure, which cannot properly be accounted for, by classical rules of mixture. The EKSC and MT models will be used to carry out a multi-scale analysis of the material behaviour during the cure process. 
The evolutions of the properties of the resin during the cure will be described, and used to predict the effective mechanical behaviour of the material and the development of residual stresses. The influence of the chosen model on the results will then be investigated and discussed. The scale transition procedure Description of the models. Scale transition models are based upon a representation of the material at two distinct scales: a) the local scale (denoted by the superscript i ), of the size of the constituents and b) the macroscopic scale ( I ), where the behaviour of the effective medium is observed. The corresponding mechanical states are related through Hill's volume weighted average relations [13]: i I ε ε ε ε = = = = ε ε ε ε and i I σ σ σ σ = = = = σ σ σ σ . (1) The arithmetic mean was used according to [14], which showed that it provides more reliable estimates than the geometric average for modeling organic matrix composites. In a previous work, Eshelby studied the behavior of an ellipsoidal inclusion embedded in a homogeneous ambient medium a loaded at the infinite [15]. His work led to the following relation, expressed by Hill [16]: ( ( ( ( ) ) ) ) a i a i : * ε ε ε ε - -- - ε ε ε ε - -- - = = = = σ σ σ σ - -- - σ σ σ σ L , ( 2 ) where L* represents Hill's constraint tensor, which depends upon the stiffness of the medium, and the morphology (orientation and shape factors) of the inclusion (see also [START_REF] Kocks | Texture and anisotropy[END_REF]17,[START_REF] Mura | Micromechanics of Defects in Solids[END_REF]). Both relations, combined with the behaviour law, are used for estimating the effective (macroscopic) properties of the material. They also provide an estimate of the mean local mechanical states, with the following approximations indeed: the inter-particles interactions, the interphase and interfacial effects, are not represented. That's why the choice of the embedding medium has a deep influence on the predicted local states. This choice distinguishes the MT from the EKSC model. For the EKSC scheme, the embedding medium is supposed to have the properties of the effective material at the macroscopic scale; whereas for the MT model [START_REF] Mori | [END_REF], one of the constituents (usually, the matrix phase or the dominant phase) is chosen as the ambient medium. Application to the Hextool. The present work is focused on a composite material developed by Hexcel Composites for manufacturing complex parts: the Hextool [19]. It is made of unidirectional (UD) rectangular-shaped (60x8x0.15 mm) reinforcing strips, randomly disposed in the layout (Fig. 1b). The UD strips are constituted by AS4 fibers and M61© system (high-temperature toughened bismaleimid resin) [19]. The overall resin volume ratio is about 47 %, distributed inside the strips and between the strips (5 % of the overall resin volume). Thereby, the structure of the material involves three different scales: microscopic, mesoscopic and macroscopic. A two-steps scale transition procedure (Fig. 1a) is performed: first, the properties of resin and fibers (microscopic scale) are homogenized to obtain those of the UD strip (mesoscopic scale). Then, another homogenization provides the macroscopic properties, from those of the resin and the UD strips (mesoscopic scale) previously estimated. For the first step, three options will be considered: using MT model with either the resin (MTres) or the fibers (MTfib) as the embedding medium, and using the EKSC model (effective material as the embedding medium). 
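Before turning to the second homogenization step, the following minimal sketch may help fix ideas about the Mori-Tanaka averaging itself. It is deliberately restricted to isotropic phases and spherical inclusions, whereas the procedure above works with the full anisotropic tensor formalism (transversely isotropic AS4 fibers, strip- or disc-shaped morphologies through Hill's constraint tensor L*); the moduli and volume fraction used below are illustrative placeholders, not the AS4/M61 data.

```python
# Minimal isotropic Mori-Tanaka sketch for a two-phase composite with
# spherical inclusions. The scalar Eshelby factors below play the role of
# Hill's constraint tensor L* in the full anisotropic formalism of the paper.

def kg_from_e_nu(e, nu):
    """Bulk and shear moduli from Young's modulus and Poisson's ratio."""
    return e / (3.0 * (1.0 - 2.0 * nu)), e / (2.0 * (1.0 + nu))

def mori_tanaka_spheres(e_m, nu_m, e_i, nu_i, f_i):
    """Effective (K, G) of a matrix 'm' containing a volume fraction f_i of
    spherical inclusions 'i', using the Mori-Tanaka estimate."""
    k_m, g_m = kg_from_e_nu(e_m, nu_m)
    k_i, g_i = kg_from_e_nu(e_i, nu_i)
    # Dilute (single-inclusion Eshelby) strain-concentration factors
    a_k = (3.0 * k_m + 4.0 * g_m) / (3.0 * k_i + 4.0 * g_m)
    beta = 6.0 * (k_m + 2.0 * g_m) / (5.0 * (3.0 * k_m + 4.0 * g_m))
    a_g = 1.0 / (1.0 + beta * (g_i - g_m) / g_m)
    # Mori-Tanaka correction: inclusions "see" the average matrix strain
    a_k_mt = a_k / ((1.0 - f_i) + f_i * a_k)
    a_g_mt = a_g / ((1.0 - f_i) + f_i * a_g)
    k_eff = k_m + f_i * (k_i - k_m) * a_k_mt
    g_eff = g_m + f_i * (g_i - g_m) * a_g_mt
    return k_eff, g_eff

if __name__ == "__main__":
    # Placeholder values: stiff spherical inclusions in a resin-like matrix
    k_eff, g_eff = mori_tanaka_spheres(e_m=4.0e9, nu_m=0.38,
                                       e_i=230.0e9, nu_i=0.25, f_i=0.53)
    e_eff = 9.0 * k_eff * g_eff / (3.0 * k_eff + g_eff)
    print(f"K_eff = {k_eff/1e9:.2f} GPa, G_eff = {g_eff/1e9:.2f} GPa, "
          f"E_eff = {e_eff/1e9:.2f} GPa")
```

The same concentration-factor logic carries over to the tensor case used in the paper, with the choice of the embedding medium (resin, fibers, or effective material) distinguishing the MTres, MTfib and EKSC variants.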
For the second step, only EKSC model was considered, as one cannot distinguish any dominant phase. Moreover, the reinforcing strips were assumed to be disc-shaped for avoiding some limits of EKSC model [1] and optimizing computation time. Micro-scale Meso-scale 1 st step : UD strips 2 nd step: Hextool Macro-scale Investigation of the mechanical properties and cure kinetics of the resin During the cure process, the local and macroscopic behaviours can be described by the following differential chimio-thermo-elastic law, relating the stress σ to the strain ε: ( ( ( ( ) ) ) ) χ T : k k k k k L η η η η - -- - α α α α - -- - ε ε ε ε = = = = σ σ σ σ with k = {i, I}, (3) where the stiffness is represented by the 4 th -order tensor L and the Coefficients of Thermal Expansion (CTE) and Chemical Expansion (CCE) by the 2 nd -order tensors α and η, respectively. The temperature and conversion degree are denoted by T and χ, and the time derivative by the upperscript . A differential form is required, as the resin undergoes important evolutions of its properties during the cure, that strongly affect the development of residual stresses. In order to precisely describe those evolutions, the mechanical properties of the M61 resin were investigated. Cure kinetics. The autocatalytic polymerisation reaction of the M61 resin can be parameterized by the conversion degree χ, defined as the ratio of the enthalpy released at a time, per the overall released enthalpy. This conversion degree is a state variable, and can be used for identifying the mechanical properties of the resin. The cure kinetics of a M21 organic resin, obtained with Differential Scanning Calorimetry (DSC) by Msallem et al. [START_REF] Msallem | [END_REF] was used in the present study. A 180°C cure cycle was considered, as described in (Fig. 2a). The gradients of temperature and conversion degree, which can occur in thick parts due to the exothermic reaction, were neglected. Elastic modulus. The extreme evolution of the viscoelastic modulus of the resin reflects the deep changes of its physical state. At the beginning of the cure, the resin is in a liquid state and has a viscous behaviour. At the gelation point, the resin turns into a solid state, and the relaxation times become important enough so that residual stresses start to develop. The stiffness then increases with the conversion. Although several models were developed to model this hardening of the resin, most authors assume a linear dependence of the elastic modulus upon the conversion degree. However, the percolation theory [21] gives a more suitable expression, as discussed in [START_REF] Msallem | [END_REF]: ( ( ( ( ) ) ) ) ( ( ( ( ) ) ) ) 8/3 2 gel 2 gel 2 cured χ 1 χ χ . T E χ T, E                                 - -- - - -- - = = = = , (4) χ gel stands for the conversion degree at the gelation point, whereas E cured (T) the elastic modulus of the fully cured resin. Fig. 2b above displays the resulting evolution of the elastic modulus versus the temperature and reticulation degree. The evolution of the stiffness with temperature was determined, by co-workers (see acknowledgements), through Dynamic Mechanical Analysis (DMA). DMA provides the instantaneous modulus, which is very close to the equilibrium stiffness if the resin is in a vitreous state. This hypothesis might be partially wrong during the hardening phase, as the vitreous transition temperature is close to the cure temperature. The (Eq. 
4) used here neglects the relaxation phenomena, and may thus yield a slight overestimation of local stresses. Thermal expansion. The CTE of the resin also experiences severe evolutions during the curing process: it increases with the temperature and decreases with the conversion degree. However, the dependency on conversion degree should not influence the results here, as the resin is either liquid or fully cured during the heating and cooling stages of the cure cycle. The evolutions of the CTE during the cure process were described by (Eq. 5): (T) α χ) (1 (T) α χ χ) α(T, raw cured × × × × - -- - + + + + × × × × = = = = with             × × × × = = = = × × × × + + + + × × × × + + + + = = = = (T) α 4 (T) α T 0.0003 T 0.1191 35.465 (T) α cured raw 2 cured . (5) Cure shrinkage. The polymerisation reaction corresponds to the formation of covalent bonds between the macromolecules and a constriction of the amorphous network. This induces important bulk shrinkage of the resin, in the order of 3 % up to 9 %, which is comparable to the thermal shrinkage. In this study, we use a value of 5.7 % for the chemical volume shrinkage [START_REF] Msallem | [END_REF], which corresponds to a linear Coefficient of Chemical Expansion η, equal to -1.9 %. Results of the scale transition procedure Computation of effective properties. The scale transition procedure presented above is used to compute the effective properties of the material during the cure process, summed up in (Fig. 3). The gelation point is denoted by a marked gap of some properties. The choice of a model or another mainly affects the out-of-plane properties, but also the in-plane stiffness; the EKSC model results in a more rigid material than the MTres model, but less rigid than the MTfib model. The few available experimental results [19] are consistent with the estimated effective properties, even though discrepancies occur for the in-plane properties (weaker stiffness and stronger coefficients of expansion). The discrepancies are attributed to out-of-plane waviness of the reinforcing strips, and also to the uncertainties on the fiber content. Computation of local residual stresses. The scale transition procedure was used in order to incrementally compute the residual stresses along the cure process. The macroscopic stresses were supposed null, although other boundary conditions (fixed in-plane displacement, interactions with a metallic mold…) can be considered in the same way. Whatever, extensive tests showed that local stress created by those "external stresses" mostly vanish after removal from the mould. 4) above shows the development of residual stresses at the macro, meso and micro levels, dropped in the coordinate system of each reinforcing strips (x, y and z respectively stand for the axial, transverse and out-of-plane directions). One can observe three well-differentiated periods, corresponding to the pre-gelation phase, the hardening, and the thermal cooling. The largest part of residual stresses is created by thermal shrinkage. The role of chemical shrinkage is also significant, but may be overestimated, due to relaxation phenomena. Residual transverse stresses evolutions, predicted in the present study, can be qualitatively compared to those obtained by White and Kim [22] on a [0°/90°] s carbon-epoxy laminate subjected to manufacturing process. This comparison is limited by the Hextool's specific microstructure and resin (BMI instead of epoxy). The choice of the model affects the residual stresses at the meso-level, only. 
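As a rough illustration of how the resin inputs of the incremental law (Eq. 3) evolve along the cycle, the sketch below encodes the percolation-type hardening law as I read Eq. (4) and the CTE blend as I read Eq. (5), both of which are garbled in the extracted text. The gel conversion degree and the fully cured modulus E_cured(T), which was measured by DMA and is not tabulated here, are replaced by illustrative placeholder values.

```python
# Hedged sketch of the resin property evolutions entering Eq. (3):
# percolation-type hardening (Eq. 4) and CTE blending (Eq. 5).
# chi_gel and E_cured(T) are illustrative placeholders, not measured data.

def e_resin(T, chi, chi_gel=0.5, e_cured_of_T=lambda T: 4.5e9):
    """Elastic modulus of the resin vs. temperature and conversion degree.
    Below gelation the resin is liquid and carries essentially no stress."""
    if chi <= chi_gel:
        return 0.0
    return e_cured_of_T(T) * ((chi**2 - chi_gel**2) /
                              (1.0 - chi_gel**2)) ** (8.0 / 3.0)

def cte_resin(T, chi):
    """Linear CTE of the resin (units as in Eq. 5, presumably 1e-6/K),
    blended between the raw (liquid) and fully cured states."""
    alpha_cured = 35.465 + 0.1191 * T + 0.0003 * T**2
    alpha_raw = 4.0 * alpha_cured
    return chi * alpha_cured + (1.0 - chi) * alpha_raw

if __name__ == "__main__":
    for chi in (0.4, 0.6, 0.8, 1.0):
        print(f"chi={chi:.1f}: E={e_resin(180.0, chi)/1e9:.3f} GPa, "
              f"alpha={cte_resin(180.0, chi):.1f} e-6/K")
```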
Thus, MT and KESC estimates provide identical results for the UD strips and the extra-strips resin. At the microscopic scale (intra-strips resin and fibers), the MTfib model results in the strongest stresses and the MTres model in the weakest. This gap is particularly sensitive along y and z axes. In the x-direction, the rigid elements undergo compression stresses which may yield micro-and meso-bucklings phenomena. The wavy aspect of the Hextool ply seems to corroborate this statement. On the contrary, the resin undergoes important traction stresses in every direction, which could be the source of delaminations. A Tsaï-Wu quadratic failure criterion [START_REF] Tsaï | Introduction to composite materials[END_REF] enabled quantifying this risk, using data taken in the literature for a high-strength epoxy resin [23]. The values of the "failure factor" (inverse of the standard resistance R) are widely weaker than 100 %, which indicates that the residual stresses should not damage the material after the cure process. Conclusions and perspectives Two scale transition models were used for describing the multi-scale mechanical behaviour of a meso-structured composite material, through a two-steps procedure. This method was applied to the prediction of effective properties and local stress states in the composite, during the cure process. The results show significant residual stresses at the micro-level, but also the strong influence of the chosen scale transition model: the results given by the EKSC model, widely used for computing local stresses, can be enclosed by the two versions of the MT model. The thermal stresses were found predominant over those created by the chemical shrinkage, which may moreover be partially or totally relaxed. According to the results of a Tsaï-Wu failure criterion, the residual stresses in the resin should not induce material damage, but may play a significant role on failure under service loads. The method could also be applied for predicting the material service-life duration (fatigue under repeated thermal cycles, in particular). In further works, those estimates shall be compared to full-field approaches, which provide more realistic estimates of the local stress fields. Figure 1 : 1 (a) Principle of the two-steps scale transition procedure (b) Aspect of the Hextool Figure 2 : 2 (a) Cure cycle and conversion degree (b) Young's modulus of the M61 resin Figure 3 : 3 Evolutions of the main effective properties of the Hextool during the cure process Figure 4 : 4 Figure 4: Residual cure stresses in the material Acknowledgements The authors wish to acknowledge C. Dauphin and M. Bonnafoux (from Hexcel Composites France) for the valuable information provided on the Hextool. We also wish to thank K. Szymanska, S. Terekhina and M. Salvia, from the Laboratoire de Tribologie et de Dynamique des Systèmes (LTDS) of Lyon (France), who provided the results of DMA performed on the M61 resin system.
16,998
[ "957714", "850332" ]
[ "10921", "10921", "10921" ]
01471396
en
[ "spi" ]
2024/03/04 23:41:46
2014
https://hal.science/hal-01471396/file/doc00021964.pdf
Florian Kaltenberger Auguste Byiringiro George Arvanitakis Riadh Ghaddab Dominique Nussbaum Raymond Knopp Marion Bernineau Yann Cocheril Henri Philippe † Ifsttar Broadband Wireless Channel Measurements for High Speed Trains Keywords: MIMO, Carrier Aggregation, Channel Sounding, High-speed train We describe a channel sounding measurement campaign for cellular broadband wireless communications with high speed trains that was carried out in the context of the project CORRIDOR. The campaign combines MIMO and carrier aggregation to achieve very high throughputs. We compare two different scenarios, the first one reflects a cellular deployment, where the base station is about 1km away from the railway line. The second scenario corresponds to a railway deployed network, where the base station is located directly next the the railway line. We present the general parameters of the measurement campaign and some preliminary results of Power Delay Profiles and Doppler Spectra and their evolution over time. I. INTRODUCTION Broadband wireless communications has become an ubiquitous commodity. However, there are still certain scenarios where this commodity is not available or only available in poor quality. This is certainly true for high speed trains traveling at 300km/h or more. While the latest broadband communication standard, LTE, has been designed for datarates of 150Mbps and speeds of up to 500km/h, the practical achievable rates are significantly lower. A recent experiment carried out by Ericsson showed that the maximum achievable datarate was 19Mpbs on a jet plane flying at 700km/h 1 . Two main technologies exist to increase datarates: using multiple antennas to form a multiple-input multipleoutput (MIMO) system, and using more spectrum by means of carrier aggregation (CA). While MIMO has been already included in the first versions of the LTE standard (Rel. 8), CA has only been introduced with LTE-Advanced (Rel.10). To design efficient algorithms that can exploit these two technologies in high-speed conditions it is of utmost importance to have a good understanding of the channel conditions. While some measurements exist for 1 http://www.ericsson.com/news/121101-ericsson-tests-lte-inextreme-conditions 244159017 c 5MHz 10MHz 20MHz f 1 = 771.5MHz f 2 = 2.590GHz f 3 = 2.605GHz Fig. 1. The sounding signal is composed of 3 component carriers, each of which uses 4 transmit antennas SISO channels [START_REF] Min | Analysis and modeling for trainground wireless wideband channel of lte on high-speed railway[END_REF], [START_REF] Min | Analysis and modeling of the lte broadband channel for train-ground communications on highspeed railway[END_REF], there are no reports of MIMO measurements in high-speed trains. There are however a series of MIMO measurements (using a switched array) available for vehicular communications at speeds of up to 130km/h [START_REF] Paier | Car-to-car radio channel measurements at 5 ghz: Pathloss, power-delay profile, and delaydoppler spectrum[END_REF]. To the best of the author's knowledge the measurements presented in this paper are the first measurements that combine MIMO with carrier aggregation at very high speeds of up to 300km/h. Moreover, our MIMO measurement system does not use a switched array, but records channels in parallel. The rest of the paper is organized as follows. We first present the measurement equipment and methodology in Section II, followed by a description of the measurement scenarios in Section III. 
We present the post-processing in Section IV and the results in Section V. Finally we give conclusions in Section VI. II. MEASUREMENT EQUIPMENT AND METHODOLOGY A. Sounding Signal The sounding signal was designed based on constraints given by the hardware (number of antennas) and the obtained licenses for spectrum user (number of carriers). The final design uses 3 carriers as depicted in Figure 1, each of which uses four transmit antennas. Each carrier is using an OFDM signal, whose parameters are similar to those of the LTE standard. synchronization sequence (PSS) and the rest of the signal is filled with OFDM modulated random QPSK symbols. In order to minimize inter carrier interference (ICI) in high mobility scenarios, we only use ever second subcarrier. To obtain individual channel estimates from the different transmit antennas, we use an orthogonal pilot pattern as depicted in Figure 2. A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A0 A2 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 A1 A3 OFDM symbols OFDM subcarriers B. Measurement Equipment The basis for both transmitter and receiver of the channel sounder is the Eurecom ExpressMIMO2 software defined radio card (see Figure 3), which are part of the OpenAirInterface platform2 . The card features four independent RF chains that allow to receive and transmit on carrier frequencies from 300 MHz to 3.8 GHz. The digital signals are transfered to and from the PCI in realtime via a PCI Express interface. The sampling rate of the card can be chosen from n • 7.68 Msps, n = 1, 2, 4, corresponding to a channelization of 5, 10, and 20 MHz. However, the total throughput of one card may not exceed the equivalent of one 20MHz channel due to the current throughput limitation on the PCI Express interface. Thus the following configurations are allowed: 4x5MHz, 2x10MHz, or 1x20MHz. Multiple cards can be synchronized and stacked in a PCI chassis to increase either the bandwidth or the The output of the ExpressMIMO2 cards is limited to approximately 0 dBm, therefore additional power amplifiers have been built for bands around 800MHz (including TV white spaces and E-UTRA band 20) and for bands around 2.6GHz (E-UTRA band 7). The to achieve a total output power 40 dBm at 800MHz and +36 dBm at 2.6 Ghz (per element). As antennas we have used two sectorized, dual polarized HUBER+SUHNER antennas with a 17dBi gain (ref SPA 2500/85/17/0/DS) for the 2.6GHz band and two sectorized, dual polarized Kathrein antennas with a 14.2 dBi gain (ref 800 10734V01) for the 800MHz band (see Figure 5. The receiver is built similarly, but it was decided to use two separate systems for the two bands. The 800Mhz receiver is built from one ExpressMIMO2 card, providing three 5MHz channels for three receive antennas. The 2.6GHz receiver is built from three ExpressMIMO2 cards, providing two 20MHz and two 10MHZ channels in total which are connected to two antenna ports in a similar way as the transmitter. The 2.6GHz receiver additionally uses external low-noise-power amplifiers with a 10dB gain to improve receiver sensitivity. The receiver antennas used are Sencity Rail Antennas from HUBER+SUHNER (see Figure 6. For the 800MHz band we have used two SWA 0859/360/4/0/V3 and one SWA 0859/360/4/0/DFRX304 omnidirectional antennas with 6dBi gain (the latter one also provides an additional antenna port for a global navigation satellite system (GNSS)). 
For the 2.6GHz band we have used two SPA 2400/50/12/10/V5 antennas that provide two ports each, one pointing to the front and one to the back of the train, each with 11dBi gain. However, for the experiments we have only used one port from each antenna that are pointing in the same direction. C. Data acquisition We save the raw IQ data of all antennas in realtime. The data of the 5MHz channel at 771.5 MHz is stored continuously and for the two (10+20MHz) channels at 2.6 GHz we only save 1 second out of 2, due to constraints of the hard disk speed. III. MEASUREMENT SCENARIOS AND DESCRIPTION The measurements were carried on board of the IRIS320 train6 along the railway line "LGV Atlantique" around 70km southwest of Paris. The train passes the area with a speed of approximately 300km/h. The antennas are mounted on the top of the train, approximately half way between the front and the rear. Three scenarios were measured: 1) Scenario 1: The eNB is located 1.5km away from the railway and all the TX antennas are pointing approximately perpendicular to the railway. This scenario corresponds to a cellular operator deployed network. 2) Scenario 2a: The eNB is located right next to the railway line and the half of the TX antennas pointing at one direction of the railway, and the other half are pointing at the opposite direction. This scenario corresponds to a railway operator deployed network. 3) Scenario 2b: Same as Scenario 2a, but this time all the 4 TX antennas are oriented in the same sense. For all scenarios the base station height is approximately 12m. IV. MEASUREMENT POST PROCESSING A. Synchronization Synchronization it is the most important part of the Post processing. In OFDM systems, there exist three different problems related to synchronization: The first one is frame synchronization, which allows the receiver to determine the starting point of the received frame. The second one is the frequency synchronization, which tries to eliminate the carrier frequency offset caused by the mismatch from the radio frequency local oscillators and the Doppler shift. Finally, the last issue is the sampling clock synchronization, which manages to synchronize the sampling frequency between transmitter and receiver, because both of them work with different physical clocks. 1) Initial Timing Synchronization: To define the start of the frame we make a cross correlation between a received data and the (known) synchronization sequence (PSS) which is in the beginning of every frame. We then look for the highest peak within every frame (discarding peaks below a certain threshold) and repeat this process for several (e.g., 100) consecutive frames. We finally take the median value of the offset of the peaks within each frame. Note that this procedure is necessary, since we have no other mean of verifying that the synchronization was achieved. In a real LTE system, after the detection of the peak of the correlator the receiver would attempt to decode the broadcast channel and thus verifying the synchronization. 2) Tracking: Due to the differences in sampling clocks between the transmitter and the receiver, the frame offset might drift over time and thus needs to be tracked and adjusted. This is done by tracking the peak of the impulse response of the estimated channel and adjusting the frame offset such that the peak is at 1/8th of the cyclic prefix. If the peak drifts further away than 5 samples, the frame offset is adjusted. 
This method avoids jitter of the frame offset but means that frame offset jumps a few samples. Another possibility to compensate for the timing drift would be to apply Lanczos resampling, but this method is computational very complex and has not been applied here. B. Channel Estimation After synchronization we apply a standard OFDM receiver, which applies an FFT and removes the cyclic prefix. After this operation the equivalent input-output relation can be written as y i ,l = H i ,l x i ,l + n i ,l , (1) where i denotes OFDM symbol and l the subcarrier, x is the transmitted symbol vector described in Section II, H is the frequency domain MIMO channel matrix (MIMO transfer function) and y is the received symbol vector. Since the transmitted symbols are all QPSK, we can estimate the channel matrix as Ĥi ,l = y i ,l x H i ,l . (2) Note that due to the orthogonal structure of the transmitted pilots in x (cf. Figure 2), Ĥi ,l will be sparse. For the ease of notation and processing we thus define new indices i = 0, . . . , N s -1 and l = 0, . . . , N c -1 that group refer to a block of six subcarriers and two OFDM symbols respectively (these blocks are also highlighted in Figure 2). Thus Ĥi,l does not contain any zero elements, N s = 60, and N c = 50, 100, 200 depending on the bandwith of the carrier. For further reference we also compute the MIMO channel impulse response ĥi,k = FFT l { Ĥi,l }. ( C. Power Delay Profile Estimation We estimate the Power Delay Profile by averaging over all OFDM symbols N s = 60 in a frame and thus introducing a new time variable j which denotes one frame (10ms). P j,k = 1 N s (j+1)Ns-1 i=jNs | ĥi,l | 2 , (4) D. Delay-Doppler Power Spectrum Estimation We estimate the Delay-Doppler Power Spectrum (sometimes also called the scattering function) by taking the inverse Fourier transform of blocks of 100 frames S t,u,k = 1 √ 100N s 100(t+1)Ns-1 i=100tNs ĥi,k e 2πjiu 100Ns , (5) where we have introduced the new time variable t whose resolution depends on the carrier. In the case of the 5MHz carrier (at 800MHz), it is 100 frames (1s) and in the case of the 10+20MHz carrier at 2.6GHz it is 200 frames (2s), since we only store the signal for one out of 2 seconds. This method will give us a resolution in Doppler frequency u of 1 Hz. From S t,u,k we can also compute the marginal Doppler profile at time t by averaging over the delay time k D t,u = 1 N c Nc-1 k=0 |S t,u,k | 2 , (6) V. CHANNEL CHARACTERIZATION RESULTS A. Path Loss We estimate the path loss component as the slope (or gradient) of linear interpolation of the received signal strength in respect to the 10 log (d): As an example we plot the results from trial 2, run 1 in Figure 8. The average estimated path loss component for the 800MHz band is 3.2 and for 2.6GHz is 3.5, which is in line with established path loss models for rural areas. P RX = P TX -α10 log (d) + N (7) B. Delay and Doppler Spectra We first show the results for the 800MHz band. In Figure 9 we show the Delay-Doppler Power Spectrum S t,u,k of trial 1, run 1 for three different blocks. At t = 50 the train is approaching the base station, at t = 90 it is the closest to the base station and at t = 130 it is departing from the base station. It can be seen that there is one dominant component in the spectrum corresponding to the line of sight (LOS), which is moving from approximately f 1 = -625Hz to f 2 = -1040Hz. This effect can be seen even better in Figure 10, where we plot the marginal Doppler Profile D t,u over the whole run. 
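A compact numerical sketch of the post-processing of Eqs. (4)-(6) is given below. Starting from channel impulse responses h[i, k] (which in the campaign come from the per-subcarrier estimates of Eq. (2), after synchronization, via the IFFT of Eq. (3)), it forms the per-frame Power Delay Profile and a Delay-Doppler spectrum together with its marginal Doppler profile. The synthetic single-path channel, symbol rate and array sizes are placeholders, and the sign convention of the Doppler axis may be mirrored with respect to Eq. (5).

```python
# Sketch of Eqs. (4)-(6): per-frame PDP, Delay-Doppler spectrum and marginal
# Doppler profile from estimated impulse responses h[i, k] (symbol i, tap k).
import numpy as np

def power_delay_profile(h, n_sym_per_frame=60):
    """Per-frame Power Delay Profile (Eq. 4): average |h|^2 over the
    N_s = 60 OFDM symbols of each 10 ms frame."""
    n_frames = h.shape[0] // n_sym_per_frame
    h = h[:n_frames * n_sym_per_frame].reshape(n_frames, n_sym_per_frame, -1)
    return np.mean(np.abs(h) ** 2, axis=1)          # shape (frames, delay taps)

def delay_doppler_spectrum(h_block):
    """Delay-Doppler spectrum (Eq. 5): DFT of one block of impulse responses
    over the symbol (time) axis, Doppler bins centred on zero."""
    n = h_block.shape[0]
    return np.fft.fftshift(np.fft.fft(h_block, axis=0), axes=0) / np.sqrt(n)

def marginal_doppler_profile(S):
    """Marginal Doppler profile (Eq. 6): average |S|^2 over the delay axis."""
    return np.mean(np.abs(S) ** 2, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_sym, n_taps = 6000, 50                 # 100 frames of 60 symbols, 50 taps
    sym_rate, doppler_hz = 6000.0, 300.0     # placeholder symbol rate and shift
    i = np.arange(n_sym)
    h = np.zeros((n_sym, n_taps), dtype=complex)
    h[:, 3] = np.exp(2j * np.pi * doppler_hz * i / sym_rate)   # one moving path
    h += 0.05 * (rng.standard_normal(h.shape) + 1j * rng.standard_normal(h.shape))
    pdp = power_delay_profile(h)
    S = delay_doppler_spectrum(h)
    d_axis = np.fft.fftshift(np.fft.fftfreq(n_sym, d=1.0 / sym_rate))
    print("strongest delay tap:", int(np.argmax(pdp.mean(axis=0))))
    print("estimated Doppler (Hz):",
          d_axis[int(np.argmax(marginal_doppler_profile(S)))])
```

Processing blocks of 100 frames (6000 symbols) in this way yields the 1 Hz Doppler resolution quoted in Section IV-D.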
The difference between these two frequencies correspond more or less exactly to Doppler bandwidth B D = 2f c vmax c ≈ f 2 -f 1 . The common offset f o = f1+f2 2 correspond to the frequency offset in the system, which was (unfortunately) not calibrated beforehand in the first trial. For the 2.6GHz band we show the Delay-Doppler Power Spectrum S t,u,k of trial 1, run 1, carrier A (10MHz) in Figure 11 for three different blocks (approaching, close, departing). Moreover, we plot the temporal evolution of the marginal Doppler profile in Figure 12. Compared to the measurement in the 800MHz band, we can see that the Doppler component at f 1 = 1040Hz persists after the train passes the base station in addition to the second Doppler component appearing at f 2 = -370Hz. Our explanation for this behavior is that there must be a strong reflector directly on the train somewhere between the front of the train and the antenna. Indeed, the IRIS 320 train has two "observation towers" at the front and at the rear with a large glass surface acting as a reflector. In the measurements we can only see the tower in the front, since the antenna is directional and pointing only to the front. This hypothesis can be confirmed by looking at the results of run 2, where the train takes the same route in the other direction. As can be seen in Figure 13, here the two Doppler components are present when the train approaches the base station and vanish when the train has passed the base station. Moreover this phenomenon can be observed on both carriers at 2.6GHz (not shown). It is however interesting that this phenomenon does not exist in the 800MHz band, which can be explained by the fact that this antenna has a higher attenuation along the horizontal plane. It could also be that the reflector material has different reflection coefficients in the 800MHz band. VI. CONCLUSIONS Achieving broadband wireless communication for high speed trains is not trivial and requires a good understanding of the underlying wireless communica- tion channel. We have presented a channel sounding measurement campaign carried out in the context of the project CORRIDOR and presented some initial results. A surprising result was the reflection that comes from the observation tower of the IRIS 320 train and which results in a very large Doppler spread that can have a negative impact on the communication link as it results in high inter-carrier interference. However, this effect should not be present on a regular train. In future work we will analyze the spatial properties of the measured channels and fit suitable channel models to the data. Fig. 2 . 2 Fig. 2. Allocation of resource elements (RE) to antennas. Empty REs are unused to reduce inter carrier interference (ICI). Fig. 3 . 3 Fig. 3. Express MIMO 2 board Fig. 4 . 4 Fig. 4. Schematics of the transmitter. Fig. 5 . 5 Fig. 5. Antenna setup. Left: trial 1, right: trial 2. Fig. 6 . 6 Fig. 6. Antennas on top of the IRIS 320 train. Fig. 7 . 7 Fig. 7. Map showing the different measurement scenarios. The black line is the railway line and the arrows indicate the direction the antennas are pointing. In scenario 1, all 4 antennas point in the same direction. In scenario 2a, two antennas point northeast while two antennas point southwest and in scenario 2b all four antennas point northeast. Fig. 8 . 8 Fig. 8. Path loss component Fig. 10 . 10 Fig. 10. Doppler Profile for the 800MHz band, trial 1, run 1 Fig. 9 . 9 Fig. 9. Doppler Delay Power Spectrum for the 800MHz band, trial 1, run 1 2 1 1 21 Fig. 14. 
Explanation of the Doppler Profile. Top: only one strong Doppler component is present. Bottom: due to a reflector in the front of the train two Doppler components with opposite signs are present. http://www.openairinterface.org http://goo.gl/QDlRg1 http://goo.gl/GavYgG http://goo.gl/xBHSv2 http://en.wikipedia.org/wiki/SNCF TGV Iris 320 ACKNOWLEDGEMENTS This work has been supported by the projects COR-RIDOR (ANR), SOLDER (EU FP7), and SHARING (Celtic+). The authors would also like to thank the CNES for providing the 800MHz power amplifiers as well as Claude Oestges for his advice.
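To complement the description of the initial timing synchronization in Section IV-A, the following minimal sketch reproduces the procedure as described there: cross-correlate the received samples with the known PSS, keep the strongest peak per frame above a threshold, and take the median offset over many (e.g., 100) consecutive frames. The frame length, threshold and toy PSS below are placeholders, not the actual CORRIDOR sounder parameters.

```python
# Minimal sketch of the initial frame synchronization of Section IV-A.
import numpy as np

def initial_frame_offset(rx, pss, frame_len, n_frames=100, threshold=0.0):
    """Median of the per-frame PSS correlation peak positions."""
    offsets = []
    for f in range(n_frames):
        frame = rx[f * frame_len:(f + 1) * frame_len + len(pss) - 1]
        corr = np.abs(np.correlate(frame, pss, mode="valid"))   # matched filter
        peak = int(np.argmax(corr))
        if corr[peak] > threshold:                              # discard weak peaks
            offsets.append(peak)
    if not offsets:
        raise RuntimeError("no synchronization peak found above threshold")
    return int(np.median(offsets))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pss = np.exp(0.5j * np.pi * rng.integers(0, 4, 64))   # toy QPSK-like sequence
    frame_len, true_offset = 7680, 123                    # placeholder frame length
    rx = 0.1 * (rng.standard_normal(100 * frame_len)
                + 1j * rng.standard_normal(100 * frame_len))
    for f in range(100):
        start = f * frame_len + true_offset
        rx[start:start + len(pss)] += pss                 # embed the PSS per frame
    print("estimated frame offset:", initial_frame_offset(rx, pss, frame_len))
```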
18,289
[ "1199119", "874088", "919653" ]
[ "421532", "421532", "421532", "421532", "421532", "421532", "222122", "222122", "198197" ]
01471415
en
[ "math" ]
2024/03/04 23:41:46
2018
https://hal.science/hal-01471415/file/Vuillermot1%202017.pdf
Pierre-A Vuillermot On the time evolution of Bernstein processes associated with a class of parabolic equations In this article dedicated to the memory of Igor D. Chueshov, I …rst summarize in a few words the joint results that we obtained over a period of six years regarding the long-time behavior of solutions to a class of semilinear stochastic parabolic partial di¤erential equations. Then, as the beautiful interplay between partial di¤erential equations and probability theory always was close to Igor's heart, I present some new results concerning the time evolution of certain Markovian Bernstein processes naturally associated with a class of deterministic linear parabolic partial di¤erential equations. Particular instances of such processes are certain conditioned Ornstein-Uhlenbeck processes, generalizations of Bernstein bridges and Bernstein loops, whose laws may evolve in space in a non trivial way. Speci…cally, I examine in detail the time development of the probability of …nding such processes within two-dimensional geometric shapes exhibiting spherical symmetry. I also de…ne a Faedo-Galerkin scheme whose ultimate goal is to allow approximate computations with controlled error terms of the various probability distributions involved. Introduction and outline This article is a tribute to some of the works and achievements of our friend and colleague Igor D. Chueshov, who unfortunately and unexpectedly passed away on April 23rd, 2016. The qualitative analysis of the behavior of solutions to various stochastic partial di¤erential equations, henceforth SPDEs, was one of Igor's strong points. I have therefore deemed it appropriate to brie ‡y summarize here the results that he and I obtained in that area over a period stretching from 1998 to 2004. As far as the presentation of the many other facets of his activities is concerned, I am thus referring the reader to the other contributions in this volume. When Igor and I …rst met in 1994 on the occasion of an international conference on SPDEs in Luminy, we set out to investigate the behavior of solutions to those stochastic parabolic equations which speci…cally occur in population dynamics, population genetics, nerve pulse propagation and related topics, given the fact that there were already a substantial number of works in those areas concerning the deterministic case (see, e.g., [START_REF] Bernfeld | Large-time asymptotic equivalence for a class of non-autonomous semilinear parabolic equations[END_REF], [START_REF] Vuillermot | Global exponential attractors for a class of almostperiodic parabolic equations in R N[END_REF] and the many references therein). But instead of starting up front with partial di¤erential equations driven by some kind of noise, we …rst considered a class of random parabolic initial-boundary value problems mainly for the sake of simpli…cation. Assuming then various statistical and dynamical properties such as those of the central limit theorem and the Ornstein-Uhlenbeck process for the lower-order coe¢cients of the equations, we eventually elucidated the ultimate behavior of the corresponding solution random …elds in [START_REF] Chueshov | Long-time behavior of solutions to a class of quasilinear parabolic equations with random coe¢ cients[END_REF]. In particular, we established the existence of a global attractor, determined its detailed structure and were able to compute the Lyapunov exponents explicitly in some cases. 
We then extended these results to the case of parabolic SPDEs driven by a homogeneous multiplicative white noise de…ned in Stratonovitch's sense in [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case[END_REF], investigated there various stability properties of the non-random global attractor and established the existence of a recurrent motion of sorts among its components. Furthermore, in [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Itô's case[END_REF] we analyzed the same type of equations as in [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case[END_REF] but with the noise de…ned in Itô's sense. In this way we were able to establish the existence and many properties of a random global attractor and excluded in particular the existence of any kind of recurrence phenomena, thereby obtaining radically different results than in [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case[END_REF]. The analysis carried out in [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Itô's case[END_REF] was further deepened in [START_REF] Bergé | On the behavior of solutions to certain parabolic SPDEs driven by Wiener processes[END_REF], where it was shown that the stabilization of the solution random …elds toward the global attractor is entirely controlled by their spatial average, thereby obtaining exchange of stability results particularly relevant to the description of certain migration phenomena in population dynamics. Finally, in [START_REF] Chueshov | Non-random invariant sets for some systems of parabolic stochastic partial di¤ erential equations[END_REF] we proved the existence of invariant sets under the ‡ow generated by certain systems of SPDEs including those of Lotka-Volterra and Landau-Ginzburg. But Igor's interests did not limit themselves to investigations of solutions to SPDEs as he was also genuinely interested in the many possible connections that exist between systems of di¤erential equations on the one hand, and the theory of random dynamical systems and stochastic processes on the other hand (see, e.g., [START_REF] Chueshov | Monotone Random Systems -Theory and Applications[END_REF]). This prompted me to present here some very recent and preliminary results concerning the time evolution of certain Bernstein processes naturally associated with a class of deterministic linear partial di¤erential equations. Accordingly, the remaining part of this article is organized as follows: In Section 2 I recall what a Bernstein process is, and state there a theorem that shows how to associate such a process with the two adjoint parabolic Cauchy problems @ t u(x; t) = 1 2 4 x u(x; t) V (x) u(x; t); (x; t) 2 R d (0; T ] ; u(x; 0) = '(x) = N ' 0 (x); x 2 R d (1) and @ t v(x; t) = 1 2 4 x v(x; t) V (x) v(x; t); (x; t) 2 R d [0; T ) ; v(x; T ) = (x) = N T (x); x 2 R d ; (2) where T > 0 is arbitrary and where 4 x stands for Laplace's operator with respect to the spatial variable. In these equations N > 0 is a normalization factor whose signi…cance I explain below. 
Moreover, V is real-valued while ' 0 and T are positive data which are assumed to be either Gaussian functions of the form ' 0 (x) = exp " jx a 0 j 2 2 0 # ; (3) T (x) = exp " jx a T j 2 2 T # (4) where 0;T > 0 and a 0;T 2 R d are arbitrary vectors with j:j the usual Euclidean norm, or ' 0 (x) = d Y j=1 1 jx j a 0;j j 0 _ 0 ; (5) T (x) = d Y j=1 1 jx j a T;j j T _ 0 : (6) In ( 5)-( 6), x j and a 0;T;j denote the j th component of x and a 0;T , respectively. Furthermore these initial-…nal conditions have localization properties which are more clear-cut than those of ( 3)-( 4) in that they vanish identically outside hypercubes in R d . The cases where ' 0 (x) = 0 (x) (7) with 0 the Dirac measure concentrated at the origin and T given by ( 4) or [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case[END_REF] are also considered. An important observation here is that (3)-( 4) and ( 5)-( 6) are not normalized as standard probability distributions, for the only normalization condition needed below involves ' 0 , T and N in a rather unexpected way which is inherently tied up with the construction of Bernstein processes. Finally, the following hypothesis is imposed regarding the potential function in (1)-( 2): (H) The function V : R d 7 ! R is continuous, bounded from below and satis…es V (x) ! +1 as jxj ! +1. An immediate consequence of this hypothesis is that the resolvent of the usual self-adjoint realization of the elliptic operator on the right-hand side of (1)-( 2) is compact in L 2 C R d , the usual Lebesgue space of all square integrable, complex-valued functions on R d . This means that the operator in question has an entirely discrete spectrum (E n ) n2N d , and that there exists an orthonormal basis (f n ) n2N d L 2 C R d consisting entirely of its eigenfunctions (see, e.g., Section XIII.14 in [START_REF] Reed | Methods of Modern Mathematical Physics IV : Analysis of Operators[END_REF]). In the context of this article the convergence of the series X n2N d exp [ tE n ] < +1 (8) for every t 2 (0; T ] is also required. Then, under the above conditions the construction of a Markovian Bernstein process rests on two essential ingredients, namely, Green's function (or heat kernel) associated with (1)-( 2), which satis…es the symmetry and positivity conditions g(x; t; y) = g(y; t; x) > 0 for all x; y 2 R d and every t 2 (0; T ], and the probability measure on R d R d whose density is given by (x; y) = '(x)g(x; T; y) (y); [START_REF] Davies | Heat Kernels and Spectral Theory[END_REF] which satis…es the normalization condition Z R d R d dxdy'(x)g(x; T; y) (y) = N 2 Z R d R d dxdy' 0 (x)g(x; T; y) T (y) = 1: (11) Notice that (11) may be considered as the de…nition of N , and that the inequality in ( 9) is a consequence of two-sided Gaussian bounds for g whose existence follows from the general theory developed in [START_REF] Aronson | Non-negative solutions of linear parabolic equations[END_REF] and further re…ned in Chapter 3 of [START_REF] Davies | Heat Kernels and Spectral Theory[END_REF]. 
Moreover, as a consequence of (H) and ( 8), Green's function admits an expansion of the form g(x; t; y) = X n2N d exp [ tE n ] f n (x)f n (y) (12) which converges strongly in L 2 C R d R d for every t 2 (0; T ] (unless more detailed information about the f n 's or ultracontractive bounds become available, in which case the convergence can be substantially improved, see, e.g., Chapter 2 in [START_REF] Davies | Heat Kernels and Spectral Theory[END_REF]). Thus, in Section 2 the knowledge of g and is used to state a theorem about the existence of a probability space which supports a Markovian Bernstein process Z 2[0;T ] whose state space is the entire Euclidean space R d , and which is characterized by its …nite-dimensional distributions, the joint distribution of Z 0 and Z T and the probability of …nding Z t at any time t 2 [0; T ] in a given region of space. In that section a very simple result regarding the time evolution of Z 2[0;T ] is also proved when considering (1)-( 2) with ( 5)- [START_REF] Chueshov | Long-time behavior of solutions to a class of quasilinear parabolic equations with random coe¢ cients[END_REF]. Section 3 is devoted to the analysis of the function that determines the time evolution of the probability of …nding Z 2[0;T ] in particular two-dimensional geometric shapes that exhibit spherical symmetry in the case of the so-called harmonic potential V (x) = jxj 2 2 ; (13) and for various combinations of the initial-…nal data given above. Finally, a simple Faedo-Galerkin scheme is proposed whose ultimate goal is to allow approximate computations of all the probability distributions involved. An existence result for a class of Bernstein processes in R d As a stochastic process a Bernstein process may be de…ned independently of any reference to a system of partial di¤erential equations, and there are several equivalent ways to do so (see, e.g., [START_REF] Jamison | Reciprocal processes[END_REF]). I shall restrict myself to the following: De…nition. Let d 2 N + and T 2 (0; +1) be arbitrary. An R d -valued process Z 2[0;T ] de…ned on the complete probability space ( ; F; P) is called a Bernstein process if E f (Z r ) F + s _ F t = E (f (Z r ) jZ s ; Z t ) (14) for every bounded Borel measurable function f : R d 7 ! R and for all r; s; t satisfying r 2 (s; t) [0; T ]. In ( 14), the -algebras are F + s = Z 1 (F ) : s; F 2 B d (15) and F t = Z 1 (F ) : t; F 2 B d ; (16) where B d stands for the Borel -algebra on R d . Moreover, E denotes the (conditional) expectation functional on ( ; F; P). The dynamics of such a process are, therefore, solely determined by the properties of the process at times s and t, irrespective of its behavior prior to instant s and after instant t. Of course, it is plain that this fact generalizes the usual Markov property. In what follows an important rôle is played by the positive solution to (1) and the positive solution to (2), namely, u(x; t) = Z R d dyg(x; t; y)'(y) (17) and v(x; t) = Z R d dyg(x; T t; y) (y); (18) respectively. Taken together, ( 1) and ( 2) may thus be looked upon as de…ning a decoupled forward-backward system of linear deterministic partial di¤erential equations, with [START_REF] Roelly | A characterisation of reciprocal processes via an integration by parts formula on the path space[END_REF] wandering o¤ to the future and ( 18) evolving into the past. 
The functions p (x; t; z; r; y; s) = g 1 (x; t s; y)g(x; t r; z)g(z; r s; y) and P (x; t; F; r; y; s) = Z F dzp (x; t; z; r; y; s) (20) with F 2 B d , both being well de…ned and positive for all x; y; z 2 R d and all r; s; t satisfying r 2 (s; t) [0; T ], are equally important as is the probability measure whose density is [START_REF] Davies | Heat Kernels and Spectral Theory[END_REF], namely, (G) = Z G dxdy'(x)g(x; T; y) (y) (21) where G 2 B d B d , which satis…es the normalization condition [START_REF] Erdélyi | Higher Transcendental Functions[END_REF]. The corresponding initial and …nal marginal distributions then read F R d = Z F dx'(x) Z R d dyg(x; T; y) (y) = Z F dx'(x)v(x; 0) and (R d F ) = Z F dy (y) Z R d dxg(x; T; y)'(x) = Z F dyu(y; T ) (y) respectively, as a consequence of ( 17) and [START_REF] Schrödinger | Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique[END_REF]. It is the knowledge of ( 20) and ( 21) that makes it possible to associate with ( 1) and ( 2) a Bernstein process in the following sense: Theorem. Assume that V satis…es Hypothesis (H), that condition (8) holds and that P and are given by ( 20) and [START_REF] Vuillermot | Bernstein di¤ usions for a class of linear parabolic partial di¤ erential equations[END_REF], respectively. Then there exists a probability space ( ; F; P ) supporting an R d -valued Bernstein process Z 2[0;T ] such that the following properties are valid: (a) The process Z 2[0;T ] is Markovian, and the function P is its transition function in the sense that P (Z r 2 F jZ s ; Z t ) = P (Z t ; t; F; r; Z s ; s) for each F 2 B d and all r; s; t satisfying r 2 (s; t) [0; T ]. Moreover, P (Z 0 2 F 0 ; Z T 2 F T ) = (F 0 F T ) (22) for all F 0 ; F T 2 B d , that is, is the joint probability distribution of Z 0 and Z T . (b) The …nite-dimensional probability distributions of the process are given by P (Z t1 2 F 1 ; :::; Z tn 2 F n ) (23) = Z F1 dx 1 ::: Z Fn dx n n Y k=2 g (x k ; t k t k 1 ; x k 1 ) u(x 1 ; t 1 )v(x n ; t n ) for every integer n 2, all F 1 ; :::; F n 2 B d and all t 0 = 0 < t 1 < ::: < t n < T , where u and v are given by ( 17) and ( 18), respectively. (c) The probability of …nding the process in a given region F R d at time t is given by P (Z t 2 F ) = Z F dxu(x; t)v(x; t) (24) for each F 2 B d and every t 2 [0; T ] : (d) P is the only probability measure leading to the above properties. I omit the proof of this theorem, which can be adapted either from the abstract arguments in [START_REF] Jamison | Reciprocal processes[END_REF] or from the more analytical approach in [START_REF] Vuillermot | Bernstein di¤ usions for a class of linear parabolic partial di¤ erential equations[END_REF], and will rather focus on its consequences regarding the time evolution of Z 2[0;T ] . Prior to that some comments are in order: Remarks. (1) Hypothesis (H) and condition (8) are su¢ cient but not necessary for the theorem to hold. However, the advantage of having ( 12) is that such an expansion greatly simpli…es some calculations and also has the virtue of making theoretical results amenable to approximations and computations. I will dwell a bit more on this point in the next section. (2) Bernstein processes may be Markovian but in general they are not. Independently of that they have played an increasingly important rôle in various areas of mathematics and physics over the years. 
It is not possible to give a complete bibliography here, but I will refer instead the interested reader to [START_REF] Jamison | Reciprocal processes[END_REF], [START_REF] Roelly | A characterisation of reciprocal processes via an integration by parts formula on the path space[END_REF] and [START_REF] Vuillermot | Bernstein di¤ usions for a class of linear parabolic partial di¤ erential equations[END_REF] which contain many references describing the history and earlier works on the subject, tracing things back to the pioneering works [START_REF] Bernstein | Sur les liaisons entre les grandeurs aléatoires[END_REF] and [START_REF] Schrödinger | Sur la théorie relativiste de l'électron et l'interprétation de la mécanique quantique[END_REF]. Moreover, Bernstein processes have also lurked in various forms in more recent applications of Optimal Transport Theory, as testi…ed by the monographs [START_REF] Galichon | Optimal Transport Methods in Economics[END_REF] and [START_REF] Villani | Optimal Transport: Old and New, Grundlehren der Mathematischen Wissenschaften[END_REF]. In this regard it is worth mentioning that they are also referred to as Schrödinger processes or reciprocal processes in the literature. (3) The probability measure of a non-Markovian Bernstein process does not have as simple a structure as that given by [START_REF] Vuillermot | Bernstein di¤ usions for a class of linear parabolic partial di¤ erential equations[END_REF]. A case in point is the so-called periodic Ornstein-Uhlenbeck process, which is one of the simplest stationary Gaussian non-Markovian processes that can be viewed as a particular Bernstein process, as was recently proved in [START_REF] Vuillermot | On some Gaussian Bernstein processes in R N and the periodic Ornstein-Uhlenbeck process[END_REF] (see also, e.g., [START_REF] Roelly | A characterisation of reciprocal processes via an integration by parts formula on the path space[END_REF] and the references therein for other analyses of the periodic Ornstein-Uhlenbeck process). In this case the construction of the measure is much more complicated than in the Markovian case, as it involves a weighted average of a sequence of suitably constructed signed measures naturally associated with an in…nite hierarchy of forward-backward linear parabolic equations. Coming back to the main theme of this article, it is interesting to note that the probability of …nding the process at any given time t 2 [0; T ] in an arbitrary region of space is expressed as an integral of the product of u and v through the simple formula (24). This is a manifestation of the fact that the process Z 2[0;T ] is actually reversible and exhibits a perfect symmetry between past and future, a property already built to some extent into the de…nition given at the beginning of this section. It is of course di¢ cult to say more about the time evolution of Z 2[0;T ] unless we know more about the potential function V . However, at the very least the following result holds, which in e¤ect describes a recurrence property of the process in a particular case: Proposition 1. Let Z 2[0;T ] be the Bernstein process associated with (1)-( 2) in the sense of the above theorem, where ' 0 and T are given by ( 5) and ( 6), respectively, and let C a0; 0 = x 2 R d : jx j a 0;j j < 0 ; j = 1; :::; d be the hypercube outside which ' 0 vanishes identically, that is, ' 0 = 0 on F a0 ; 0 = R d n C a0; 0 . Let C a T ; T be de…ned in a similar way. Then P (Z 0 2 C a0; 0 ) = 1 and P (Z T 2 C a T ; T ) = 1: Proof. 
This is an immediate consequence of (24), for P (Z 0 2 F a0 ; 0 ) = Z Fa 0 ; 0 dx'(x)v (x; 0) = 0 and P (Z T 2 F a T ; T ) = Z Fa T ; T dxu (x; T ) (x) = 0. Thus, in this case the process certainly starts its journey within C a0; 0 and ends it within C a T ; T . Since this is true no matter how small 0;T are, that constitutes a generalization of the class of Bernstein bridges constructed in [START_REF] Vuillermot | On some Gaussian Bernstein processes in R N and the periodic Ornstein-Uhlenbeck process[END_REF]. In particular, if a 0 = a T and if T 0 the inclusion C a T ; T C a0; 0 holds, so that the process goes back to the region where it started from with probability one, independently of its unknown whereabouts at intermediary times t 2 (0; T ). These properties and Proposition 1 remain true for all choices of ' 0 , T that vanish identically outside of a given Borel set, for instance for the isotropic version of ( 5)-( 6), namely, ' 0 (x) = 1 jx a 0 j 0 _ 0; T (x) = 1 jx a T j T _ 0; provided the sets C a 0;T ; 0;T are replaced by the d-dimensional open balls B a 0;T ; 0;T = x 2 R d : jx a 0;T j < 0;T of radius 0;T centered at a 0;T . It would be interesting to carry out a numerical simulation in real time of the behavior of the processes generated in this way. The preceding result fails to hold if the initial-…nal data are not of the above form. In the next section I investigate this issue more closely in case the potential function is given by (13). Some new results for the harmonic case The starting point is thus the forward-backward system @ t u(x; t) = 1 2 4 x u(x; t) jxj 2 2 u(x; t); (x; t) 2 R d (0; T ] ; u(x; 0) = ' (x) =N ' 0 (x) ; x 2 R d (25) and @ t v(x; t) = 1 2 4 x v(x; t) jxj 2 2 v(x; t); (x; t) 2 R d [0; T ) ; v(x; T ) = (x) =N T (x) ; x 2 R d : (26) Green's function associated with (25)-( 26) is known to be Mehler's multidimensional kernel g(x; t; y) = (2 sinh (t)) d 2 exp 2 4 cosh (t) jxj 2 + jyj 2 2 (x; y) R d 2 sinh (t) 3 5 (27) where (:; :) R d denotes the usual inner product in R d (see, e.g., the Appendix in [START_REF] Vuillermot | On some Gaussian Bernstein processes in R N and the periodic Ornstein-Uhlenbeck process[END_REF]). Then if ' 0 , T are given by ( 3)-( 4), the solutions ( 17)-( 18) and the integral on the left-hand side of ( 11) can all be computed explicitly since the integrals are Gaussian. For instance, the forward solution reads u(x;t) = N 0 0 cosh(t) + sinh(t) d 2 exp " ja 0 j 2 2 0 # exp " cosh(t) jxj 2 2 sinh(t) + j 0 x+ sinh(t)a 0 j 2 2 0 sinh(t) ( 0 cosh(t) + sinh(t)) # ( 28 ) for every t 2 (0; T ], while the backward solution is obtained from (28) by replacing 0 by T , a 0 by a T and t by T t, respectively. The downside is that these expressions are complicated, cumbersome and in any case unsuited to extract valuable information out of (24) unless particular choices are made for these parameters. For example, if 0 = T = 1 and a 0 = a T = 0, the forward solution (28) and the related backward solution reduce to u(x;t) = N exp " jxj 2 + dt 2 # ; (29) v(x;t) = N exp " jxj 2 + d(T t) 2 # ; (30) respectively, while an explicit computation from [START_REF] Erdélyi | Higher Transcendental Functions[END_REF] gives N = d 4 exp dT 4 9 for the corresponding normalization factor. Therefore, the substitution of these expressions into (24) leads to P (Z t 2 F ) = Z F dxu(x; t)v(x;t) = d 2 Z F dx exp h jxj 2 i for each t 2 [0; T ] and every F 2 B d , so that the probability of …nding the process in any region of space is here independent of time. 
The reason for this independence can easily be understood by means of the substitution of ( 27) and ( 29)-( 30) into ( 23), which …rst leads to the Gaussian law of (Z t1 ; :::; Z tn ) 2 R nd and from there eventually to the covariance E Z i s Z j t = 1 2 exp [ jt sj] i;j for all s; t 2 [0; T ] and all i; j 2 f1; :::; dg, where E denotes the expectation functional on the probability space of the theorem. Therefore, the Bernstein process thus constructed identi…es in law with the standard d-dimensional Ornstein-Uhlenbeck velocity process, so that the choice of ( 3)-( 4) as initial-…nal data corresponds in a sense to an equilibrium situation whereby the law remains stationary (see, e.g., [START_REF] Karatzas | Brownian Motion and Stochastic Calculus[END_REF] for general properties of this and related processes). For instance, if A R1;R2 = x 2R 2 : R 1 jxj < R 2 is the two-dimensional annulus centered at the origin with R 1 0 and R 2 > 0, then P (Z t 2 A R1;R2 ) = exp R 2 1 exp R 2 2 : The situation is quite di¤erent if the system (25)-( 26) is considered with ' 0 given by [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case[END_REF] and T given by ( 4) where T = 1 and a T = 0. In this case u(x;t) = N (2 sinh(t)) d 2 exp " coth(t) jxj 2 2 # (31) and v(x;t) = N exp " jxj 2 + d(T t) 2 # ( 32 ) for the forward and backward solutions, respectively, and furthermore the value of N can again be determined directly from [START_REF] Erdélyi | Higher Transcendental Functions[END_REF]. Indeed the relevant integral is Z R d R d dxdy 0 (x)g(x; T; y) exp " jyj 2 2 # = exp dT 2 by virtue of (27), so that N = exp dT 4 : Therefore, one obtains in particular P Z 0 2 R d n fog = N Z R d nfog dx 0 (x)v(x; 0) = 0 so that the process is conditioned to start at the origin since P (Z 0 = o) = 1: (33) Moreover, for positive times an explicit evaluation from (24) leads to P (Z t 2 F ) = (2 (t)) d 2 Z F dx exp " jxj 2 2 (t) # where the width parameter is identi…ed as (t) = sinh(t) exp [ t] : (34) It is then instructive to consider again the case of Z 2 [0;T ] wandering in the two-dimensional annulus A R1;R2 , and to investigate the way that P (Z t 2 A R1;R2 ) = exp R 2 1 2 (t) exp R 2 2 2 (t) (35 P (Z 0 2 A 0;R2 ) = 1 and the function t 7 ! P (Z t 2 A 0;R2 ) is monotone decreasing on [0; T ], eventually reaching the minimal value P (Z T 2 A 0;R2 ) = 1 exp R 2 2 2 (T ) : (b) If 0 < R 1 < R 2 < 1 one has P (Z 0 2 A R1;R2 ) = 0 (36) and P (Z t 2 A R1;R2 ) > 0 as soon as t > 0. Moreover, if T is su¢ ciently large there exists a t 2 (0; T ) such that the function t 7 ! P (Z t 2 A R1;R2 ) is monotone decreasing for every t 2 [t ; T ] : (c) If 1 R 1 < R 2 one still has (36), but the function t 7 ! P (Z t 2 A R1;R2 ) is monotone increasing throughout [0; T ]. Proof. Statement (a) follows immediately from (33) and (35) for R 1 = 0, as does the very …rst part of (b) since then o = 2 A R1;R2 . Now d dt P (Z t 2 A R1;R2 ) = 0 (t) 2 2 (t) ( (R 1 ; t) (R 2 ; t)) where (R; t) = R 2 exp R 2 2 (t) ; (37) and for any …xed t 2 (0; T ] this function is monotone increasing for R < p 2 (t) and monotone decreasing for R > p 2 (t). Furthermore, (34) and t 7 ! p 2 (t) are monotone increasing and concave with p 2 (t) < 1 uniformly in t. 
Therefore, if $0 < R_1 < R_2 < 1$ and if $T$ is large enough, there exists a $t^* \in (0,T)$ such that $R_1 < R_2 < \sqrt{2\sigma(t^*)} \leq \sqrt{2\sigma(t)}$ for every $t \in [t^*,T]$, which implies the last claim of (b) since then $\theta(R_1,t) - \theta(R_2,t) < 0$. Finally, if $1 \leq R_1 < R_2$ one has a fortiori $\sqrt{2\sigma(t)} < 1 \leq R_1 < R_2$ for every $t \in [0,T]$ so that $\theta(R_1,t) - \theta(R_2,t) > 0$, which implies (c).

A natural interpretation of Statement (a) is that the process leaves the origin as soon as $t > 0$, and tends to quickly "leak out" of the disk $A_{0,R_2}$ when $R_2$ is sufficiently small. Moreover, Statement (b) means that the probability of finding the process in the annulus increases for small times, then reaches a maximal value and eventually decreases for large times when $R_1$ and $R_2$ are sufficiently small, in sharp contrast to Statement (c) where the probability in question is monotone increasing for all times if $R_1$ and $R_2$ are sufficiently large. Finally, the substitution of (27) and (31)-(32) into (23) again determines the projection of the law onto $\mathbb{R}^{nd}$ and, after long algebraic manipulations, the covariance
$\mathbb{E}\left[Z^i_s Z^j_t\right] = \tfrac{1}{2}\exp\left[-(t+s)\right]\left(\exp\left[2(t\wedge s)\right] - 1\right)\delta_{i,j}$
for all $s,t \in [0,T]$ and all $i,j \in \{1,\dots,d\}$. Therefore, the Bernstein process thus constructed is identical in law with the Ornstein-Uhlenbeck process conditioned to start at the origin of $\mathbb{R}^d$. A last example can be provided by choosing $\varphi_0$ and $\psi_T$ both of the form (7) in (25)-(26). In this case one gets
$u(x,t) = N(2\pi\sinh(t))^{-\frac{d}{2}}\exp\left[-\frac{\coth(t)|x|^2}{2}\right]$
and
$v(x,t) = N(2\pi\sinh(T-t))^{-\frac{d}{2}}\exp\left[-\frac{\coth(T-t)|x|^2}{2}\right]$
for the respective solutions, where the exact value of the normalization factor is $N = (2\pi\sinh(T))^{\frac{d}{4}}$. Arguing as in the preceding example one then obtains
$P(Z_0 = o) = P(Z_T = o) = 1$ (38)
so that the process is conditioned to start and end at the origin, thereby representing a random loop in $\mathbb{R}^d$. Moreover, for positive times one still gets from (24)
$P(Z_t \in F) = (2\pi\sigma(t))^{-\frac{d}{2}}\int_F dx\, \exp\left[-\frac{|x|^2}{2\sigma(t)}\right]$
and in particular
$P(Z_t \in A_{R_1,R_2}) = \exp\left[-\frac{R_1^2}{2\sigma(t)}\right] - \exp\left[-\frac{R_2^2}{2\sigma(t)}\right]$ (39)
in the case of the two-dimensional annulus, but with a width parameter now given by
$\sigma(t) = \frac{\sinh(t)\sinh(T-t)}{\sinh(T)}$ (40)
for every $t \in [0,T]$. This function is quite different from (34), and the following result is valid:

Proposition 3. The following statements hold:
(a) If $0 = R_1 < R_2$ one has
$P(Z_0 \in A_{0,R_2}) = P(Z_T \in A_{0,R_2}) = 1.$ (41)
Moreover, the function $t \mapsto P(Z_t \in A_{0,R_2})$ is monotone decreasing on $[0,\frac{T}{2}]$ and monotone increasing on $[\frac{T}{2},T]$, thereby taking the minimal value
$P\left(Z_{\frac{T}{2}} \in A_{0,R_2}\right) = 1 - \exp\left[-\frac{R_2^2}{2\sigma(\frac{T}{2})}\right].$
(b) If $1 \leq R_1 < R_2$ one has
$P(Z_0 \in A_{R_1,R_2}) = P(Z_T \in A_{R_1,R_2}) = 0.$
Moreover, the function $t \mapsto P(Z_t \in A_{R_1,R_2})$ is monotone increasing on $[0,\frac{T}{2}]$ and monotone decreasing on $[\frac{T}{2},T]$, thereby taking the maximal value
$P\left(Z_{\frac{T}{2}} \in A_{R_1,R_2}\right) = \exp\left[-\frac{R_1^2}{2\sigma(\frac{T}{2})}\right] - \exp\left[-\frac{R_2^2}{2\sigma(\frac{T}{2})}\right].$
Proof. While (41) follows from (38), Relation (39) with $R_1 = 0$ leads to
$\frac{d}{dt} P(Z_t \in A_{0,R_2}) = -\frac{\sigma'(t)}{2\sigma^2(t)}\,\theta(R_2,t)$
where $\sigma'(t) \geq 0$ for $t \in [0,\frac{T}{2}]$ and $\sigma'(t) \leq 0$ for $t \in [\frac{T}{2},T]$ according to (40), which implies Statement (a). Statement (b) follows from these properties of $\sigma'$ and an analysis similar to that of Statement (c) in Proposition 2. Indeed, we remark that the curve $\sigma: [0,T] \to [0,+\infty)$ given by (40) is concave aside from satisfying $\sigma(0) = \sigma(T) = 0$, and that it takes on the maximal value $\sigma(\frac{T}{2}) = \frac{\sinh^2(\frac{T}{2})}{\sinh(T)}$ at the mid-point of the time interval. Therefore, the inequalities $\sqrt{2\sigma(t)} \leq \sqrt{2\sigma(\frac{T}{2})} \leq 1$ hold for every $t \in [0,T]$, which implies that (37) is monotone decreasing throughout the time interval as a function of $R$, a consequence of the hypothesis regarding the radii.
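The short Python sketch below is not part of the original article; it merely tabulates the annulus probabilities (35) and (39) for the two width parameters (34) and (40) in the two-dimensional case, and checks numerically the monotonicity behaviour asserted in Propositions 2 and 3. The radii, the horizon T and all function names are arbitrary illustration choices.

```python
import numpy as np

def sigma_started_at_origin(t):
    # Width parameter (34): process conditioned to start at the origin.
    return np.sinh(t) * np.exp(-t)

def sigma_loop(t, T):
    # Width parameter (40): process conditioned to start and end at the origin.
    return np.sinh(t) * np.sinh(T - t) / np.sinh(T)

def annulus_probability(sigma, R1, R2):
    # Two-dimensional case: exp(-R1^2/(2 sigma)) - exp(-R2^2/(2 sigma)), cf. (35) and (39).
    return np.exp(-R1**2 / (2.0 * sigma)) - np.exp(-R2**2 / (2.0 * sigma))

if __name__ == "__main__":
    T = 5.0
    t = np.linspace(1e-6, T, 2001)

    # Proposition 2(b): small radii -> probability rises, peaks, then decreases for large t.
    p_small = annulus_probability(sigma_started_at_origin(t), R1=0.2, R2=0.6)
    # Proposition 2(c): radii >= 1 -> probability is monotone increasing on [0, T].
    p_large = annulus_probability(sigma_started_at_origin(t), R1=1.0, R2=2.0)
    print("Prop 2(b): interior maximum near t =", t[np.argmax(p_small)])
    print("Prop 2(c): monotone increasing:", bool(np.all(np.diff(p_large) >= -1e-12)))

    # Proposition 3(a): for the loop, P(Z_t in A_{0,R2}) is minimal at t = T/2.
    p_disk = 1.0 - np.exp(-0.5**2 / (2.0 * sigma_loop(t, T)))
    print("Prop 3(a): minimum near t =", t[np.argmin(p_disk)], "(expected ~", T / 2, ")")
```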
The above properties of (40) thus show that the Bernstein process of Proposition 3 constitutes a generalization of a Brownian loop, that is, of a particular case of a Brownian bridge (see, e.g., [START_REF] Karatzas | Brownian Motion and Stochastic Calculus[END_REF]). This renders the preceding result quite natural, in that the probability of finding the process in the disk $A_{0,R_2}$ is minimal at the mid-point of the time interval where there is maximal randomness. At the same time, the situation is reversed if the annulus is relatively far away from the origin.

As long as the regions of interest are spherically symmetric, the preceding calculations may be performed in any dimension and not merely for $d = 2$. However, I shall refrain from doing that and rather focus briefly on what to do when the values of the parameters $\varepsilon_{0,T}$ and $a_{0,T}$ are arbitrary, or when other combinations of the above initial-final data are chosen. It is here that an expansion of the form (12) is essential, and I will now show what (12) reduces to in the case of (27). First, the spectral decomposition of the elliptic operator on the right-hand side of (25)-(26) is known explicitly (the operator identifies up to a sign with the Hamiltonian of an isotropic system of quantum harmonic oscillators, see, e.g., [START_REF] Messiah | Quantum Mechanics[END_REF]). Indeed, let $(h_n)_{n\in\mathbb{N}}$ be the usual Hermite functions
$h_n(x) = \left(\pi^{\frac{1}{2}} 2^n n!\right)^{-\frac{1}{2}}\exp\left[-\frac{x^2}{2}\right]H_n(x)$ (42)
where the $H_n$'s stand for the Hermite polynomials
$H_n(x) = (-1)^n\exp\left[x^2\right]\frac{d^n}{dx^n}\exp\left[-x^2\right].$ (43)
Then, it is easily verified that the tensor products $\otimes_{j=1}^d h_{n_j}$, where the $n_j$'s run independently over $\mathbb{N}$, provide an orthonormal basis of eigenfunctions in $L^2_{\mathbb{C}}(\mathbb{R}^d)$ which satisfy the eigenvalue equation
$\left(-\tfrac{1}{2}\Delta_x + \tfrac{|x|^2}{2}\right)h_n(x) = E_n h_n(x)$
for each $n \in \mathbb{N}^d$ and every $x \in \mathbb{R}^d$, where $n = (n_1,\dots,n_d) \in \mathbb{N}^d$ and
$E_n = \sum_{j=1}^d n_j + \frac{d}{2},$ (44)
$h_n = \otimes_{j=1}^d h_{n_j}.$ (45)
The immediate consequences are that (8) holds, and that expansion (12) for (27) takes the form
$g(x,t,y) = \sum_{n\in\mathbb{N}^d}\exp\left[-tE_n\right]h_n(x)h_n(y)$ (46)
where the series is now absolutely convergent for each $t \in (0,T]$ uniformly in all $x,y \in \mathbb{R}^d$. This very last statement follows from Cramér-Charlier's inequality
$|h_n(x)h_n(y)| \leq k^{2d}\pi^{-\frac{d}{2}}$ (47)
valid uniformly in $n$, $x$ and $y$, where $k \simeq 1.086435$ (see, e.g., Section 10.18 in [START_REF] Erdélyi | Higher Transcendental Functions[END_REF] and the references therein). The advantage of having (46) is that the forward solution (17) may now be rewritten in terms of the Fourier coefficients of $\varphi$ and $\psi$ along the basis $(h_n)_{n\in\mathbb{N}^d}$, namely,
$u(x,t) = \sum_{n\in\mathbb{N}^d}\alpha_n\exp\left[-tE_n\right]h_n(x)$ (48)
where
$\alpha_n = N\int_{\mathbb{R}^d} dx\,\varphi_0(x)h_n(x),$ (49)
which in case of Gaussian initial-final data provides a nice representation of (28). In a similar way the backward solution (18) is
$v(x,t) = \sum_{n\in\mathbb{N}^d}\beta_n\exp\left[-(T-t)E_n\right]h_n(x)$ (50)
where
$\beta_n = N\int_{\mathbb{R}^d} dx\,\psi_T(x)h_n(x),$ (51)
so that the normalization condition (11) now reads
$\sum_{n\in\mathbb{N}^d}\alpha_n\exp\left[-TE_n\right]\beta_n = 1.$ (52)
This way of formulating things, in turn, leads to the possibility of constructing a sequence of Faedo-Galerkin approximations to the problem at hand. Thus for any positive integer $N \geq 1$, let $E_N \subset L^2_{\mathbb{C}}(\mathbb{R}^d)$ be the $N^d$-dimensional subspace generated by the $h_n$'s where $n_j \in \{0,\dots,N-1\}$ for each component of $n$. Green's function (46) may then be approximated by its truncation (53) to $E_N$, which in turn provides the approximations (54) and (55) to (48) and (50), respectively. Consequently, various numerical computations and controlled approximations of the probability distributions of interest now become possible. I complete this short article by a simple illustration of this fact stated in Proposition 4 below, whose proof is based on the following result which provides an approximate value for $N$:

Lemma. Let (52) be written as (56), where $\alpha_n = N\hat\alpha_n$ and $\beta_n = N\hat\beta_n$ for every $n \in \mathbb{N}^d$. Then for all $\varepsilon_{0,T} > 0$, $a_{0,T} \in \mathbb{R}^d$, the unique positive solution to (56) is of the form (57), where $c > 0$ is a constant depending only on $\varepsilon_{0,T}$ and $a_{0,T}$. Moreover, with the value (57) in (49) and (51) one gets the corresponding estimates for $T$ sufficiently large.
Proof. It is clear that (57) holds because of (56) since $\hat\alpha_0 > 0$, $\hat\beta_0 > 0$ by virtue of the fact that the eigenfunction $h_0$ associated with the bottom of the spectrum is strictly positive in $\mathbb{R}^d$. Then, the proof that the remaining term satisfies
$N_{0,T}^2\sum_{n\in\mathbb{N}^d,\, n\neq 0}\hat\alpha_n\exp\left[-TE_n\right]\hat\beta_n = O\left(\exp\left[-T\right]\right)$
follows from the fact that the $\hat\alpha_n$'s and the $\hat\beta_n$'s are uniformly bounded in $n$, and from the summation of the underlying geometric series which is made possible thanks to the explicit form (44).

Then, in case of Gaussian initial-final data in (25)-(26) one gets:

Proposition 4. Assume that $\varphi_0$ and $\psi_T$ are given by (3) and (4), respectively, and let $Z_{[0,T]}$ be the Markovian Bernstein process associated with (25)-(26). Then the following statements hold:
(a) For all $F_0, F_T \in B_d$ we have
$P(Z_0 \in F_0, Z_T \in F_T) = \left(4\pi^2\tilde\varepsilon_0\tilde\varepsilon_T\right)^{-\frac{d}{2}}\int_{F_0} dx\,\exp\left[-\frac{1}{2\tilde\varepsilon_0}\left|x - \frac{a_0}{1+\varepsilon_0}\right|^2\right]\int_{F_T} dx\,\exp\left[-\frac{1}{2\tilde\varepsilon_T}\left|x - \frac{a_T}{1+\varepsilon_T}\right|^2\right] + O\left(\exp\left[-T\right]\right)$ (58)
for $T$ sufficiently large, where $\tilde\varepsilon_{0,T} = \frac{\varepsilon_{0,T}}{1+\varepsilon_{0,T}}$, together with the analogous formula (59) for the one-time distributions.
(b) If $\varepsilon_0 = \varepsilon_T := \varepsilon$ and $a_0 = a_T := a$ and if the process $Z_{[0,T]}$ is stationary, the preceding relations reduce to
$P(Z_t \in F) = (2\pi\tilde\varepsilon)^{-\frac{d}{2}}\int_F dx\,\exp\left[-\frac{1}{2\tilde\varepsilon}\left|x - \frac{a}{1+\varepsilon}\right|^2\right] + O\left(\exp\left[-T\right]\right)$ (60)
for $T$ large enough, each $t \in [0,T]$ and every $F \in B_d$, where $\tilde\varepsilon = \frac{\varepsilon}{1+\varepsilon}$.
Proof. From (21) and (22) one has
$P(Z_0 \in F_0, Z_T \in F_T) = N^2\int_{F_0} dx\,\varphi_0(x)\int_{F_T} dy\, g(x,T,y)\,\psi_T(y)$ (61)
where $g$ is given by (46), that is,
$g(x,T,y) = \hat g(x,T,y) + \sum_{n\in\mathbb{N}^d,\, n\neq 0}\exp\left[-TE_n\right]h_n(x)h_n(y),$
with $\hat g$ the contribution of (44) and (45) for $n = 0$. One then obtains the first term: replacing $N$ by $N_{0,T}$ together with the explicit evaluation of these Gaussian integrals gives the leading term in (58). It remains to show that the contribution to (58) coming from the second term on the right-hand side of (61) is exponentially small. Writing momentarily $\tilde g(x,T,y) = \sum_{n\neq 0}\exp\left[-TE_n\right]h_n(x)h_n(y)$ and estimating the absolute value of $\tilde g$ by using (44) and (47), one eventually gets a bound proportional to $\exp\left[-T\right]$ uniformly in all $x,y \in \mathbb{R}^d$ by summing the underlying geometric series as before, where the proportionality constant $c_d$ is positive and depends only on $d$. Therefore, the contribution in question is $O\left(\exp\left[-T\right]\right)$ because of (57), as desired. Finally (58) implies (59), and also (60) under the hypothesis in (b) since the function $t \mapsto P(Z_t \in F)$ given by (24) is then independent of $t$.

Remark. The first term on the right-hand side of (61) corresponds to the minimal choice $N = 1$ in the Galerkin approximation (53), for (44) and (45) with $n = 0$ imply (62). Using once more (44) and (45) with $n = 0$, the corresponding approximation (54) for $t = T$ then reads (63), so that replacing $N$ by (57) and arguing as in the above proofs one eventually gets (64) for $T$ sufficiently large. A similar approximation procedure applies to the backward solution, so that in the end one obtains yet another algorithm to compute (59) since $\hat\alpha_0$ and $\hat\beta_0$ can be determined explicitly in case of Gaussian initial-final data. It would have been difficult to evaluate (64) directly from (24) given the complicated form (28). As a matter of fact, the technique used also works if the data are of the form (5)-(6) since $\hat\alpha_0$ and $\hat\beta_0$ are then easily determined by numerical calculations. More generally, there is an important computational issue about (54) and (55), namely, that of knowing how large one has to choose $N$ as a function of the desired degree of precision to reconstruct $u$ and $v$. As long as error terms of the form $O\left(\exp\left[-T\right]\right)$ are considered satisfactory, the above considerations show that the choice $N = 1$ is sufficient. If not, larger values of $N$ will do. Finally, thanks to an expansion of the form (12), similar Faedo-Galerkin approximation methods may be applied to the forward-backward solutions of (1)-(2) when the potential function satisfies Hypothesis (H) and [START_REF] Chueshov | Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Itô's case[END_REF], or even more general conditions, provided that precise information be available about the spectrum $(E_n)_{n\in\mathbb{N}^d}$ and the corresponding sequence of eigenfunctions $(f_n)_{n\in\mathbb{N}^d}$. The detailed results will be published elsewhere.

Acknowledgements. I am particularly indebted to Prof. W. Petersen and Prof. T. Rivière for having made several visits in Zurich financially possible through funds from the Forschungsinstitut für Mathematik of the ETHZ, where parts of this work were carried out and whose warm hospitality I gratefully acknowledge.
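As a complement (not part of the original text), the following Python sketch illustrates the truncated Hermite expansion in dimension d = 1: it compares partial sums of (48), with the normalization factor N left out, against a direct quadrature of the Mehler kernel (27) applied to a Gaussian initial datum. The function names, the quadrature choice and the parameter values are illustrative assumptions, and the exact normalization of the datum (3) may differ from the unnormalized Gaussian used here.

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval
from scipy.integrate import quad

def h(n, x):
    # Hermite function (42): h_n(x) = (sqrt(pi) 2^n n!)^(-1/2) exp(-x^2/2) H_n(x).
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2.0) / np.sqrt(np.sqrt(pi) * 2.0**n * factorial(n))

def mehler_kernel(x, t, y):
    # One-dimensional version of (27).
    return np.exp(-(np.cosh(t) * (x**2 + y**2) - 2.0 * x * y) / (2.0 * np.sinh(t))) \
           / np.sqrt(2.0 * pi * np.sinh(t))

def coefficient(f, n):
    # Fourier coefficient of f along h_n, cf. (49)/(51) with the factor N left out.
    return quad(lambda x: f(x) * h(n, x), -np.inf, np.inf)[0]

def truncated_forward_solution(phi0, x, t, n_modes):
    # Truncated version of (48) in dimension d = 1, where E_n = n + 1/2 as in (44).
    return sum(coefficient(phi0, n) * np.exp(-t * (n + 0.5)) * h(n, x) for n in range(n_modes))

if __name__ == "__main__":
    eps0, a0 = 1.0, 0.5                                      # illustrative values
    phi0 = lambda y: np.exp(-(y - a0)**2 / (2.0 * eps0))     # Gaussian datum
    x, t = 0.7, 1.3
    reference = quad(lambda y: mehler_kernel(x, t, y) * phi0(y), -np.inf, np.inf)[0]
    for n_modes in (1, 2, 4, 8):
        print(n_modes, truncated_forward_solution(phi0, x, t, n_modes), "reference:", reference)
```

The printed partial sums converge to the quadrature value as the number of modes grows, which is the behaviour the Faedo-Galerkin discussion above relies on.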
39,812
[ "747618" ]
[ "211251", "39398" ]
01471416
en
[ "spi" ]
2024/03/04 23:41:46
2015
https://hal.science/hal-01471416/file/doc00021540.pdf
Kiswendsida Abel Ouedraogo, Julie Beugin, El Miloudi El Koursi, Joffrey Clarhaut, Dominique Renaux, Frédéric Lisiecki

Harmonized methodology for Safety Integrity Level allocation in a generic TCMS application

ABSTRACT: This article presents a generic methodology for SIL allocation to railway rolling stock safety-related functions to solve the SIL concept application issues. This methodology is based on the flowchart formalism already used in the CSM European regulation. It starts with the use of quantitative safety requirements, particularly the Tolerable Hazard Rate (THR). The THR apportioning rules are applied. On the one hand, the rules are related to logical combinations of safety-related functions preventing hazard occurrence. On the other hand, to take into account technical conditions (last safety weak link, functional dependencies, technological complexity, etc.), specific rules implicitly used in existing practices are defined for readjusting some THR values. The SIL allocation process, based on apportioned and validated THR values, is finally established. Generic "Passengers doors" and "Emergency brake" sub-systems are examined in terms of SIL allocation to the safety-related functions, and the methodology is validated by compliance with the safety requirement objectives.

1 INTRODUCTION

Rail systems safety remains a major concern in the railway domain, where accidents can result in significant damage to the system and to the environment and cause many victims (e.g. the railway accidents of summer 2013 in France, in Spain and in Switzerland). In Europe, the design and operation conditions of these systems are now governed by the rules described in legal texts (directives, regulations, decrees, etc.) and by a normative reference that requires system safety demonstration. The reference documents are composed of specific European standards (EN 50126/8/9, soon to be replaced by the single multi-part EN 50126 standard) derived from the generic functional safety standard IEC 61508 (2011); these describe the safety aspects to be applied to the different levels of the rail system life cycle. Railway standards recommend the application of the risk management process upstream of the rail system design. It involves assigning safety levels in terms of SILs (Safety Integrity Levels) to most system parts. A given SIL, between SIL 0 and SIL 4, is linked to qualitative and quantitative requirement specifications for a safety-related function that are defined according to the random and the systematic failures related to the E/E/PE safety systems that perform the function (prEN 50126 2012 - part 2, §10.2). SIL 4 corresponds to the most demanding requirements to counteract the hazard causes arising from these two kinds of failures. However, several sector safety standards derived from IEC 61508 differ in their derivation of SILs, which leads to misunderstandings of the SIL allocation process. A state of the art and consultations have been completed to explain some practices employed in the railway domain compared to other domains, in order to clarify, for railway actors, the SIL allocation process, especially concerning TCMS (Train Control and Monitoring Systems) that manage all the hardware and software parts inside trains (Ouedraogo et al. 2014). Specific rules implicitly used for the SIL allocation process in TCMS have been identified, formalized and integrated in the proposed methodology. Firstly, this paper will present how SILs are used within the harmonized risk management process for railway systems in the European Union. Then, the methodology aiming to harmonize SIL allocation in a generic TCMS application is described in detail. The "Passengers doors" and "Emergency brake" subsystems are retained as application studies and the obtained SIL allocation results are presented.

2 SIL USE IN THE HARMONIZED RISK MANAGEMENT PROCESS

Railway safety is part of the various European Union texts recommending a unified European rail network in which future transportation systems will be interoperable (directive 2004/49/EC amended by directive 2008/110/EC for railways safety and harmonization principles of safety approval; decision 2009/460/EC on the common safety method for assessing safety achievement). Member states have developed their own rules and safety standards mainly at national level based on national technical and operational concepts. Differences exist and can affect the optimum functioning of rail transport in the EU and the approval of a system by some National Safety Authorities (MODSafe 2010; MODURBAN 2006). Some steps have been taken to support the safety process harmonization, such as: the adoption of subsystem Technical Specifications for Interoperability (TSI), the definition of the Common Safety Targets (CST) and the definition of the Common Safety Method (CSM). The unification of railway methods and safety objectives continues with the establishment of CSM Design Targets for technical systems (CSM-DT). The CSM regulation (402/2013/EU) defines a harmonized and generic risk management process to be applied to new rail systems in agreement with the EN 5012x standards, or to systems with a significant change that has an impact on safety. After the definition of the system under assessment, the risk management process, a global iterative process, is depicted in the appendix of regulation 402/2013/EU. In this process, SILs can specify safety requirements for safety-related functions, given the conclusions of the risk analysis and evaluation that derive global safety objectives associated with hazards, some objectives being defined in terms of Tolerable Hazard Rate (THR). Then, the ability of the function performed by a safety-related system to comply with the SIL must be validated. Operating procedures, testing and maintenance must also comply with the requirements of the SIL. Across sectors, there are different methods to allocate SILs depending on the standard in use, national practices and regulations, the project's and operator's methods in use, or the available data (Rouvroye 2001, Smith & Simpson 2004, MIL-STD-882 E 2012, IEEE 1012 2012, IEC 62061 2005, IEC 61511 2003). Those most employed in the railway domain are the well-known risk matrix and the risk graph, even if they are mainly used to derive safety requirements in general. The methodology for SIL allocation presented hereafter is dedicated to railway rolling stock safety-related functions and aims to solve the SIL concept application issues. Each step of the methodology is illustrated by TCMS examples. Particular attention is drawn to the fact that this methodology should fit into the context of European regulation harmonization, especially the CSM regulation risk management process. SIL associated measures allow specific safety requirements for E/E/PE sub-systems in the CSM process to be laid down.

3 METHODOLOGY FOR SAFETY INTEGRITY LEVELS ALLOCATION

The generic methodology is based on the flowchart formalism already used in the CSM regulation. It is principally dedicated to allocating SILs when the explicit risk acceptance principle is used, SIL allocation being direct for the other principles (codes of practice, use of a reference system).
It applies also to rolling stock safety-related functions rather than signalling functions even if the principles are still applicable to the latter. Indeed, for a signalling system that intervenes as final barrier against the rolling stock device failures, the allocation process is often direct by setting the highest SIL. For a rolling stock that combines different types of functions whose failures indirectly lead to risks, the methodology has to handle complex functional interactions. The methodology includes steps based on practical rules and hypotheses to be tested, with the aim of an effective application. It starts with the use of quantitative safety objectives associated to hazard situations, particularly THR as they are considered in many analyses and as the regulation CSM-DT are quantitative. One THR objective is declined to apportioned THRs to functions whose failures lead to a given identified hazardous situation. Even if the initial THR objective is quantitative, it is also recognized to set specifications on the integrity of random and systemic failures and then SIL. The methodology is illustrated by the overview in Figure 1. This is a macro view highlighting two main processes detailed subsequently with examples. In the process 1, the THR apportioning rules are applied to the safety-related functions. On the one hand, these rules are based on the logical combinations of these functions. On the other hand, to take into account technical conditions (last safety weak link, functional dependencies, technological complexity, etc.), specific rules implicitly used in existing practices and that the paper makes explicit, are defined for readjusting some THR values. SIL allocation based on apportioned and validated THR values, are finally established in process 2. The methodology requires the following input data based on a complete functional analysis: -The list of hazardous situations for the considered system (examples of generic hazards covering standard railway operations are listed in (ERA, 2009 -annex C17); -The list of safety related-functions directly or indirectly leading to hazards; -The list of function failure combinations (scenarios) leading to each hazard and functional failures leading directly to a risk of death; The risk criteria associated to hazards (i.e., maximal THR) or to functions (as CSM-DT). The external barriers to reduce the risk of the system (prevention barriers against accident or protection barriers against damages) are not included in the methodology (e.g., external technical systems, human factors, operational rules). Indeed, the THR objectives associated to hazardous situations are considered already taking into account these external barriers. Note that the considered safety-related functions, i.e. the functions whose failures affect the system safety (e.g., open the doors, maintain the speed), include the safety functions, i.e. the functions that have for primary role to reduce risks (e.g., control the speed, lock the doors) and contribute to the implementation of technical safety barriers (physical or non-physical means reducing the hazard frequency/potential accident caused by the hazard/the severity of potential accidents caused by the hazard). Figure 2 shows the detailed flowchart of the generic SIL allocation methodology for safetyrelated functions. Before detailing the application of this flowchart through 2 TCMS examples, the required input data for these examples are defined. 
The generic rolling stock subsystems considered

The choice of the "Passenger doors" subsystem as an application study is motivated by its complexity. Generic safety-related functions encountered in the "Passenger doors" subsystem are presented within a Function Analysis System Technique (FAST) diagram in Figure 3, with functions taken from (EN 15380-4 2013, EN 62290-2 2012, TSI LOC&PAS 2013, EN 14752 2006). The "Emergency brake" subsystem is also considered in this study. Its main generic safety-related functions are (EN 15380-4 2013):
- Acquire emergency brake request and its three subfunctions: acquire emergency brake request triggered by the driver, by automatism or by the passengers;
- Operate the emergency brake and its two subfunctions: operate the emergency brake triggered by the driver or by automatism;
- Traction request by emergency brake;
- Isolate emergency devices;
- Execute emergency brake.
The first sub-system, the "Passenger doors" subsystem, is considered without movable step management (to reduce the gap between vehicle and platform), as is the case for most metro systems. It has the following functional characteristics:
- Automatic opening/closing;
- Obstacle detection interrupting the closure and leading to the doors opening;
- Both visual and acoustic sign/alarm of the imminent door closing and indicating abnormal conditions;
- Indication of the "doors closed and locked" status allowing the train departure; during the train route, the doors must remain closed and locked;
- In case of technical incident or accident, the door unlocking and opening functions are ensured by operating a manual device ("unlocking handle"). An accidental door unlocking during the train route triggers the emergency brake to stop the train.
A list of generic hazardous situations related to the "Passenger doors" subsystem or to the "Emergency brake" subsystem can be drawn up by considering functional failures and taking into account the context (e.g., train in station, off station) and the areas (e.g. wrong side of train).

Lists of generic hazardous situations

With the identified generic safety-related functions of the "Passenger doors" system (see Fig. 3), lists of hazardous situations can be established based on the functions and associated sub-functions contributing to the considered situation. Table 1 presents a list of generic hazardous situations related to the "Emergency brake" subsystem. The combinations of functions and their associated sub-functions (see Fig. 3) whose functional failures lead to each hazardous situation are identified by the fault tree method. Then, the developed methodology describes how each safety objective is apportioned in terms of THR to these safety-related functions and their associated sub-functions.

Table 1. Generic hazardous situations related to the "Emergency brake" sub-system.
Hazardous situation by type of accident — Safety objective
Collision/derailment (leading to one (critical) or more deaths (catastrophic)):
- Applies to units fitted with a cab (brake command): after activation of an emergency brake command, no deceleration of the train due to failure in the brake system (complete and permanent loss of the brake force) — THR ≤ 10⁻⁹/h
- Applies to units equipped with traction equipment: after activation of an emergency brake command, no deceleration of the train due to failure in the traction system (Traction force ≥ Brake force) — THR ≤ 10⁻⁹/h
- After activation of an emergency brake command, the stopping distance is longer than the one in normal mode due to failure(s) in the brake system — THR ≤ 10⁻⁷/h

Process 1 for THR apportionment

Given the fault trees associated with each hazardous situation, process 1 for THR apportionment can start. It comprises 4 phases reiterated for each tree:
- The allocation of the THR objective at the top of the fault tree;
- The apportionment of the THR objective to safety-related functions based on Boolean logical combination rules;
- Then, some "apportioned THR" modifications based on specific rules;
- The "apportioned THR" analysis and quantitative validation.
These 4 phases are described and illustrated by examples related to both TCMS subsystems considered. The "Fault Tree" module of the GRIF software is used for the representation and calculation process.

Allocation of the THR objective

For each hazardous situation, a safety objective is set in terms of THR. Then these THR objectives (Cf. examples in Tables 1 & 2) are reported to the fault tree top event.

Apportionment of the THR objective based on Boolean logical combination rules

A THR objective is apportioned to the functions of the fault tree through logic "OR/AND" gates to obtain apportioned THRs. The use of Boolean logical apportionment rules is conditioned by the fact that the functions are independent and that the THR values are very small compared to 1. The first condition allows the use of elementary probability laws associated with "OR/AND" gates instead of conditional probabilities. The second condition allows the use of rates in the same way as probabilities. Dependent functions are allocated an identical THR. An independence test (Cf. process 1.1 in Fig. 1 & 2) must be performed and the associated sub-process applied. "Level i" denotes the set of all "OR/AND" gates and the associated functions Fij of the fault tree. The integer variable j is used to browse all functions Fij associated with the considered level i. For a given level i, we define the number of "branches" (a set of functions and all sub-elements, i.e. gates and associated sub-functions) as equal to the number of functions Fij associated with this level's gate. In Figure 4, we have three functions F11, F12 and F13, and therefore three branches (k = 1, 2 and 3), each composed of a function and of the gates and sub-functions associated with it. Two safety-related functions are independent if they control the same hazard, but each of them performs its control autonomously, no matter whether the other is present or not (prEN 50126-2 2012). The independence test verifies that the "basic events" sub-functions associated with each function at a given level i are not repeated in the "basic events" sub-functions of the other branches of the same level. The THR Top/Down apportionment rules are then as follows:
a. If there is only one function at the lower level, the immediate top level THR is reported. Example: functions 18 & 18.1, or hazard & function 17 in Figure 5;
b. Dependent functions must have identical THR. If all functions at a given level are dependent following the independence test, the immediate top THR is reported to each function; however, this THR value can then be further apportioned to sub-functions subject to their independence demonstration. Example: in Figure 5, function 14 THR is reported to dependent functions 14.1 & 14.4 as they share the repeated functions 14.5 & 14.6 (encircled events);
c. For an "OR" gate, if the sub-functions Fij are independent following the test, the given THR is apportioned equally to each independent sub-function (prEN 50126-2 2012). Example: in Figure 5, function 17 THR is apportioned equally to the independent sub-functions 14 & 18.
For an "AND" gate, if the sub-functions Fij are independent following the test, the THR apportionment has to take into account the functions' failure "Safe Down Time" (SDT); the "Safe Down Rate" is equal to SDR = 1/SDT. For n independent functions, the THR is apportioned using the formula that combines the individual THR values with the corresponding SDR values.

After this phase of THR apportionment based on Boolean logical combination rules, some "apportioned THR" must be modified based on more specific rules.

Apportioned THR modifications based on specific rules

Specific rules, taken from the standard prEN 50126-2 (2012) and from consultations with some railway organisations, involve modifying already apportioned THRs. From the fault tree top event, all the functions Fij are examined level by level in order to modify their THR based on the specific rules as follows:
d. For a function whose technical implementation is already fixed (e.g., commercial off-the-shelf COTS, technical conditions in use on a railway network, etc.), the THR is modified based on the available feedback or on reliability data (e.g., from the FIDES guide) associated with a given technical solution implementing a safety-related function. The known rate can be reported to the function (Cf. rule at points #17 & 18 on Fig. 1). Example: the compatibility between all rolling stocks and the French national railway network requires defining technical implementation conditions in advance.
e. For a function subject to a strong safety constraint (example: a function whose failure leads directly to the hazard, signalling functions, braking system functions, etc.), a more constraining THR should be allocated based on safety requirements (Cf. points #19 & 20 on Fig. 1).
f. For functions or sub-functions repeated in different branches of the same fault tree, the apportioned THR must be identical (Cf. points #21 & 22 on Fig. 1). Example: in Figure 5, the two repeated "basic events" sub-functions 14.5 & 14.6 have their THR set to 2.5×10⁻⁸/h;
g. For functions appearing in the fault trees of other hazards, the minimum THR (THRmin) among the THRs is reported to each function; a tracking procedure (process 1.2) based on a breadth-first search algorithm for repeated functions has been defined (Cf. rule at point #23 on Fig. 1).
An analysis needs to be done in order to validate the THR apportionment process.

THR apportionment analysis and quantitative validation

After the apportioned THR modifications based on the specific rules, a quantitative Down-Top analysis is completed to verify compliance with the hazard safety objectives. Example: for the hazardous situation "After activation of an emergency brake command, the stopping distance is longer than the one in normal mode due to failure(s) in the brake system", the safety objective of 10⁻⁷/h is first apportioned equally to the 3 independent sub-functions based on the logical combination rules through the "OR" gate. But functions n° 3 & 5 are repeated in the fault trees of the two other hazardous situations related to the "Emergency brake" sub-system, which carry stronger safety constraints; thus the most restrictive THR (3.33×10⁻¹⁰/h) from the other fault trees is reported to these functions (specific rule d). Therefore the quantitative Down-Top analysis allows function n°1 to be set to THRmax = 9.93×10⁻⁸/h for a maximum constraint relaxation on this function and its associated subfunctions, with compliance with the hazard safety objectives (Cf. Fig.
7). The constraint relaxation from the THR apportionment validation allows a function to be set at a less restrictive SIL through process 2 for SIL allocation described below.

Process 2 for SIL allocation

The SIL allocation to safety-related functions is set in principle by a THR-to-SIL correspondence (Cf. table A1, EN 50129 2003, appendix A); but for some complex functions, including their technical realization, some specific rules must be taken into account in order to modify the allocated SIL (Cf. process 2 presented in Fig. 1). For the fault tree of Fig. 7 with validated THR apportionment, the application of process 2 for SIL allocation gives the results summarized in Table 3. Functions 3 & 5 have quantitative requirements more demanding than 10⁻⁹ [h⁻¹]; therefore, SIL 4 will be allocated to these functions in combination with other technical or operational measures.

4 CONCLUSION

Functional safety standards require that a SIL be allocated to safety-related functions, but these standards differ in their derivation of SILs, resulting in misuse of the concept. Based on a state of the art of SIL use in different domains and on consultations, a SIL allocation methodology is proposed and detailed with examples for an application to a generic TCMS rolling stock. This paper clarifies the specific rules that are implicitly used to guide the THR apportionment (process 1) and then the SIL allocation process (process 2). The "Passengers doors" and "Emergency brake" subsystems are retained as application studies; the proposed methodology allows the THR apportionment to all fault tree functions and the SIL allocation. Future research will concentrate on the need for feedback from the already consulted organisms for improved guidance in terms of our generic SIL allocation methodology development.

Figure 1. Overview of process 1 & 2: THR apportionment & SIL allocation
Figure 2. Flowchart of the generic SIL allocation methodology
Figure 4. Setting levels and branches for the functions independence test
Figure 6. THR apportionment through "AND" gate illustration

K.A. Ouedraogo (1,2), J. Beugin (1,2), E.-M. El-Koursi (1,2), J. Clarhaut (3), D. Renaux (3) & F. Lisiecki (4)
(1) Univ. Lille Nord de France, F-59000 Lille, France
(2) IFSTTAR, COSYS, ESTAS, 20 Rue Elisée Reclus - BP 70317 - 59666 Villeneuve d'Ascq Cedex, France
(3) LAMIH, UMR CNRS 8201, University of Valenciennes and Hainaut-Cambresis, Le Mont Houy - F-59313 Valenciennes Cedex 9, France
(4) EPSF, Regulations Directorate - Rules and Standards Units, 60 Rue de la Vallée CS 11758, 80017 Amiens Cedex 1, France
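The following Python sketch is not part of the paper; it only illustrates, in simplified form, some of the Process 1 apportionment rules stated above (equal split through an "OR" gate after an independence test, identical THR for dependent branches, and keeping the most restrictive THR for functions repeated across fault trees). The gate structure, function names and numerical values are invented for the example, and neither the "AND"-gate/SDT formula of the standard nor the quantitative Down-Top relaxation step is reproduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    name: str
    gate: Optional[str] = None          # "OR", "AND" or None for a basic function
    children: List["Node"] = field(default_factory=list)

def basic_functions(node):
    if not node.children:
        return {node.name}
    out = set()
    for child in node.children:
        out |= basic_functions(child)
    return out

def independent(children):
    # Independence test of process 1.1: no basic sub-function repeated across branches.
    seen = set()
    for child in children:
        basics = basic_functions(child)
        if seen & basics:
            return False
        seen |= basics
    return True

def apportion(node, thr, allocation):
    # Rule g (simplified): keep the most restrictive THR if the function already has one.
    allocation[node.name] = min(thr, allocation.get(node.name, float("inf")))
    if not node.children:
        return
    if node.gate == "OR" and independent(node.children):
        share = thr / len(node.children)   # rule c: equal split through an "OR" gate
    else:
        share = thr                        # rule b: dependent branches keep the parent THR
    for child in node.children:
        apportion(child, share, allocation)

if __name__ == "__main__":
    # Hypothetical fragment of an "Emergency brake" tree, not the one of the paper.
    tree = Node("Hazard: longer stopping distance", "OR", [
        Node("F1 Acquire emergency brake request"),
        Node("F3 Traction request by emergency brake"),
        Node("F5 Execute emergency brake"),
    ])
    # THRs already imposed by other fault trees (rule g).
    allocation: Dict[str, float] = {"F3 Traction request by emergency brake": 3.33e-10,
                                    "F5 Execute emergency brake": 3.33e-10}
    apportion(tree, 1e-7, allocation)
    for name, value in allocation.items():
        print(f"{name}: THR = {value:.2e} /h")
```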
These values are taken from objectives that are available at the time of this work, i.e. objectives taken from French regulations (SAM -Spécifications d'Admission Matériel -approvals specifications for rolling stock) or from European TSI (2013, related to the rolling stock -locomotives and passenger).Traction authorized with several doors not closed but THR≤10 -7 reported erroneously closed secured Possibility to open on request (excluding emergency THR≤10 -7 opening) a door in inappropriate situations (train stop in platform side or in wrong side) Table 2. Generic hazardous situations related to "Passenger doors" subsystem __________________________________________________ Hazardous situation by type of accident __________________________________________________ Safety objective Fall of passengers or collision (train fouling the gauge) Several doors are opened in inappropriate situations THR≤10 -9 (train running, one or two sides of the train) Several doors are opened in inappropriate situations THR≤10 -5 (train stop at the platform) Wedging Inappropriate closure without imminence phase Table 3 . 3 SIL allocation to safety-related functions __________________________________________________ Functions/ sub-functions __________________________________________________ THR(/h) SIL 1. Acquire emergency brake request >10 -8 3 1.1 Acquire emergency brake request >10 -8 3 triggered by the driver 1.2 Acquire emergency brake request >10 -8 3 triggered by automatism 1.3 Acquire emergency brake request >10 -8 3 triggered by the passengers 3. Traction break request by emergency brake >10 -10 4 5. Execute emergency brake __________________________________________________ >10 -10 4 Draft Regulation LOC&PAS TSI. 2013. Technical Specification for Interoperability relating to the 'rolling stock -locomotives and passenger rolling stock' subsystem of the rail system in the European Union. EN 14752. 2006. Railway applications -Bodyside entrance systems. CENELEC. EN 15380. 2013. Railways applications -Classification system for railways vehicles -Part 4: Function groups. CENELEC. EN 50129. 2003. Railways applications -Communication, signaling and processing systems -Safety related electronic systems for signaling. CENELEC. EN 62290-2. 2012. Railway applications -Urban guided transport management and command/control systems -Part 2: Functional requirements specification. CENELEC. ERA, 2009. Collection of examples of risk assessments and of some possible tools supporting the CSM Regulation. IEC 61508. 2011. Functional safety of E/E/PE safety-related systems. IEC 61508-1 to 7 IEC 61511. 2003. Functional safety -Safety instrumented systems for the process industry sector. IEC 62061. 2005. Safety of machinery -Functional safety of electrical, electronic and programmable control systems for machinery. IEEE 1012. 2012. IEEE standard for system and software verification and validation. IEEE-SA Standards Board. MIL-STD-882 E. 2012. Department of Defense Standard Practice -System Safety. MODSafe 2010. WP 4 -D4.1 State of the art analysis and review of results from previous projects. Modular Urban Transport Safety and Security Analysis. MODURBAN 2006. WP23 -D 86. Safety Conceptual Approach for functional and technical prescriptions. Modular Urban guided Rail systems. Ouedraogo K.A., Beugin J., El-Miloudi E.-K., Clarhaut J., Renaux D., Lisiecki F. 2014. Allocation rules of Safety Integrity Levels in a generic TCMS application. FORMS-EU on the Common safety method for risk evaluation and assessment. Rouvroye J. L. 2001. 
Enhanced Markov Analysis as a method to assess safety in the process industry. Beta Research School for Operations Management and Logistics. Technische Universiteit Eindhoven. The Netherlands. Smith D. J. and Simpson K. G. L. 2004. Functional safety: a straightforward guide to IEC 61508 and related standards. FORMAT, Braunschweig, Germany, septembre. prEN50126. 2012. Railways applications -The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS). CENELEC standard project Part 1 to 5. Regulation 402/2013/ OUEDRAOGO, Kiswendsida Abel, BEUGIN, Julie, EL KOURSI, El Miloudi, CLARHAUT, Joffrey, RENAUX, Dominique, LISIECKI, Frédéric, 2015, Harmonized methodology for Safety Integrity Level allocation in a generic TCMS application, ESREL 2015 -European safety and reliability conference, Zürich, SUISSE, 2015-09-07, 8p
28,945
[ "1110519", "890516" ]
[ "222119", "222119", "222119", "1303", "485982", "485982" ]
01471797
en
[ "info" ]
2024/03/04 23:41:46
2013
https://hal.univ-reunion.fr/hal-01471797/file/proceedings-4th-enollss13.pdf
Noel Conruyt Véronique Sébastien Didier Sébastien Olivier Sébastien David Grosser URBAN AND TERRITORIAL INNOVATION WITH LIVING LABS 1. Semiotic Web and Sign management as new paradigms for Living Labs in Education-Applications in natural and cultural heritage of insular tropical islands Keywords: Living Lab, Semiotic Web, Sign management, e-service, Creativity Platform, education Introduction In the context of sustainable development of insular tropical islands, and more specifically for sustainable education in the South West of Indian Ocean, data and knowledge management of specialists of natural or cultural diversity is at the heart of designing new ICT services. For example, the objectives of these e-services are to manage biodiversity and musical information on the Web, in order to preserve insular tropical islands common heritage. But the method of building data and knowledge bases is moving towards more Open, Inclusive and Smart approaches for a new 2020 Horizon. Open was initially inspired by EU (INSPIRE directive) and is characterized by opening public databases, for them to be enhanced by companies in new useful e-services for citizens. As such, Web Services are used for mutual inter-operability of databases. Inclusive is related to the different types of people that can participate to data and knowledge creation, i.e. experts, managers, stakeholders, amateurs who wish to involve themselves in useful e-services for the benefit of the community. Social Web is thus a means to connect people and facilitate communications between them in the common society. Smart is a sort of collective intelligence where digital and structured knowledge is used to answer more efficiently to complex problems that have been formalized in ontologies. This decision help solution brought by Semantic Web is the third technological response for being sustainable in the UE worldview. Problem But for us, it lacks another dimension to reach the knowledge society rather than the knowledge economy for sustainable development. This can be termed Desirable, which is the first spirit dimension of such a digital ecosystem that is linked to information search. Desirable is at the root of human motivation to make actions in a certain direction. It is a psychological process (volition) that is anchored in living beings that are immersed in their milieu (umwelt). What is desirable today for young people who are digital natives? It seems that Immersive Web would be the fourth criterion to this new 2020 Horizon, because a lot of people play with video games today. But in an education perspective anchored on a territory such as in Reunion Island, the adequate answer would be to develop game-based learning e-services that are altogether desirable, open, social and inclusive. The New Media Consortium in its Horizon 2013 report pointed out the next challenges in education for the next five years [Johnson et al., 2013], learning games being expected to be adopted in two or three years. This is why we developed the Wisdom project (Wide Immersive Solution for Data Object Model) that makes use of Semiotic Web and Sign management as new paradigms for Teaching and Learning by Playing in the Future Web. Semiotic Web and Living Labs The problem that we have to solve in our research team is how to develop sustainable e-services that are really used for involving citizens in the preservation of their natural and cultural heritage (action research). 
It seems that Living Labs with their user-centred design methods are the best answer for co-designing these solutions with motivated end-users, i.e. lead users. So in Reunion Island, we instantiated our University of Reunion Living Lab in Teaching and Learning to tackle this problem, each of co-authors being a Ph.D. lead user specialized in one of the four different dimensions for biodiversity teaching (corals and forests) and instrumental e-learning of music (guitar and piano). We then found a new paradigm called the Semiotic Web2 that combines social, semantic and immersive Web services (see Figure 1), in order to put human beings at the centre of innovative technologies. Living Lab (LL) is the overall conceptual frame that stresses on political and methodological principles for user-driven open innovation. On more pragmatic, scientific and technical aspects, we conceived a LL method based on Sign management and a tool, the Creativity Platform, used for co-designing e-services iteratively with the whole community users [Conruyt, 2010]. Sign management Sign management is the new ecosystem of knowledge management that we want to promote on our Creativity Platform. For making e-services with people and not only with specialists, a more concrete vision of cognition is required. Sign management is a solution for managing living knowledge, which is bi-directional between teachers and learners. It stresses the importance on the sharing of subjects' interpretations, i.e. subjective know-how of end-users, rather than on the transmission of fossilized objects, i.e. explicit knowledge of experts found in documents or books. This new framework tries to give sense to shared information for all the users (specialists, endusers, lead users) acting in their communities. The notion of Sign is more central than Knowledge for our purpose. It is composed of Data (object), Information and Knowledge. A Sign is the interpretation of an object by a subject at a given time and place, which takes account of its form (Information), its content (Data) and its sense (Knowledge). What is exchanged on a support between subjects is called Information and this digitized codification can be managed. This makes our Sign management ecosystem a tetrahedron model (cf. Figure 1) that is more involved in concrete life with end-users. It emphasizes the sign-ification of objects by different subjects (i.e. sobjects) by allowing them to show their interpretations of objects with multimedia (audio, video) annotated in textual descriptions. Signification or semiosis is the key psychological process that makes sense for practising usage based research and development with people by sharing data, information and knowledge [Conruyt, 2013]. Conclusion Social Semantic Immersive and Service Web form the Semiotic Web by showing know-how (human performances) on top of written and formalized knowledge (machine representations). This endeavour matches recommendations of EU for an open, smart and inclusive innovation pragmatic action. The last pillar of such a vision with ICT is to render this pathway for Future Web desirable. A Semiotic platform such as Wisdom is thus an objective that should not be missed in the frame of our Living Lab methodology for a better education with people. Dauphine in machine learning and symbolic-numeric data analysis applied to the knowledge management of specialists in biology. Noël Conruyt is also an engineer in agronomy and a classical guitar player from the Regional Music Conservatoire of Reunion Island. 
Thanks to these other skills, he wants to study computer science through the eyes of an end-user of such domains.

Figure 2: the four dimensions of Semiotic Web: Immersive Social Semantic Service

This term comes from Biosemiotics [Sebeok, 1992], a science that started by studying semiotic phenomena in animals and then in other living creatures.
7,411
[ "176983", "12582", "969032", "6169", "13525" ]
[ "54305", "54305", "54305", "54305", "54305" ]
01472227
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472227/file/978-3-642-40352-1_13_Chapter.pdf
Giovanni Miragliotta email: [email protected]@mail.polimi.it Fadi Shrouf Using Internet of Things to improve eco-efficiency in Manufacturing: a review on available knowledge and a framework for IoT adoption Keywords: Internet of Things, eco-efficiency, framework Green manufacturing and eco-efficiency are among the highest priorities of decision makers in today's manufacturing scenario. Reducing the energy consumption of production processes can significantly improve the environmental performance of the human activity. This paper aims at investigating, according to currently available technologies and to their short-term feasible evolution paths, how the Internet of Things (IoT) paradigm could actually be implemented to increase efficiency in energy consumption, and reducing energy consumption cost. Introduction The manufacturing sector is one of the largest energy consumer, estimated at more than 31% of global energy consumption [1]. Relevant energy savings are expected to be achievable both from increasing the energy efficiency of production and logistic processes as well as in innovative energy monitoring and management approaches [START_REF] Weinert | Methodology for planning and operating energy-efficient production systems[END_REF]. In this scenario, emerging technologies such as IoT paradigm are believed to play a lead role in increasing energy efficiency, working at different levels. More specifically, IoT could play such a role first by increasing the awareness of energy consumption patterns (real time data acquisition level), and then by improving local (single machine) or global efficiency (multiple machines) by decentralizing data elaboration or even actuation decisions. In this regard, the paper is arranged in two sections. The first section is dedicated to a literature review covering energy efficient management, and IoT paradigm. Relying on this knowledge, the second section is devoted to present a framework for IoT adoption when pursuing energy efficiency targets in manufacturing. Some concluding remarks are drawn to address future research. 2 Literature review Energy Management and Monitoring Many methods and tools have been used for reducing energy consumption, such as energy monitoring tools, process modeling, simulation and optimization tools, process integration, energy analysis, and decision support tools, etc. [START_REF] Muller | An energy management method for the food industry[END_REF]. Often energy reduction approaches in industry are related to lean manufacturing concepts, and rely on empirical observation as in [START_REF] Michaloski | Analysis of Sustainable Manufacturing Using Simulation for Integration of Production and Building Service[END_REF]. The adoption of eco-efficiency needs to be included at all levels of production process, including the machinery [START_REF] Reich-Weiser | Appropriate use of Green Manufacturing Frameworks[END_REF]. Energy consumption of a machine is not strongly related to the production rate; conversely, the amount of consumed energy is mostly related to the time spent in specific operative states [START_REF] Gutowski | Electrical Energy Requirements for Manufacturing Processes[END_REF]; according to [START_REF] Park | Energy Consumption Reduction Technology in Manufacturing -A Selective Review of Policies , Standards , and Research[END_REF] the potential energy savings from reduction of waiting time or in the start-up mode are estimated around 10-25%. 
Real time data from energy monitoring systems is necessary for improving energy efficiency on manufacturing [START_REF] Vijayaraghavan | Automated energy monitoring of machine tools[END_REF]. For optimizing energy consumption of manufacturing processes, energy consumption awareness should be achieved first [START_REF] Karnouskos | Towards the energy efficient future factory[END_REF]. Monitoring and analysis of the energy consumption of machines are major steps towards increasing energy efficiency [START_REF] Bunse | Integrating energy efficiency performance in production management -gap analysis between industrial needs and scientific literature[END_REF]. Some conventional production systems are not able to collect data on the amount of energy consumed in production processes [START_REF] Ikeyama | An Approach to Optimize Energy Use in Food Plants[END_REF]. In this regard, IoT based solutions can be very useful to drive energy efficient applications: smart metering is an example to show the importance of the IoT in the energy area; in addition to providing real-time data, it could make decisions depending on their capabilities and collaboration with other services, as mentioned in [START_REF] Haller | The Internet of Things in an Enterprise Context[END_REF]. Internet of Things Actually IoT technologies consist of an integration of several technologies, as identification, sensing and communication technologies, which consist of RFID, sensor, actuators, and wireless information network [START_REF] Atzori | The Internet of Things: A survey[END_REF]. The IoT paradigm has opened new possibilities in terms of data acquisition, decentralized data elaboration/ decision making and actuation. The real time monitoring system enables continually monitoring production processes and machine status [START_REF] Subramaniam | Production Monitoring System for Monitoring the Industrial Shop Floor Performance[END_REF], and covering the disturbances through production line, such as machine failures, production errors, etc. as in [START_REF] Meyer | Production monitoring and control with intelligent products[END_REF]. Adoption of IoT technologies in manufacturing processes have been investigated in some researches, as shown in table 1. Few researches have investigated how and when IoT is useful for improving energy efficiency at the shop floor. 3 A framework for IoT Adoption, and managerial impact. Stemming from the literature review, and focusing on discrete manufacturing systems, a framework for IoT adoption has been conceived, which considers the following factors:  Type of energy fee structure. Two types of energy fee structure have been considered, fixed price, and variable energy price [24].  Eco-efficiency targets (awareness, improvement, optimization).  Existing IT infrastructure; Rely on the American national standard framework as in [START_REF]ANSI/ ISA: Enterprise-Control System Integration Part 3: Activity Models of Manufacturing Operations Management[END_REF]¸ which specifies 4 levels as listed below. For each of these levels, depending on the current state of IT adopted by the factory, different IoT applications may be conceived. ─ Level 1 defines the activities involved in sensing. ─ Level 2 defines the activities of monitoring and controlling processes. ─ Level 3 defines the activities of the work flow to produce the end products ─ Level 4 defines activities include plant schedule, inventory levels materials movements. Information from level 3 is critical for level 4 activities. 
By combining the aspects above, the framework in Figure 1 points out a set of feasible application domains, covering some of the existing possibilities. For each feasible domain (i.e. energy fee structure, existing IT infrastructure and eco-efficiency target), the paper qualitatively discusses the current applicability of the IoT paradigm and points out which specific functionalities could be entrusted to such solutions (measurement, sensing and actuation). Real-time data from IoT, including data from smart metering, will be used to increase the awareness of the energy consumption of each process and machine, and to provide new measures. Taking energy consumption into consideration during planning will lead to improved and optimized energy consumption, which requires more flexibility in planning and control, machine-to-machine communication, and the adoption of decentralized decision making at the shop floor. With a variable energy cost, the way of improving and optimizing energy consumption will differ from the fixed energy cost case. IoT applications and real-time data will play an important role in configuring, scheduling, changing machine states, determining the current energy consumption of each process, and monitoring, planning and controlling the production processes at the shop floor. IoT can feed the MES with real-time data from the shop floor, inventory movements, etc.; then, an efficient decision can be made. In this regard, adopting decentralized decision making is important, whether the decision is made by the shop floor supervisor, an employee, or by the machines. Examples of these decisions are: changing the priority of production processes, changing machine status, rescheduling the production process, and moving inventory at the shop floor. Many scenarios can be applied in the real world depending on the framework. In Table 2, three scenarios are discussed to explain the framework, encompassing both fixed and variable energy cost structures. The first scenario assumes the available IT in a factory is at levels 1 and 2; the target is increasing the awareness of energy consumption. The second scenario assumes the available IT in a factory is at level 3; the target is improving energy consumption. The third scenario supposes the available IT in a factory is at level 4; the target is optimizing energy consumption. The integration of real-time data from smart metering on energy consumption (per machine and per process) with real-time data on inventory, production status and other real-time data from the field will be useful for optimizing energy consumption. These data can also be useful for production scheduling; acquired data on energy should be fed to ERP systems to be considered during the next production planning and during the configuration of production systems. The integration of variable energy cost information with other information from MES and ERP systems will be useful for optimizing energy consumption. Some alternatives will be offered for decision making, such as changing machine states and rescheduling the production process (when possible, depending on the energy cost). This means production planning is required to be more flexible to fit the energy price. Data acquired from IoT on energy consumption will affect future production processes.

Conclusions

Adopting IoT technology (such as smart metering) is able to provide a high level of awareness of energy consumption at all factory levels.
This awareness helps to identify the available energy-saving opportunities in the production process, which in turn reduces energy consumption. IoT technologies may also provide new opportunities for improved monitoring of inventory and for traceability and visibility of the manufacturing process, which improves the shop-floor processes and accordingly reduces energy consumption. Integrating the collected real-time data with manufacturing systems such as ERP and MES helps decision makers to make decisions that take energy consumption into consideration. A variable energy cost structure affects the way awareness, improvement and optimization of energy consumption are achieved in a manufacturing plant. Further investigation is needed into how to effectively connect IoT technologies (i.e. sensors) to machines, interconnect the sensors and collect data in a business-sustainable way (with respect to costs, flexibility, etc.), and into how to insert the energy perspective into current manufacturing planning, control and decision making. Fig. 1. Framework for adopting the IoT for eco-efficiency at the shop floor. Table 1. Research on RFID and IoT application domains in manufacturing; application domains considered: manage and control inventory and material (trace and track), monitoring of the shop floor, planning and control of the shop floor, monitoring and control of the production process/machine status; studies covered: Meyer (2011) [15], Poon (2011) [16], Meyer (2009) [17], Zhang (2011) [18], Zhou (2007) [19], Huang (2008) [20], Chen (2009) [21], Hameed (2010) [22], Wang (2011) [23].
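To make the variable-tariff scenarios above more concrete, the sketch below implements one simple decentralized decision rule of the kind the framework allows: a job whose due date permits it is shifted to the cheapest upcoming tariff slot, otherwise it starts as soon as possible. The hourly tariff table, slot length and job attributes are invented for the example; the framework itself does not prescribe such a rule.

```python
def pick_slot(job, tariff, now):
    """Choose a start slot for `job` under an hourly tariff (price per kWh per slot).

    `job` is assumed to be a dict with 'duration' (slots), 'energy_kwh' consumed per
    slot, and 'latest_start' (last slot index that still meets the due date).
    Returns (start_slot, energy_cost).
    """
    best_start, best_cost = now, float("inf")
    for start in range(now, job["latest_start"] + 1):
        slots = tariff[start:start + job["duration"]]
        if len(slots) < job["duration"]:
            break                      # job would run past the tariff horizon
        cost = sum(price * job["energy_kwh"] for price in slots)
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Example: 8 hourly slots with a price peak in slots 2-4.
tariff = [0.10, 0.10, 0.25, 0.25, 0.25, 0.12, 0.08, 0.08]
job = {"duration": 2, "energy_kwh": 15.0, "latest_start": 6}
print(pick_slot(job, tariff, now=1))   # defers the job past the peak because the due date allows it
```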
12,170
[ "1002103", "1002104" ]
[ "125443", "125443" ]
01472228
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472228/file/978-3-642-40352-1_14_Chapter.pdf
Ying Liu Niels Lohse email: [email protected] Sanja Petrovic email: [email protected] Nabil Gindy email: [email protected] An Investigation into Minimising Total Energy Consumption, Total Energy Cost and Total Tardiness Based on a Rolling Blackout policy in a Job Shop Keywords: Energy efficient production planning, sustainable manufacturing, job shop scheduling Manufacturing enterprises nowadays face the challenge of increasing energy price and emission reduction requirements. An approach to reduce energy cost and become environmental friendly is to incorporate energy consumption into consideration while making the scheduling plans. The research presented by this paper is set in a classical job shop circumstance, the model for the triple objectives problem that minimise total electricity cost, total electricity consumption and total tardiness when the Rolling Blackout policy is applied. A case study based on a 3*3 job shop is presented to show how scheduling plans affect electricity consumption and its related cost, and to prove the feasibility of the model. Introduction Manufacturing industry is one of the most important energy consumers and carbon emitters in the world. For instance, every year in China, manufacturing generates at least 26% of the total carbon dioxide emission [START_REF] Tang | On the Developmental Path of Chinese Manufacturing Industry Based on Resource Restraint[END_REF]. In order to reduce the carbon emission and balance the time-based unevenness of electricity demand, some countries, like China had promulgated corresponding electricity usage control policies and tariffs (EPTs), such as the Rolling Blackout policy for industry electricity supply, which means the government electricity will be cut off several days in every week resulting in manufacturing companies illegally starting their own diesel generators to maintain production. However, the private diesel electricity is more polluting and costly than the government supplied resource. Thus, the increasing price of energy and the current trend of sustainability have exerted new pressure on manufacturing enterprises, therefore they have to reduce energy consumption for cost saving and to become more environmentally friendly. As a result, employing operational methods to reduce the energy consumption and its related cost can be a feasible and effective approach for manufacturing enterprises [START_REF] Mouzon | A framework to minimize total energy consumption and total tardiness on a single machine[END_REF]. The modelling method proposed in this paper can be applied to discrete event machining production system and may save significant amounts of energy and cost as well as keeping a good performance on classical scheduling objectives. The term "machining" will refer to processes such as milling, turning, drilling, and sawing [START_REF] Dahmus | An environmental analysis of machining[END_REF]. In following content, the research problem will be raised after the research background and motivation; then the model will be presented, followed by a case study to demonstrate how scheduling plans affect the electricity consumption and its related cost in a job shop. Background and motivation A considerable amount of research has been conducted in the area of sustainable machining. 
A detailed process model that can be used to determine the environmental impacts resulting from the machining of a particular part had been presented in [START_REF] Munoz | An analytical approach for determining the environmental impact of machining processes[END_REF]. The authors of [START_REF] Dahmus | An environmental analysis of machining[END_REF] and [START_REF] Kordonowy | A power assessment of machining tools[END_REF] developed a system level research which not only includes energy requirement for material removal process itself, but also associated processes such as axis feed. The approach which breaks the total energy use of machining processes will be employed as the base for modelling electricity consumption of machine tools in this research. Several operational methods, such as genetic algorithm, to minimise the electricity consumption and classical scheduling objectives on a single machine and parallel machines had been proposed in [START_REF] Mouzon | Operational methods for minimization of energy consumption of manufacturing equipment[END_REF] and [START_REF] Mouzon | Operational methods and models for minimization of energy consumption in a manufacturing environment[END_REF]. These methods are based upon the realization that in manufacturing environment, large quantities of energy are consumed by non-bottleneck machines as they lay idle. The Turn Off/On method developed in the above approach will be applied in this research. However, the applicable range of these works limits in single machine and parallel-machines circumstance. A modelling method for minimising energy consumption of manufacturing processes had been developed in [8], nevertheless, this research is based on the assumption that alternative routes with different energy consumption amounts exist for jobs in the manufacturing system. Therefore, the model is not applicable for workshops without, or having identical alternatives routes for jobs. What had been discussed above provides the motivation for this research from an academic aspect, i.e. that employing operational methods to reduce the electricity consumption in a typical job shop still has not been explored very well. More importantly, from a practical aspect, the application of EPTs further complicate the aforementioned scheduling problem, since a scheduling plan that leads to reduction in electricity consumption does not necessarily lead to reduction in electricity cost in this situation. However, currently very little research focuses on this problem, even though it is important to deliver a trade-off between electricity consumption reduction and cost saving. Only [START_REF] Herrmann | Process chain simulation to foster energy efficiency in manufacturing[END_REF] considered an instantaneous power limit in a case study. In this case the authors tried to use a discrete event simulation method to find a favourable solution. However, the solution quality could have been much improved if the intelligent search algorithms were applied. Therefore, the new problem can be raised as: The Multi-objective Total Electricity Cost, Total Electricity Consumption and Total Tardiness Job Shop Scheduling problem based on Rolling Blackout policy (EC2T). The modelling method for this problem will be presented below. Models and Case Study In this section, models for EC2T will be defined. A case study of 3*3 job shop will be presented to show how scheduling plans affect the objectives of Total Tardiness, Total Electricity Consumption and Total Electricity Cost. 
Job shop model: Referring to [START_REF] Özgüven | Mathematical models for job-shop scheduling problems with routing and process plan flexibility[END_REF] and [START_REF] Antonio | A new dispatching rule based genetic algorithm for the multi-objective job shop problem[END_REF], in the job shop scheduling problem a finite set of jobs J = {J_1, ..., J_n} is to be processed on a finite set of machines M = {M_1, ..., M_m} following a predefined order. O_i = {O_i1, ..., O_i n_i} is the finite set of ordered operations of J_i; O_ij is the j-th operation of J_i, processed on machine M_k, and it requires a processing time denoted p_ijk. s_ijk indicates the time at which O_ij begins to be processed on M_k, while c_ijk = s_ijk + p_ijk is the corresponding completion time of that operation. x_iji'j'k is a decision variable equal to 1 if O_ij precedes O_i'j' on M_k, and 0 otherwise. Each J_i has a release time r_i into the system and a due date d_i; w_i is the weight associated with J_i. Constraints (with L a sufficiently large constant):

s_i1k ≥ r_i, for every job J_i (1)
s_ijk ≥ s_i(j-1)k' + p_i(j-1)k', for every pair of consecutive operations O_i(j-1), O_ij of the same job (2)
s_ijk ≥ s_i'j'k + p_i'j'k - L·x_iji'j'k and s_i'j'k ≥ s_ijk + p_ijk - L·(1 - x_iji'j'k), with x_iji'j'k ∈ {0, 1}, for every pair of operations assigned to the same machine M_k (3)

Constraint (1) makes sure that the starting time of any job is not earlier than its release time. Constraint (2) ensures that the precedence relationships between the operations of a job are not violated, i.e. O_ij is not started before O_i(j-1) has been completed, and that no job is processed by more than one machine at a time. Constraint (3) takes care of the requirement that no machine can process more than one operation at a time, i.e. no pre-emption is allowed. A schedule that complies with constraints (1) to (3) is said to be a feasible schedule, and S denotes the finite set of all feasible schedules. Given a feasible schedule s ∈ S, let c_i(s) indicate the completion time of J_i in schedule s. The tardiness of J_i can be denoted as T_i(s) = max{c_i(s) - d_i, 0}. The objective is to minimise the total weighted tardiness of all jobs:

min_{s ∈ S} ∑_i w_i T_i(s) (4)

Electricity consumption model: Based on existing research work on the environmental analysis of machining [START_REF] Kordonowy | A power assessment of machining tools[END_REF], [START_REF] Dietmair | Energy Consumption Forecasting and Optimisation for Tool Machines[END_REF], [START_REF]Machine Tool Design and Operation Strategies for Green Manufacturing[END_REF], [START_REF] Avram | Machine Tool Use Phase : Modeling and Analysis with Environmental Considerations[END_REF], the simplified power input model of M_k when it is working on O_ij is shown in Fig. 1. P_k^idle refers to the idle power of M_k; P_ijk^run and E_ijk^run represent the power and energy consumed by M_k when it executes the runtime operations (coolant, axis feed, etc.) for processing O_ij, where the runtime interval is defined as the time between the coolant switching on and off. P_ijk^cut and E_ijk^cut are the power and energy consumed by M_k when it actually executes the cutting for O_ij, and t_ijk^cut is the corresponding cutting time. P_k^idle, E_ijk^run and E_ijk^cut can be seen as constants when both the product's and the machine tool's characteristics are known. Thus, define E_ijk = E_ijk^run + E_ijk^cut as the energy consumed by the runtime operations and the cutting of O_ij on M_k; it can also be seen as a constant. To simplify the power input model, it is supposed that all the runtime operations and the actual cutting share the same starting and ending time. Therefore, define P̄_ijk = E_ijk / p_ijk as the average power input of M_k during p_ijk. This model simplifies the calculation of the total electricity consumption and electricity cost of the job shop, while guaranteeing the necessary accuracy for the EC2T problem. Based on the model discussed above, it is easy to see that E_ijk is the processing-related energy consumption, and that ∑_ijk E_ijk is not affected by different scheduling plans.
Thus, the objective of reducing the total electricity consumption of a job shop can be converted into reducing the total non-processing electricity consumption, which includes the idle and Turn Off/On electricity consumption of the machine tools [START_REF] Mouzon | Operational methods and models for minimization of energy consumption in a manufacturing environment[END_REF]. The objective function can be set as:

min_{s ∈ S} ∑_k NPE_k(s) (5)

where NPE_k(s) is the non-processing electricity consumption of machine M_k in schedule s, determined from the start and completion times of the operations processed on M_k. A schedule can be graphically expressed as a Gantt chart, and the calculation of the total non-processing electricity consumption is based on it; Fig. 2 gives an example of the calculation for a machine M_k that processes several operations with idle gaps between them. If the Turn Off/On method suggested by [START_REF] Mouzon | Operational methods for minimization of energy consumption of manufacturing equipment[END_REF] is allowed, then:

NPE_k(s) = [ (c_k^last - s_k^first) - ∑_{O_ij on M_k} p_ijk - ∑_g z_gk t_gk ]·P_k^idle + ∑_g z_gk E_k^TOO (6)

where the sums over g run over the idle gaps of M_k in the Gantt chart and t_gk is the length of gap g. According to [START_REF] Mouzon | Operational methods for minimization of energy consumption of manufacturing equipment[END_REF], E_k^TOO is the energy consumed by one Turn Off/On of M_k; T_k^B is the breakeven duration of machine M_k for which Turn Off/On is economically justifiable instead of running the machine idle, T_k^B = E_k^TOO / P_k^idle; t_k^TOO is the time required to turn M_k off and then on again; and z_gk is a decision variable equal to 1 if M_k is turned off during gap g (only allowed when t_gk is at least max{T_k^B, t_k^TOO}) and 0 otherwise. Electricity cost model (based on the Rolling Blackout policy): The objective function for the electricity cost of a job shop is:

min_{s ∈ S} EC(s) (7)
EC(s) = ∑_k EC_k(s) (8)
EC_k(s) = ∫ P_k(t)·c(t) dt, with c(t) = c^gov for nT ≤ t < nT + T^gov and c(t) = c^pri for nT + T^gov ≤ t < (n+1)T, n = 0, 1, 2, ... (9)

As seen in Fig. 3, EC(s) and EC_k(s) respectively refer to the total electricity cost of the job shop and of machine M_k in a feasible schedule s; c(t) represents the electricity price, equal to c^gov when the government electricity supply is available and to c^pri when the private diesel electricity supply must be used. T denotes the cycle period of the Rolling Blackout policy; it is separated into T^gov and T^pri, which indicate the periods of government and private electricity supply respectively. In this model n ranges over the natural numbers starting from 0, t indicates the time and P_k(t) is the power input of M_k at time t. Referring to [START_REF] Vá Zquez-Rodríguez | A new dispatching rule based genetic algorithm for the multi-objective job shop problem[END_REF], the objective function of EC2T can be expressed as the simultaneous minimisation of the three objectives:

min_{s ∈ S} ( ∑_i w_i T_i(s), ∑_k NPE_k(s), EC(s) ) (10)

Case Study: The parameters of the 3*3 job shop [START_REF] Liu | Intelligent Optimization Scheduling Algorithms for Manufacturing Process and Their Applications[END_REF] are given in Table 1, where the numbers in brackets are the processing times. The values assumed for the other parameters, based on the experiments of [START_REF] Lv | Research on energy consumption modeling of CNC machine tool for non-cutting operations[END_REF], are given in Table 2. One of the two compared schedules outperforms the other on minimising total tardiness, while the other outperforms the first on minimising total non-processing electricity consumption. However, when the Rolling Blackout policy is applied, the comparison between the two schedules on minimising total electricity cost demonstrates that a scheduling plan that reduces electricity consumption does not necessarily reduce electricity cost. This simple case demonstrates how strongly the choice of schedule can affect electricity consumption and its related cost. Conclusion and Future Work Reducing electricity consumption and its related cost while keeping good performance on classical job shop scheduling objectives is a difficult problem that can take a large amount of time if solved optimally. The model for EC2T has been developed in this paper. A case study has been presented to show how scheduling plans affect the objectives of total tardiness, total non-processing electricity consumption and total electricity cost.
Obviously, the differences in electricity consumption and its related cost among different scheduling plans will increase with the number of units in each job. This provides new insight for managers: optimisation in scheduling is not only a factory performance improvement approach, but also a route that leads to sustainability. In future work, more complicated job shop instances will be studied based on the aforementioned model. The Non-dominated Sorting Genetic Algorithm (NSGA-II) has been selected as the problem-solving approach; for more information refer to [START_REF] Deb | A Fast and Elitist Multiobjective Genetic Algorithm : NSGA-II[END_REF]. In addition, various situations concerning job arrival patterns will also be taken into consideration in future work. Fig. 1. Actual power input at the machine tool main connection over time and its simplified model when it is working on one operation (based on [12]). Fig. 2. Gantt chart of M_k and its corresponding power profile. Fig. 4. Comparison between two feasible schedules on total electricity consumption and total tardiness; reasonable Turn Off/On plans are developed for each schedule and the values of the three objectives of the two plans are shown. Table 1. The 3*3 job shop. Table 2. Values for the other parameters. Table 3. Values of the three objectives (total weighted tardiness, total non-processing electricity consumption, total electricity cost) for the two scheduling plans.
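The case study's point, that two feasible schedules can rank differently on total tardiness, total non-processing electricity consumption and total electricity cost once the Rolling Blackout tariff is applied, can be checked with a small evaluation routine for a fixed schedule. The sketch below is only an illustrative reading of equations (4) to (9): the schedule data, parameter values, the integer-hour tariff lookup and the omission of the turn-off/on energy from the cost term are simplifications introduced here and are not taken from the paper.

```python
def evaluate_schedule(ops_by_machine, due, weight, p_idle, e_too, t_too,
                      cycle, t_gov, price_gov, price_pri):
    """Evaluate the three EC2T objectives for a fixed feasible schedule.

    ops_by_machine: machine -> time-ordered list of (job, start, end, avg_power_kw);
    all times are assumed to be integer hours so the tariff lookup stays simple.
    """
    def price(t):
        # Rolling-blackout tariff: government supply during the first t_gov
        # hours of each cycle, private diesel supply for the remaining hours.
        return price_gov if (t % cycle) < t_gov else price_pri

    completion = {}
    non_processing = cost = 0.0
    for m, ops in ops_by_machine.items():
        breakeven = e_too[m] / p_idle[m]
        for job, start, end, power in ops:
            completion[job] = max(completion.get(job, 0), end)
            cost += sum(power * price(t) for t in range(start, end))   # processing cost
        for (_, _, e1, _), (_, s2, _, _) in zip(ops, ops[1:]):
            gap = s2 - e1
            if gap >= max(breakeven, t_too[m]):
                non_processing += e_too[m]            # turn the machine off and back on
            else:
                non_processing += p_idle[m] * gap     # keep the machine idling
                cost += sum(p_idle[m] * price(t) for t in range(e1, s2))
    tardiness = sum(weight[j] * max(0, completion[j] - due[j]) for j in completion)
    return tardiness, non_processing, cost

# Invented 2-machine, 2-job example.
schedule = {
    "M1": [("J1", 0, 3, 10.0), ("J2", 5, 8, 12.0)],
    "M2": [("J2", 0, 4, 8.0), ("J1", 4, 7, 8.0)],
}
print(evaluate_schedule(
    schedule, due={"J1": 8, "J2": 7}, weight={"J1": 1.0, "J2": 2.0},
    p_idle={"M1": 2.0, "M2": 1.5}, e_too={"M1": 3.0, "M2": 2.0},
    t_too={"M1": 1, "M2": 1}, cycle=24, t_gov=16, price_gov=0.10, price_pri=0.30))
```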
15,261
[ "1002105", "1002106", "1002107", "1002108" ]
[ "407023", "407023", "407023", "265417" ]
01472229
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472229/file/978-3-642-40352-1_15_Chapter.pdf
Marco Taisch email: [email protected] Bojan Stahl email: [email protected] Requirements analysis and definition for ecofactories: the case of EMC2 Keywords: eco-factory, energy efficiency, sustainable manufacturing, simulation requirements, energy flow des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Climate change mitigation and security of supply are the major drivers for raising, keeping and enhancing the attention towards energy consumption on European level, addressing policy-makers, industry and society in the same way [START_REF]Measuring progress towards a more sustainable Europe[END_REF]. Energy consumption implies severe ecological consequences such as global warming and resource depletion on the one hand, and being dependent of supply from non-European countries may lead to potential economic and social risks. Industry is facing a strong economic pressure which might endanger the global competiveness. Energy and raw material prices are steadily increasing in the last years, and the public awareness for green products is steadily rising with more enlightened customers. Consequently, awareness towards energy efficiency is strongly addressed by European policy-makers [START_REF]European Commission: EUROPE 2020 A strategy for smart, sustainable and inclusive growth[END_REF]. The European 2020 strategy defined several eco-targets to be reached by 2020, i.e. the reduction of emissions by 20%, and the reduction in primary energy of 20% to be achieved by improving energy efficiency, while securing the competiveness of European companies at the same time. Production processes are the backbone of the industry in Europe regarding economic success but also regarding environmental and social impact. Resources and energy are the major inputs for the transformation processes to create value. However, creating value by the input of resources and energy leads also wastes in terms of losses, heat, and emissions. Together with households, commercial and transportation in the domain of energy consumption, manufacturing is considered as a main contributor, with being responsible for approximately 37% of primary energy consumption on global scale. At European level, industry is responsible for about 40% of the electricity consumption [3]. Future manufacturing paradigm should therefore be built on two pillars: avoiding material and energy waste through increased efficient production and avoiding energy and material consumption with harmful impact on the environment. Diverse studies have been carried out to highlight the improvement potential within industry. The international Energy Agency has stated in their Energy Efficiency Policy Recommendation that potential savings sum up to 18.9 EJ/year and 1.6Gt CO2/year by 2030 [START_REF]Energy Efficiency Policy Recommendations Worldwide Implementation[END_REF]. A German-lead consortium of the Fraunhofer Gesellschaft highlights the major potential of increased production process efficiency to optimize the environmental as well as economic performance of companies [START_REF]Energieeffizienz in der Produktion -Untersuchung zum Handlungs-und Forschungsbedarf[END_REF]. A consultation group operating on European level has identified saving potentials of 10-40% in manufacturing in their "Smart Manufacturing" report [START_REF]ICT and Energy Efficiency -Consultation Group on Smart Manufacturing: Report: Energy Efficiency in Manufacturing -The Role of ICT[END_REF]. 
A similar study carried out on country level in Germany has likewise identified a saving potential of 10-30% on energy consumption [START_REF] Seefeldt | Potenziale für Energieeinsparung und Energieeffizienz im Lichte aktueller Preisentwicklungen[END_REF]. Production system simulation incorporating environmental perspectives is seen as one appropriate tool in the design and optimization of manufacturing systems towards energy efficiency. The paper presents an approach for defining requirements on such simulation approaches by applying a requirements engineering approach. The aim is to gain a detailed insight into what developers, stakeholders and users see as main features to be implemented in order to support the successful application in industry and research. Methodology A requirement can be defined as a need that a specific product shows by attributes, characteristics or specific qualities that the end product should possess [START_REF] Sawyer | Software Requirements[END_REF]. According to [START_REF] Thayer | Software Requirements Engineering[END_REF], requirements engineering regards establishing and documenting software requirements. Through this process it is possible defining what the system should be able to do and the constraints in which it needs to operate. This process consists of four major steps: elicitation, analysis, specification, verification and management. The requirements management is represented by the planning and controlling of the other steps of the requirements engineering phase. In the elicitation phase the problem to be solved is defined through the identification of the boundaries and the stakeholders' and goals' definition. It is the first step of the requirements engineering. The information gathered in this phase need to be interpreted, analysed and validated [START_REF] Nuseibeh | Requirements Engineering : A Roadmap[END_REF]. Different techniques exist to perform the elicitation phase, e.g. interviews, scenarios creation, prototypes, meetings and observation. The second phase, analysis, potential conflict by different requirements sources could imply need to be solved. Consequently this step deals with the requirements' conflicts detection and resolution. The conflict resolution is lead by negotia-tion. Furthermore, this is the phase in which the system bounds are discovered together with the interactions with the environment [START_REF] Sawyer | Software Requirements[END_REF]. In the documentation phase the requirements identified are specified and formalized. It is important that the requirements can be traced in order to be easy to read, understood and navigated [START_REF] Nuseibeh | Requirements Engineering : A Roadmap[END_REF]. The output of this process is a requirements document. Finally, in the validation the requirements are checked in order to find out further omissions and conflicts. In this phase it is checked if the requirements document created is consistent and complete and requirements priorities are attributed. This is the step into which the quality of the model developed is validated [START_REF] Sawyer | Software Requirements[END_REF]. In order to be complete, the process of requirements engineering should consider all the important sources of requirements from which the requirements are gained. These sources can be in conflict among them and part of the requirements engineering is, for this reason, represented by negotiation and prioritization activities. Each step is described in detail in the subsequent sections. 
Elicitation In this step of the simulation requirements engineering, the requirements are collected from different sources and different stakeholders. It is the first phase of the requirements engineering and it helps the developers of the product in understanding the problem. The stakeholders from which the requirements are developed may belong to different categories. They can be end-users, customers, regulators, developers, neighbouring systems and domain experts. In this case the requirements have been collected from three sources. The first one is represented by a classical literature review of papers dealing with simulation of energy in manufacturing, in order to take into account from them the general simulation requirements to be used in the model development. The second type of source is represented by the opinions of experts, stakeholders and users, represented by the partners and consortium of the EMC2-Factory project. The last source is represented by the developers themselves by having a visionary idea of building a holistic simulation model for combining energy and material flow simulation. Different works available in the literature have been reviewed according to their postulations of requirements for "integrating energy, material and building simulation which resulted in total in 52 requirements [START_REF] Thiede | Energy Efficiency in Manufacturing Systems[END_REF]- [START_REF] Prabhu | Simulation Modeling of Energy Dynamics in Discrete Manufacturing Systems[END_REF]. 42 requirements have been identified by the partners and consortium members. For this reason questionnaires were provided to the stakeholders in order to capture their opinion. The developers themselves provided 17 more requirements. Summarized, 111 requirements have been identified in total from the literature review, the stakeholder consultation and the internal development team. The requirements present overlaps and conflicts which need to be managed in order to avoid repetitions and contradictions for the final requirements which is done in the next step. Analysis In the second step of the requirements methodology applied, two main activities are performed: the requirements classification and the requirements negotiation. In the classification of the requirements, they are divided into distinguishable categories while in the negotiation the resolution of the conflicts has been conducted. The following paragraph identifies the categories in which the requirements are classified and divided. The simulation requirements were divided in four categories: general, functional, non-functional and implementation requirements. General requirements are high level conceptual requirements. Implementation requirements focus on the system-side conceptualization of the simulator. Functional requirements define what the simulator should be able to perform. Non-functional requirements describe characteristics linked to the way in which the simulator performs its functions. According to the mentioned categories, the requirements listed in the requirements elicitation step have been divided and categorised, each one in one of the above mentioned categories. Hence, the requirements with the same or similar meaning and scope have been joined together and the ones in conflicts have been resolved. Starting from the 111 requirements identified in the elicitation phase it has been possible to group and merge them according to the scope and the meaning they presented and to arrive until 33 final requirements. 
The meaning and scope of each requirement have been identified by checking how each requirement fits with the simulation purpose. The set of requirements obtained at the end of the analysis process is listed and explained in the documentation phase, in which all the identified requirements are explained and classified. Documentation and Validation Documenting the requirements is the fundamental condition for handling them. In this phase the structure of the requirements document is developed. The two tables below provide a short excerpt of the documentation of the requirements defined in the analysis step. The last phase of requirements engineering is the validation. This step deals with the analysis of the document produced in the documentation phase in order to be sure that it represents the right requirements in the most complete way; the focus is on looking for mistakes, errors or lack of clarity. According to [START_REF] Westfall | Software Requirements Engineering: What, Why, Who, When, and How[END_REF], requirements validation needs to be performed in order to be sure that the requirements are: unambiguous, allowing only a single interpretation; concise; finite; measurable; feasible to implement with the available technologies; and traceable. This phase has been performed; however, its results are not explicitly reported here. Table 1: Functional requirements. Assessing different kinds of KPIs -- The KPI framework should include the evaluation of production, energy and environmental KPIs, both in real time and in post-processing, in order to allow timely decisions oriented to the reduction of energy costs and of the environmental impact. Considering production assets and technical building services -- Modelling and simulating not only the energy consumption of the production-related assets but also of the non-productive equipment, such as the central TBS and the periphery systems, supporting their dimensioning and optimization. Supporting the dynamic connections of elements and interrelations -- Representing in the simulation the dynamic connections existing among production assets, periphery systems and central technical building services. Bringing in state-based energy consumption calculations -- Implementing energy calculations during the simulation run based on the energy consumption associated with the entities' states, as the basis for reducing the energy consumption of non-productive states. Interfacing with production control policies -- Supporting the connection of the simulation with production control policies at the machine-coordination level, oriented towards energy consumption reduction and able to influence the equipment states. Preventing energy peaks -- Intervening in real time on the manufacturing system through control policies when energy consumption increases rapidly. Including a multi-product perspective -- Including product-type control policies and supporting the identification of the different product types during the simulation. Supporting green-oriented perspectives -- Enabling green-oriented decisions in terms of scheduling and production configurations. Allowing different levels of energy consumption calculations -- Calculating energy consumption from the machine level up to the factory level. Considering different energy carriers and supply systems -- Considering different energy carriers and different periphery systems, which contributes strongly to the holistic perspective. Including stochastic inputs and unpredictable events -- The model should allow input data which are not deterministic. Increasing the level of data detail -- The model should support the highest possible level of data detail. Summary and Outlook The definition of functional requirements for the simulation of eco-factories is a first step in guiding the way from traditional factory design and operations to eco-enabled competitive factories. The functional requirements build the backbone of the definition of a holistic reference model for eco-factories, which serves on the one hand as a meta-model for future activities in the project itself and on the other hand as an enabler for a comprehensive guide or toolkit to support industrial implementation with hands-on suggestions. The functional requirements need to be implemented in a discrete event simulator. This paper presented the requirements engineering process as an initial step before the development of a discrete event simulation environment which combines energy and material flow simulation. Analysing the requirements leads to the following conclusions. There is an increasing demand for adapting simulation applications to a larger user group, hence giving support through pre-defined objects and decreasing the modelling complexity. There is a demand to increase user-friendliness and keep solutions easy and simple. On the other side, the integration of different tools and the increase in the level of detail of the data are requirements which may create a field of tension when implemented together. The functional requirements are widely focused on increasing the level of detail of the energy considerations and on widening the scope of traditional production asset simulations by taking into account periphery systems and central technical building services. Finally, it can be summarized that the literature and the stakeholders show an increasing interest in making energy simulations in manufacturing more comprehensive and detailed, but also simple. The requirements engineering process served as an initial step of a simulation environment development approach. The next major phase includes the conceptual design of a simulation environment which combines material flow and energy flow in production systems, taking into account production assets as well as technical building services, and being applicable at different scales in the factory. Table 2: Non-functional requirements. Creating a simple and controllable solution -- The need for a manageable solution, easy to control and customize. Integrated solutions -- Integrating different tools together in order to develop an interoperable simulation approach. Acknowledgements. The introduced requirements and results were developed within the research project EMC2 (Eco Manufactured transportation means from Clean and Competitive Factory) funded by the European Commission within FP7 (285363).
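The requirement of state-based energy consumption calculation can be pictured in a few lines: given a machine's state timeline and a nominal power draw per state, the energy of each state, including the non-productive ones, is the power multiplied by the time spent in that state. The state names, power values and timeline below are invented for illustration and are not part of the requirements document.

```python
from collections import defaultdict

# Assumed nominal power draw per machine state, in kW (illustrative values).
POWER_KW = {"producing": 18.0, "idle": 4.5, "standby": 1.2, "off": 0.0}

def energy_by_state(timeline):
    """Integrate energy (kWh) per state from a list of (state, start_h, end_h) events."""
    energy = defaultdict(float)
    for state, start, end in timeline:
        energy[state] += POWER_KW[state] * (end - start)
    return dict(energy)

shift = [("producing", 0, 3), ("idle", 3, 4), ("producing", 4, 6), ("standby", 6, 8)]
print(energy_by_state(shift))   # shows how much energy goes to non-productive states
```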
17,339
[ "931901", "991606" ]
[ "125443", "125443" ]
01472235
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472235/file/978-3-642-40352-1_1_Chapter.pdf
Gökan May email: [email protected] Marco Taisch email: [email protected] Bojan Stahl email: [email protected] Vahid Sadr email: [email protected] Toward Energy Efficient Manufacturing: A Study on Practices and Viewpoint of the Industry Keywords: Energy efficient manufacturing, energy management, industrial viewpoint, manufacturing plant, sustainability The main objective of this study is to assess current situation and applications in the industry with respect to energy efficiency. From literature, a four pillar framework (strategy, tool, process, technology) has been developed, outlining the essential elements to successfully integrate energy efficiency in manufacturing. Based on the framework, a questionnaire is developed to assess manufacturing companies in Europe. How and what companies are doing currently to integrate energy efficiency in their manufacturing is under investigation through surveys and complementary case studies. This paper presents a fact finding study, aimed to understand the main motivations, limitations, and effectiveness of integrating energy efficiency in manufacturing. Hence, this study intends to establish a basis for the companies and academia to have a holistic understanding on energy efficient practices as a first step on the way to integrate energy efficiency in manufacturing. Consequently, the gaps between theory and practice are revealed. Introduction Manufacturing has changed its focus and approaches from pure cost to quality, productivity and delivery performance in the last couple of decades [START_REF] Hon | Performance and evaluation of manufacturing systems[END_REF]. Currently, the new topic of interest that has gained significant attention from both academia and the industry is energy efficiency due to the significant environmental and economic impacts associated with consumption of energy. The change of approach in the industrial world stemmed from the global drivers for improving energy efficiency such as climate change, scarcity of resources and energy supply as well as the industrial drivers including rising energy prices, ever-stricter becoming legislations, customer demand and awareness along with competitiveness have pushed energy efficient manufacturing to the top of the agenda for both governments and companies. Companies have been feeling the pressure and urge to examine how their processes, methods and structures could become more energy-efficient considering global and industrial drivers forcing them to do so. Since energy consumption advances in having paramount importance for manufacturing companies, continuous improvement in energy management is now integrated into strategies of companies. In this regard, taking action (e.g. adopting energy management standards such as ISO 50001) to build a continuous improvement process to use energy resources more efficiently becomes essential [START_REF]Designing Energy-Efficient Production Processes[END_REF]. In this manner, this research is based on a fact finding study, aimed to understand the main motivations, limitations, and effectiveness of integrating energy efficiency in manufacturing. In this vein, we investigate the current practices and approaches of the companies regarding energy efficiency where we suggest an evaluation framework in order to successfully integrate Energy Efficiency (EE) in manufacturing in a systematic way, with the aim of gaining insight into energy efficiency in manufacturing and finding out the gaps between theory and practice. 
Thus, the main objective of the study is to assess current situation and applications in the industry with respect to energy efficiency improvement measures using the proposed framework to help companies to integrate energy efficiency in manufacturing. How and what companies are doing currently to integrate energy efficiency in their manufacturing is investigated through surveys and case studies. Hence, this study intends to establish a basis for the companies and academia to have a holistic understanding on energy efficient practices as a first step on the way to integrate energy efficiency in manufacturing. In this context, we identify the main objectives as below:  Understand how and what companies are doing currently to integrate energy efficiency in their manufacturing  Determine the priorities of companies and stimuli for energy efficient manufacturing  Understand the level of consideration given to integration of energy efficiency in manufacturing on the industry side  Develop insight into the use of ICT and other supporting tools/methodologies for improving energy efficiency  Identify the gap between the theory and the practice State of the Art Nowadays, energy consumption is considered as one of the key item for sustainable development due to its environmental, economic and social impacts. Brundtland Commission [START_REF]Report at world commission on environment and development[END_REF] defines sustainable development as "a development that meets the needs of the present without compromising the ability of future generations to meet their own needs". In this regard, IEA [START_REF] Iea | Worldwide Trends in Energy Use and Efficiency, Key Insights from IEA Indicator Analysis[END_REF] considers energy efficiency improvements as a principal component for sustainable development. Thus, energy efficiency becomes a core issue for policy-makers, industries, and society. Indeed, improving EE has been mentioned as paramount by several communities of research, industry and policymaking in order to overcome the challenges of today stemmed from both global and industrial drivers of energy efficiency (e.g. [START_REF] Icc | Energy efficiency: a world business perspective[END_REF]). Aligned with 3 pillars of sustainability, improving EE provides benefits in environmental (e.g. reduced GHG emissions), economic (e.g. reduced costs, improved competitiveness) and social (e.g. increased security of energy supply) dimensions. Hence, above mentioned gains prove powerful interdependence between energy efficiency and sustainable development [START_REF]Measuring progress towards a more sustainable europe[END_REF]. Energy efficiency is a key target for the factory of the future stemmed from both global and industrial drivers and hence many studies have been carried out in the field of industrial energy efficiency. In fact, Chow et al. [START_REF] Chow | Energy Resources and Global Development[END_REF] points EE as the "lifeblood of technical and economic development". Based on the critical review of the literature, the studies on energy efficient manufacturing can be grouped in six main dimensions which are essential to integrate energy efficiency in manufacturing. ─ Drivers/Barriers: As the name implies, these are the stimuli/drivers for energy efficient manufacturing which are the main reasons for the industrial companies to implement energy efficiency measures in manufacturing. ─ Strategic approach: The main issue here is the alignment of energy efficiency in manufacturing with corporate goals. 
Commitment of top management is a crucial part for implementation of any kind of measures in the industry and establishes the base for strategic focus. The concepts under scrutiny regarding the strategic focus of the companies are e.g. policies & standardization; strategic decisions (e.g. buy-sell, demand management, location decisions, etc.); technology selection, development and deployment; Investments on R&D and innovation; and eventually voluntary initiatives (e.g. CSR, etc.). ─ Information and Communication Technologies: ICT has the potential to play a very significant role in improving energy efficiency in manufacturing and its share and importance has been growing during recent years. Hence, ICT as a supporting tool and an enabler for achieving energy efficiency in manufacturing needs a further consideration. ─ Supporting tools and methodologies: This part comprises the methods and tools available which mostly provide support in making energy related analysis and decisions in manufacturing environment. Examples to this dimension could be modeling and analysis (e.g. simulation, DEA modeling, etc.), energy assessment tools, sustainability tools (e.g. LCA), emission calculation tools and benchmarking tools & techniques. ─ Manufacturing Process Paradigms: To manufacture any product, some set of processes should be followed and types of processes depend on the type of products as well as companies' considerations of different aspects and related decisions in manufacturing. Manufacturing technologies (e.g. development of technologies & materials), manufacturing process management (MPM), process design & optimization, switching energy modes of machines (i.e. on, off, standby, etc.) and scheduling are examples to paradigms to be considered. ─ Manufacturing Performances: Traditional performance measures considered in manufacturing include factors such as quality, cost, delivery time and safety. Thus, it is essential to investigate the impacts of integrating energy efficiency as another performance dimension in manufacturing on traditional performances. Research Design and Framework Taking the concepts to support and foster energy management in manufacturing such as processes, methods along with standardization and combining them with the significant role of ICT also stressed out in the literature as mentioned before, we come up with a 4-pillar framework comprised by the enablers for energy efficient manufacturing (i.e. Strategic approach, manufacturing process paradigms, supporting tools/methodologies and ICT). Integrated with the drivers/barriers on one side and attained manufacturing performances on the other side, we propose the framework (see Figure 1) for successfully integrating energy efficiency in manufacturing that can also be of use for assessing the current practices in the industry with the aim of finding out the gaps between the concepts in theory and industrial practices in the area. Fig. 1. Framework for successfully integrating energy efficiency in manufacturing The framework provides a holistic view on all the aspects to be considered for implementing energy efficiency measures in manufacturing. The model can as well be of use for assessing the current practices of the industrial companies from an energy efficiency perspective and for understanding how and what companies are doing currently to integrate energy efficiency into their manufacturing processes. 
Thus, it provides support in identifying the gaps between the available solutions in theory and companies' current actual practices. Utilizing the proposed framework, it is possible to gain a profound insight into current practices in the industry that might reveal the gap with theory as well as into overall integration of energy efficiency in manufacturing. Besides, we investigate the importance of ICT as an enabler for energy efficient manufacturing along with the non-energy benefits associated with energy improvement measures. Further, the impact of energy efficient way of thinking and implementation on traditional manufacturing performances is highlighted. Research Methodology To pursue the objectives of the study, an explorative research is carried out through the development and use of an on-line survey in order to assess manufacturing companies operating in Europe and is further supported by complementary case studies. Survey method is mainly followed in the study to assess the energy efficiency related practices in the industry with the aim of gaining insight using the developed framework. Based on the research objectives and framework described previously, a questionnaire of 10 comprehensive questions has been developed to assess the companies. This survey was composed of questions about company characteristics, energy efficient strategies and applications of the company, supporting tools and relevant manufacturing processes. Questionnaire is thus based on the main components of the framework which comprises four enablers for energy efficient manufacturing, drivers/barriers and attained manufacturing performances. So, questions are identified related to these sub-points. On-line version of the questionnaire has been created using Survey Monkey and was sent to companies via the following link: http://www.surveymonkey.com/s/Y59Y36C. Questionnaire was sent to a combination of large companies and SMEs. Sample firms were selected from manufacturing companies operating in Europe in mechanical, electric and automotive sectors. The respondents were all relatively large companies. This survey includes questions about company characteristics, energy efficient strategies and applications of the company, supporting tools and relevant manufacturing processes. Furthermore, case study methodology is used as complementary to surveys to further analyze and validate the gathered data. The principal way of data gathering through the case study is conducting interviews and analyzing sustainability related reports. Results and Discussion As explained in Part 4, the questions are organized in three parts (i.e. drivers/barriers; enablers; manufacturing performances in trade-off). The discussions of the results are also presented in this structure: Drivers and Barriers Companies considering energy efficiency performance measures in manufacturing could have many reasons or drivers to do so. Among them, reduction of costs, reduction of the environmental impact (commitment to reduce the environmental impact), image improvement due to enhanced reputation and achieving sustainability targets are the most important reasons provided by companies interviewed. Changes in customer behavior and compliance with legislations are considered relatively less forcing factors whilst reducing carbon footprint is not considered as a driver in most of the companies. 
However, companies' responds regarding their commitment to reduce environmental impact here might not reflect the reality as they tend to show/think their companies more committed than they actually are. In reality, most of the decisions and interest depend on either costs or long term plans like image improvement. From the study it has been found that companies are feeling responsible for considering energy efficiency as a new performance target area in manufacturing. However, insufficient investment paybacks (e.g. ROI), the lack of fund to pay for improvements and insufficient government incentives are considered by them as the most important barriers to implementing energy efficiency in manufacturing. Some companies mention the weakness of current legislations and policies on forcing the industry to become more energy efficient since they are mostly easy to tackle and not challenging enough especially for large enterprises. In this manner, manufacturing firms do not aim toward energy efficiency by considering it as one of the key target areas unless they find strong financial motivation in doing so. Enablers for Energy Efficient Manufacturing Most of the companies interviewed have high level sustainability and energy related initiatives such as CSR (corporate social responsibility) strategic scheme, ISO14001 (Environmental management system) and ISO 50001 (Energy Management System). This shows that top managements are also committed for overall sustainability and energy efficiency in the industry. However, when it comes to the ground level, as of manufacturing, there is not enough evidence that energy efficiency is properly integrated in top-down approach. To better understand if top managements have genuinely cascaded energy efficiency into manufacturing, other questions have been asked such as the way they integrate energy efficiency in manufacturing, level of investments, and implementation of different options for improvements in energy efficiency, how tradeoffs between traditional performances (cost, quality, delivery and functionality) are considered and finally the type of methods they follow. The significance of different methods and ways in integrating energy efficiency in manufacturing was one of the key concerns of the study. In this context, companies reported "optimization of current production processes" and "aligning production planning with energy efficiency practices (e.g. switching energy states of the equipment)" as two key factors to be followed effectively. "Integrating energy efficiency in product and process design" along with "combining energy efficiency with manufacturing strategies (e.g. resource planning)" are considered as important however with a less impact compared to former ones. The companies were interviewed also for the activities they have been following to improve energy efficiency in their manufacturing plants. They mentioned energy monitoring and control, use of new technologies and consultancy with energy experts among the mostly adopted ways in their company. Energy recovery and renewable energy use, on the other hand, are not considered and do not take place in most of the firms. In fact, these are the two areas for energy improvement in manufacturing facilities which have been somehow ignored to date but are considered as two important future streams both for academia and industry as far as energy efficiency in manufacturing is concerned. 
When companies were asked to rate the importance of different ways for improving energy efficiency regardless of they have been following or not, the most important activities highlighted were "more efficient equipment and technologies for improving production processes", "optimizing the plant level activities by using technical services (e.g. HVAC control system), "enhanced monitoring and control of energy consumption" and "use of energy related performance indicators". Above mentioned activities are indeed aligned with the two ways for reducing energy consumption as defined by Rahimifard [11]. The latter two are in fact the main essential elements for an effective energy management in a manufacturing plant. On the other hand, ICT support on energy efficient manufacturing and consideration of energy efficiency in long term decisions are not considered as essential and important by contacted companies. Taking into account the importance of ICT related stream for research in energy efficiency and so for academia, we might say that the viewpoint of the industry is not aligned with academia at this point. However, this might be due to the fact that industrial contacts might have thought regarding the current practices and needs whereas academia is more into potential solutions for future aligned with the vision for Factories of the Future. With regards to ICT support for enhanced energy management, "improving the reliability of the data" and "evaluation software for process performance with respect to energy efficiency" are the main aspects companies stressed out. Especially, the latter one is of good use since it helps evaluating the trade-off between traditional performances in manufacturing such as cost, quality, delivery and flexibility and improvements in energy efficiency as well. Besides, there are several tools used in manufacturing to support decision making with regards to energy efficiency, as also mentioned in part 3. Companies interviewed mostly use energy assessment tools and cost calculation & energy tracking tools whilst benchmarking and simulation tools are used relatively less compared to former mentioned ones. However, carbon footprint measurement tools and sustainability assessment tools such as LCA are not adopted by majority of the companies for energy related decision making. Manufacturing Performances Till now, the main drivers and barriers for companies towards improving energy efficient manufacturing then the main enablers to integrate energy efficiency in manufacturing have been discussed. Now, in this section the impacts of integrating energy related issues on traditional performances are investigated. This is a crucial issue, since it is seldom that companies worry about energy efficiency issues if the traditional manufacturing performances such as cost, quality, delivery and flexibility are jeopardized. Companies interviewed stressed that fully integrating energy efficiency in manufacturing might sometimes cost more than the gained they could have achieved. Moreover, quality could be affected due to some strategies towards energy efficiency (e.g. shutting down the equipment to reduce energy consumption). But, they also stress that non-financial gains have been achieved such as improved company image, new skills and competences, increased customer acceptance and raised overall sensitivity towards environmental impacts inside the company. 
Therefore, the main challenge here is how to structure manufacturing strategy, tools, processes and information systems in such a way those traditional manufacturing performances are either unaffected or even improved. To sum up, the developed framework can easily be used to evaluate how energy efficiency is effectively addressed in manufacturing industries. As also highlighted with the key findings of the study, the consideration of energy efficiency is not matured enough to attain the general energy performance goals expected by multiple stakeholders. In that vein, academia should not only focus on developing solutions for key aspects (i.e. 4 enablers) in a disparate manner but also on need to investigate new way of holistic and integrative design of manufacturing systems, better aligning strategy objectives with energy improvement goals. Conclusion In this research, we investigated practices and viewpoint of the manufacturing industry with regards to energy efficiency. The findings revealed that although there has been a consistent progress in the industry toward energy efficiency, the implementation of the concept is still not mature enough. There is no sign of strong evidence regarding the integration of energy efficiency in manufacturing as a new performance target area.
22,153
[ "1002132", "931901", "991606", "1002133" ]
[ "125443", "125443", "125443", "125443" ]
01472236
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472236/file/978-3-642-40352-1_20_Chapter.pdf
Julien Maheut Jose Pedro A parallelizable heuristic for solving the Generic Materials & Operations Planning in a Supply Chain Network: a case study from the automotive industry Keywords: Operations Planning, MRP, Generic Materials & Operations Planning, Mixed Integer Linear Programming, Supply Network, Automotive Industry 1 A trend in up-to date developments in multi-site operations planning models is to consider in details the different ways to produce, buy or transport products and the distributed decision-making process for operations planning. One of the most generic approaches to support global optimization in those supply chain networks by considering all the different operations alternatives and product structures is the Generic Materials & Operations Planning Problem. This problem can be modelled by a Mixed Integer Linear Programming model capable of considering production, transportation, procurement tasks and their alternatives and other relevant issues such as packaging. The aim of this paper is to introduce the implementation of a parallelizable heuristic method for materials and operations planning and its application to a case of a Supply Chain Network of the automotive industry. The approach uses variants of the GMOP model to overcome traditional MRP systems' limitations. Introduction Multi-site operations planning in a Supply Chain Network (SCN) is the process that consists in determining a tentative plan about the operations that must be performed on the available capacitated resources geographically distributed in each time period all along a determined horizon time. The planning of these operations not only determines inventory levels of certain products in given locations, labor levels or the use of productive resources but must also determines which located operations, called strokes [1; 2] must be performed to implement the operations plan. Generally, SCNs are composed by several facilities located in different sites that must serve a set of end products to different customers [START_REF] Mula | Supply Chain Network Design[END_REF]. Despite belonging to the same SCN or to the same company in some cases, sometimes, the different members themselves do not communicate their exact costs and capacity data [START_REF] Dudek | Negotiation-based collaborative planning between supply chains partners[END_REF]. This implies that central planning is impossible and operations planning must be coordinated in a distributed way between the different members of the SCN. In the literature, lots of mathematical models that simultaneously solve the materials and operations planning problem in a multi-site context are presented and part of them are reviewed in [START_REF] Garcia-Sabater | A new formulation technique to model Materials and Operations Planning: the Generic Materials and Operations Planning (GMOP) Problem[END_REF]. The Multi-level Capacitated Lot-Sizing Problem [5; 6] is the most widely covered, but other authors call it the Supply Chain Operations Planning Problem [START_REF] De Kok | Planning Supply Chain Operations: Definition and Comparison of Planning Concepts[END_REF] or they include other adjectives when defining it; for example, dynamic [START_REF] Buschkühl | Dynamic capacitated lotsizing problems: a classification and review of solution approaches[END_REF]. 
Nevertheless, to the best of our knowledge, GMOP is the only model that simultaneously considers multi-site, multi-level capacitated operations planning problems with lead times, alternative operations (purchasing, transport -replenishment, transshipments and distribution- and production) and returnable packaging. Moreover, solving the GMOP model in a decentralized way has not yet been studied. In this paper, a parallelizable heuristic method for operations and materials planning is introduced, together with its application in an SCN of the automotive industry composed of several geographically distributed facilities. The proposed method plans operations in a decentralized manner using agents that take decisions based on the results of several MILP model variants that solve the GMOP problem [9; 10]. Section 2 describes the SCN and the different operations carried out in it. Section 3 briefly and partially describes the proposed system and the proposed heuristic method. Section 4 describes the implementation process of the planning approach. Finally, Section 5 presents conclusions and future research lines.
2 Supply Chain Network Description
The SCN considered in this paper is composed of several plants geographically distributed in Spain. The plants are responsible for processing, treating, assembling and transporting metal parts in different returnable packaging to different customers, mainly car assembly plants of the automotive sector in Europe. In this case study, global operations planning is a critical process because some of the SCN members have grown during the last decade and currently have different plants able to perform the same operations or produce the same products in different locations, subject to different constraints and costs. Consequently, one of the main concerns of the SCN is to adapt its plans so as to consider all the feasible ways to serve the customers while minimizing costs and respecting due dates. Global operations planning must consider all the operations, that is, the tasks performed to procure, transform and transport materials in order to serve a given end product to the final customer. In the literature, production, transport and purchasing operations are the high-value-added operations most commonly considered. Nevertheless, other high-value-added operations must be considered as well, such as operations involving returnable packaging [11; 12] or alternative operations [13; 14], because they can substantially affect total SCN cost. This is, to the best of our knowledge, one of the major concerns for practitioners that the literature has not dealt with extensively. The emergence of alternative operations in this case study is a direct consequence of the different processes that take place in the different plants. Stamping, cutting, chemical treatment, painting, assembling, dismantling and, finally, (un)packaging operations are some of the operations performed in the SCN for which alternatives can exist. Besides, transport between plants is a very important process, since it is necessary to consider the return and transshipment of the returnable packaging. This consideration is necessary because customer demand is expressed not only as a quantity of products at each due date, but also requires a specific packaging. In addition, each plant has its own work schedule and capacitated resources, and these factors are usually unknown to the others. 
Moreover, the plants do not want to share information about inventory levels and costs.
3 Advanced Planning and Scheduling Module Description
The designed procedure for collaborative decision making
The designed system is an Advanced Planning and Scheduling (APS) system. The SCN planning module consists of different types of agents: one warehouse agent, several plant agents and several supplier agents (Figure 1). The agents do not embed any artificial intelligence, but they are able to communicate and make decisions based on specific criteria established beforehand.
Fig. 1. General scheme of the APS system
The warehouse agent knows at all times the inventory levels of products across the whole SCN. This agent is the central coordinator and is responsible for transporting finished products between the plants and to the final customers. The operations planning process starts when a new customer demand forecast is received (extracted from the MRPs of the different SCN members). First, the warehouse agent is asked whether the customer-requested product is available in stock in one of the SCN plants. If there is sufficient material in at least one site, the agent plans how to transport the material to the customer based on specific criteria (cost, due date, run-out time in each plant, etc.). The decision is based on the result of a MILP model that considers transport strokes and constraints on working calendars and the truck fleet. Otherwise, the warehouse agent has to act as coordinator and must manage to obtain all the material while respecting the due date. To do so, the warehouse agent generates an ordered list of the needed materials. This ordered list is a "bag of material" containing, for each request, a quantity of material and its due date. For the first product in the list, the warehouse agent queries the plant agents capable of producing this product. A plant agent can represent a plant, a set of resources or even a single specific resource, and it is responsible for its assigned internal operations. Each plant agent then executes its MILP model to determine how much of the ordered quantity can be made available and when. Each proposal is offered to the warehouse agent, which chooses the option with the lowest cost. If the chosen plant agent needs raw material to produce the product, it transmits this information to the warehouse agent and the required material enters the tail of the sorted list of material to order. Before ordering raw material to manufacture an ordered product, the agents first request the product from the warehouse and, if there is not enough, the plant agent of the product asks the supplier agents for the raw material and the possible due dates according to the capacity already assigned. When the bag is empty, the warehouse agent transfers to the different SCN members and the suppliers a personalized plan with the operations to be performed and their corresponding due dates. Currently the model does not include a specific transport agent, but a future expansion of the system is planned to take it into account, including more specific constraints. The operations plan is used by the different SCN members to create detailed production plans (due dates, delivery dates and lot sizes), which are the starting point for sequencing and scheduling over time. A screen of the designed tool is shown in Figure 2.
Fig. 2. Some results of the planning tool
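The coordination loop just described can be illustrated with a minimal sketch in Python. The agent classes, attribute names and the quote() method standing in for each plant agent's MILP run are assumptions made purely for illustration; they are not the data structures of the case company's APS tool, and supplier agents, calendars and returnable packaging are omitted.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Quote:
    plant: str
    cost: float
    components: dict = field(default_factory=dict)  # raw material -> quantity needed

class PlantAgent:
    # quote() stands in for running the plant's own GMOP MILP variant
    def __init__(self, name, unit_cost, bom):
        self.name = name
        self.unit_cost = unit_cost  # product -> cost per unit, only for producible products
        self.bom = bom              # product -> {component: quantity per unit}

    def quote(self, product, qty):
        if product not in self.unit_cost:
            return None
        needed = {c: q * qty for c, q in self.bom.get(product, {}).items()}
        return Quote(self.name, self.unit_cost[product] * qty, needed)

class WarehouseAgent:
    # central coordinator with SCN-wide inventory visibility
    def __init__(self, stock, plants):
        self.stock = dict(stock)
        self.plants = plants

    def plan(self, demand):
        bag = deque(demand)  # ordered "bag of material": (product, quantity, due period)
        plan = []
        while bag:
            product, qty, due = bag.popleft()
            from_stock = min(self.stock.get(product, 0), qty)
            if from_stock:
                self.stock[product] -= from_stock
                plan.append(("ship from stock", product, from_stock, due))
            missing = qty - from_stock
            if missing == 0:
                continue
            quotes = [q for q in (p.quote(product, missing) for p in self.plants) if q]
            if not quotes:
                plan.append(("ask supplier agents", product, missing, due))
                continue
            best = min(quotes, key=lambda q: q.cost)  # lowest-cost proposal wins
            plan.append(("produce", best.plant, product, missing, due))
            for component, c_qty in best.components.items():
                bag.append((component, c_qty, due - 1))  # components join the tail of the bag
        return plan

if __name__ == "__main__":
    plants = [PlantAgent("P1", {"bracket": 2.0}, {"bracket": {"steel_coil": 0.5}}),
              PlantAgent("P2", {"bracket": 2.4, "steel_coil": 1.0}, {})]
    warehouse = WarehouseAgent({"bracket": 100}, plants)
    for step in warehouse.plan([("bracket", 300, 5)]):
        print(step)

The example run ships the 100 brackets in stock, asks both plants to quote the missing 200, lets the cheaper plant produce them, and then pushes the resulting steel-coil requirement back into the tail of the bag, mirroring the sequence described above.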
The MILP Model
The MILP models used by each SCN agent are variants that solve the GMOP problem including backlogs. Each time the warehouse agent requests a product, the associated MILP model is executed to check whether the agent has sufficient capacity to produce the goods in the requested quantity. Each resource has a limited available capacity, so in certain cases the agent may not have sufficient capacity to serve the order. If the agent does not have enough capacity, the timing or the new amount of product that can be served on time is determined. The mathematical models are encapsulated in each agent and are run whenever the agent is solicited. Procurement strokes are only considered by the supplier agents, because different alternative procurement operations exist. Because of length constraints, the complete model is not introduced herein; one generic variant is described in [1; 9; 15].
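As an indication of what a stroke-based formulation of this kind looks like, the following is a deliberately small sketch written with the open-source PuLP library. All data (products, strokes, costs, capacities, demand) are invented toy values, and the model omits lead times, backlogs, transport and returnable packaging that the full GMOP variants handle; it is not the model used by the case company.

from pulp import LpMinimize, LpProblem, LpVariable, lpSum

periods = [1, 2, 3]
products = ["A", "B"]                              # A: end product, B: purchased component
strokes = ["make_A", "buy_B"]
output = {"make_A": {"A": 1}, "buy_B": {"B": 1}}   # units produced per stroke execution
inputs = {"make_A": {"B": 2}, "buy_B": {}}         # units consumed per stroke execution
stroke_cost = {"make_A": 5.0, "buy_B": 1.0}
hold_cost = {"A": 0.5, "B": 0.1}
cap_use = {"make_A": 1.0, "buy_B": 0.0}            # capacity used on the single resource
capacity = {t: 80 for t in periods}
demand = {("A", 1): 20, ("A", 2): 40, ("A", 3): 30}

z = LpVariable.dicts("stroke", (strokes, periods), lowBound=0, cat="Integer")
inv = LpVariable.dicts("inv", (products, periods), lowBound=0)

model = LpProblem("toy_stroke_based_plan", LpMinimize)
# objective: stroke execution costs plus end-of-period holding costs
model += (lpSum(stroke_cost[k] * z[k][t] for k in strokes for t in periods)
          + lpSum(hold_cost[i] * inv[i][t] for i in products for t in periods))

# inventory balance per product and period (no initial stock assumed)
for i in products:
    for t in periods:
        prev = inv[i][t - 1] if t > 1 else 0
        model += (prev
                  + lpSum(output[k].get(i, 0) * z[k][t] for k in strokes)
                  - lpSum(inputs[k].get(i, 0) * z[k][t] for k in strokes)
                  - demand.get((i, t), 0) == inv[i][t]), f"balance_{i}_{t}"

# capacity limit of the single shared resource
for t in periods:
    model += (lpSum(cap_use[k] * z[k][t] for k in strokes) <= capacity[t]), f"capacity_{t}"

model.solve()
for k in strokes:
    print(k, [int(z[k][t].value()) for t in periods])

The key modelling idea, shared with the stroke concept above, is that each decision variable counts executions of an operation (a stroke) that may consume and produce several products at once, rather than counting lot sizes of a single item as in classical MRP-style formulations.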
4 Advanced Planning and Scheduling Module Implementation
Implementation approach
Before the tool was implemented, the company had its own Enterprise System (ES), which managed an MRP system. In practice, the MRP results were limited to advancing the production of the major components and to merely attempting to maintain one day of demand in stock for each of them. The biggest problem the company faced was that the number of late deliveries had grown in recent years. The reason for this was that the group had grown considerably and had to consider an increasing number of end products and production stages. Besides, production processes had become more complex, with more types of loading units, different facilities to take into account, and resources, materials and packaging alternatives to be considered. The existing ES was used to support a certain type of transactions. Plant managers claimed they had sufficient information, and their only complaint was that they did not have sufficient resources (in inventory and machines) to deal with sudden changes in demand. During implementation, the structure of the existing information system did not change. XML files were created from the existing database (which was supported by conventional BOM files and routing files) and were sent to feed the proposed APS system. During the tool implementation process, data quality in the ERP systems improved substantially because the facilitator of the new APS (who was in charge of the IT systems) placed pressure on managers to maintain it, without our intervention. After each APS execution, users received the operations plans in Excel spreadsheet files based on an XML format designed to suit their requirements.
Implementation Organizational Aspects
Probably one of the major pitfalls in the tool implementation process was that no organizational change occurred. Given the leadership characteristics of the IT facilitator, we decided to replace the information flow given to users without informing them about the new APS tool. Thus, tool implementation was transparent to most users, who never perceived that they were actually making major changes. The only noted change was that users observed that the data were of much better quality and that minor changes could be applied to the spreadsheet files as they received them. It can be stated that the tool was well accepted, since it was not known to exist as such.
Results in practice
The implementation process comprised two phases. In the first phase (before Christmas), the head of information systems checked the quality of the results. As he was highly committed to data quality, the data improved substantially. This led to a 33% reduction in delay levels, but also to a 50% increase in stock levels. In the second phase (after Christmas), users began to run the operations plans. At that time, delays disappeared completely, and the only remaining delays were due to client requests placed after the deadlines. Arguably, this reduction was due not only to the use of GMOP models, but also to the MRP system which, until then, had never been executed with good data quality. However, the use of GMOP models also allows stakeholders to handle packaging flows and alternative operations by generating feasible operations plans and by cutting delays without having to add more machinery resources. Several years after implementation, the operations planning tool is still executed daily in the company to the present day. The group's Logistics Manager changed soon after the introduction of the new APS, and the IT facilitator was removed some months afterward. However, the system continues to work, although the company owners now seek a more general (off-the-shelf and state-of-the-art) commercial ERP system. The main problem they now face is to find one that meets their expectations (i.e., one that considers alternative operations and returnable packaging).
5 Conclusions
The proposed system has been successfully implemented in a real SCN. Experiments have been carried out to evaluate the different alternatives, taking into account not only the validity of the results in terms of quality but also the computation times. The results obtained are practical in the proposed implementation and also proved interesting, because some features of the system appeared that were not foreseen. The problem has more than 600 end products (considering different types of packaging) and more than 15 agents. A future research line would be to identify other strategies for ordering products in the bag and to evaluate the best strategy in terms of total SCN costs against a centralized MILP model. Another future research line would be to introduce fuzziness in some parameters in the case of demand or available capacity data uncertainty. Acknowledgements. The work described in this paper has been partially supported by the Spanish Ministry of Science and Innovation within the Program "Proyectos de Investigación Fundamental No Orientada" through the project "CORSARI MAGIC DPI2010-18243" and through the project "Programación de producción en cadenas de suministro sincronizada multietapa con ensamblajes/desensamblajes con renovación constante de productos en un contexto de innovación DPI2011-27633". Julien Maheut holds a VALi+d grant funded by the Generalitat Valenciana (Regional Valencian Government, Spain) (Ref. ACIF/2010/222).
16,670
[ "1001986", "1001987" ]
[ "300772", "300772" ]
01472238
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472238/file/978-3-642-40352-1_22_Chapter.pdf
Maria Holgado email: [email protected] Donatella Corti email: [email protected] Marco Macchi email: [email protected] Padmakshi Rana Samuel Short Steve Evans Business Modelling for Sustainable Manufacturing Keywords: business model, business modelling, sustainable manufacturing, value creation, sustainability à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction The changing business landscape, influenced by the increasing awareness of environmental and social impact of industrial activities, is addressing new challenges that stimulates an on-going transformation process leading towards a sustainable industrial system [START_REF] Evans | Towards a sustainable industrial system[END_REF]). Hence, a broader vision for Sustainable Manufacturing has been suggested in the recent years by many authors. A comprehensive definition reflects on Sustainable Manufacturing as 'the ability to smartly use natural resources for manufacturing, by creating products and solutions that, thanks to new technology, regulatory measures and coherent social behaviours, are able to satisfy economic, environmental and social objectives, thus preserving the environment, while continuing to improve the quality of human life' [START_REF] Garetti | Sustainable manufacturing: trends and research challenges[END_REF]. However, understanding of the term 'sustainability' still varies significantly between manufacturing firms. Some consider mere compliance with environmental legislation to be sustainability; others see waste and cost reduction, or reduction of carbon emissions as sustainability; others view workplace and employee rights or community engagement as sustainability [START_REF] Bonini | McKinsey Global Survey results: How companies manage sustainability[END_REF]. [START_REF] Willard | Next Sustainability Wave: Building Boardroom Buy-In[END_REF] proposes a 'corporate sustainability continuum', through which firms' progress on the path towards sustainability. Walking along this path will imply changes in the firms that will affect several aspects of their organisation, thus an important innovation process could take place in order to integrate sustainability in the core purpose of the firm, i.e. in their business model. This integration will need to address two main issues: (i) the value created by the firm should not be only considered in economic terms, hence there is a need for a more holistic view that integrates social and environmental goals [START_REF] Schaltegger | Business Cases for Sustainability and the Role of Business Model Innovation: Developing a Conceptual Framework[END_REF]); (ii) from a network perspective, the scope of value needs to include a wider range of stakeholders in a much more explicit manner that involves relationships, exchanges and interactions, besides just economic transactions [START_REF] Allee | Value Networks and the true nature of collaboration[END_REF]. This paper makes a proposal to this end. After a state of the art review from literature, a state of practice review is presented (section 2): based on their key findings, the methodology for the development of the business modelling process is shortly highlighted (section 3) and the process itself is described (section 4). 
A discussion is eventually proposed (section 5), to compare our proposal with other processes in literature and to raise the debate on the issues still open in the research agenda. Review in business modelling State of the art review The term business model (BM) is widely used in academic and business literature [START_REF] Richardson | The business model: an integrative framework for strategy execution[END_REF][START_REF] Zott | The Business Model: Recent Developments and Future Research[END_REF][START_REF] Lee | Business Model Design Methodology for Innovative Product-Service Systems: A Strategic and Structured approach[END_REF]. Although there is a general agreement on its basic definition, considered as a simply description of how a firm does business [START_REF] Richardson | The business model: an integrative framework for strategy execution[END_REF], there is still not theoretical grounding in economic or business studies about this concept [START_REF] Teece | Business models, business strategy and innovation[END_REF]. BMs have diverse utilities within a firm, such as being a design of the value proposition, creation, delivery and capture mechanisms (Teece, 2010, Osterwalder and[START_REF] Osterwalder | Business Model Generation. A Handbook for Visionaries, Game Changers, and Challengers[END_REF] and being a source of innovation (Zott and Amit, 2007[START_REF] Teece | Business models, business strategy and innovation[END_REF], Ludeke-Freund, 2010). Authors such as [START_REF] Chesbrough | The role of the business model in capturing value from innovation: evidence from Xerox Corporation's technology spin-off companies[END_REF], [START_REF] Braet | Business Model Scenarios for Remote Management[END_REF], [START_REF] Richardson | The business model: an integrative framework for strategy execution[END_REF], [START_REF] Zott | Business Model Design: An Activity System Perspective[END_REF], [START_REF] Teece | Business models, business strategy and innovation[END_REF], [START_REF] Osterwalder | Business Model Generation. A Handbook for Visionaries, Game Changers, and Challengers[END_REF] and [START_REF] Romero | Collaborative networked organisations and customer communities: value co-creation and co-innovation in the networking era[END_REF] are key authors in business modelling literature, who have attempted either to describe a business modelling framework or a process. Although without a particular focus on sustainability, their contributions provide a useful overview of the current state of art. Concrete contributions to sustainability-oriented BMs are made by [START_REF] Stubbs | Conceptualizing a Sustainability Business Model[END_REF], Ludeke-Freund (2010) and Tukker and Tischner (2006), the latter having a focus on Product-Service-System (PSS) as a concrete type of BM. 
---A process as a cyclic approach; main element of the approach is the categorization of the actors and roles that are active in a given value network [START_REF] Richardson | The business model: an integrative framework for strategy execution[END_REF] A framework organized around the concept of value; main elements are: value proposition, value creation and delivery and value capture --- Stubbs and Cocklin, 2008 A framework for analysis consisting on structural and cultural attributes of sustainable BMs A framework (named as canvas) proposing a set of elements for the design of BMs: customer segments; value proposition; channels; customer relationships; revenue streams; key resources; key activities; key partnerships; cost structure A business design process made of 5 phases: Mobilize, Understand, Design, Implement and Manage Romero and Molina, 2011 A framework providing a multi-value system perspective; multi-stakeholder approach; the role of the customer in the co-creation process --- Table 1. Review of frameworks and processes in business modelling To sum up, the literature is lacking many sustainability related issues, primarily: (i) the integration of a broader range of stakeholders in business modelling, understanding how value might be perceived for them; (ii) a process for exploring other forms of value, rather than solely economic one, and for analyzing related relationships, exchanges and interactions. State of practice review This section provides the results of the state of practice reviewed through 6 case studies spread across norm to extreme examples of sustainability, where norm represents an incremental approach to sustainability innovation, and extreme represents a firm seeking to introduce radical change. A semi-structured interview approach was adopted to explore the current practice in the selected cases. Table 2 highlights the results for the norm cases: three cases (case A, C, D) are multinational companies, one is a start-up (case B). Table 3 highlights the results for the extreme cases: one is a start-up (case E), one is a SME (case F). The factors under analysis are: the key drivers for sustainability initiatives (row number 1); the BM innovation processes employed (row number 2); the value network perspective (row number 3). • Business modelling often has an organic/ad-hoc approach depending on radical leadership rather than tools and techniques. • Sustainability is seen more as a detached or isolated concept with difficulty in embedding it in the business purpose and processes. • Within the stakeholders discussion, their interactions and understanding of value are minimal given the dynamic and complex structure of value networks. • Governance structure influences whether or not sustainability is successfully incorporated. Research Gap The literature and practice review highlights a need for innovation in business modelling process that will assist manufacturing firms in developing and enhancing their BMs to embed sustainability. The existing knowledge in BM development is focused on generating only economic value. To extend the construct of value to include environmental and social benefits through a multi-stakeholder view, a substantial change in the way business are conceived and operated is required. Hence, this paper proposes a business modelling process that assists firms in embedding sustainability into their business, exploring other forms of value (social and environmental) and analysing value exchanges. 
Methodology The literature and practice reviews on business modelling contributed to the initial development of the proposed business modelling process. Afterwards, the development went through further iterations involving brainstorming sessions, meetings and two exploratory workshops with research and industrial partners. It further involved reviews of EU and international projects and reports and researchers working on other knowledge areas (sustainable manufacturing, value networks) were considered for idea generation and discussion. Proposed business modelling process The business modelling process herein proposed provides a multi-stakeholder view, shared-value creation with different perspectives on value and explicit consideration of environment and society as main elements for developing a sustainable BM. The business modelling process is composed of four steps. Table 4 introduces the description of each step. At the end of the process, it is required to build the governance structure for supporting BM implementation. In particular, the governance structure aims at providing better ways to manage, measure, monitor and control the business activities; hence, it would act as a support for the effective incorporation of sustainability in the BM. STEP DESCRIPTION 1 Purpose of the business This step attempts to clarify the business concepts in order to go on along next steps, understanding the strategic objectives and firms' position towards sustainability; in particular, it aims at discussing business concepts such as products & service bundles, sustainability values, industry-related needs and opportunities. 2 Identify potential stakeholders and select sustainability factors This step aims at identifying (i) the potential stakeholders within the business ecosystem and what they do value and (ii) the sustainability factors leading decisions 3 Develop the value proposition This step pursues to envision the value proposition for a firm and its stakeholders 4 Develop the value creation and delivery system and the value capture mechanism This step aims at developing the value creation and delivery system and the value capture mechanisms by defining in particular the key activities, key resources, key partners, key channels, key mind-set and the value exchanges for the firm and its stakeholders Table 4. Business modelling process -description of the steps. Table 5 presents the four steps of the process identifying the expected outputs as well as several questions that would drive their achievement, as well as the analysis and decisions at each step. Conclusions and future research Firms are attempting to explore BM innovations in order to address the new challenges of sustainable manufacturing and enhance their current BMs by incorporating economic, environmental and social sustainability in a balanced way. A new business modelling process is required that extends consideration to the broader value network, and frames value in terms of economic, social and environmental, rather than just economic aspects. A preliminary business modelling process has then been developed and refined through discussions with industrial partners. Indeed, the business modelling process presented herein has to be understood as a process that may help firms to integrate sustainability fully into the BM and redefine their business logic in order to maximize the value created and delivered through the business ecosystem, while integrating social and environmental value. 
Further research is recommended for identifying and enhancing existing tools or developing new tools for business modelling, which will assist in identifying and integrating environmental and social value perspectives in addition to economic value, and in including a multiple-stakeholder approach. The proposed business modelling process is also envisioned as a model for guiding the collection of tools. Further need is testing the business modelling process and its tools in use cases taken from real industrial contexts. emphasis on value proposition and mechanisms for value capture, focusing primarily on customers and referring exclusively to longterm economic sustainability Zott and Amit, 2010 A framework with a broader understanding of value creation through interactions along the value network; main elements are: content, structure and governance ---Lüdeke-Freund, 2010 A conceptual framework oriented to sustainability strategies driven by eco-innovations and focused on creating an extended customer value ---Osterwalder and Pigneur 2010 Table 1 1 The processes primarily guide thinking in generating economic value and do not explicitly embed or consider environmental and social concerns and benefits, nor analyse or include a multistakeholder view. Nonetheless, a range of business modelling frameworks is presented that offer a good starting point for developing a business modelling process. The Osterwalder and Pigneur canvas seems the preferable framework for its adaptation to sustainable business modelling: it covers the dominant elements discussed in summarises their main contributions Table 2 . 2 Review of business modelling process in practice -norm E (Personal transportation) F (Home and Office Furniture) 1 Perceived need for environmental friendly personal mobility solution Resource efficiency + long-term view of value opti-mization for the customer and the environment 2 Systematic innovation process + iterative redesign for optimization + current tools available not considered Little formal development of business modelling for sustainability + ad-hoc process of business improve- particularly helpful ment 3 Network of suppliers for technology, hydrogen infra-structure + local council partners for programme roll-out Removed intermediaries from distribution network for closeness to customers + Local manufacturing strat- egy + employees as key resource + Strong ties with customers & suppliers through financing structure + potential for turning firm into employee owned Table 3 . 3 Review of business modelling process in practice -extreme Followings are the overall findings of the state of practice review. Table 6 . 6 Comparison among business modelling processes Braet and Ballon, were not included in the table as they did not develop a process based on steps but a cycle with four phases (Organization, Technology, Service and Financial) Acknowledgment The paper presents some initial results from "Sustainable value creation in manufacturing networks"(SustainValue) project. The research is funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n°262931. STEP EXPECTED OUTPUTS AND QUESTIONS AT EACH STEP Table 5. Business modelling process -steps, expected outputs and questions Discussion Comparison with other business modelling processes This section presents a comparison between our proposed business modelling process and other processes from the literature review 1 . 
Table 6 shows the steps that compose each process and, in the bottom part of the table, their approaches to sustainability. It should be noted that our proposal emphasises a comprehensive vision of value (including economic, environmental and social aspects) and a broader multi-stakeholder perspective along all the steps. Another advantage is that it does not address any concrete type of BM, and it therefore remains applicable to a wider variety of firms.
17,293
[ "1002134", "1001971", "991431", "1002135", "1002136", "996324" ]
[ "125443", "125443", "125443", "237843", "237843", "237843" ]
01472239
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472239/file/978-3-642-40352-1_23_Chapter.pdf
Samuel W Short Padmakshi Rana Nancy P Bocken Steve Evans N M P Bocken Embedding Sustainability in Business Modelling through Multi-stakeholder Value Innovation Keywords: sustainability, business model, sustainable business model, business model innovation, value creation des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Background Current approaches to industrial sustainability such as cleaner production, ecoinnovation, and Corporate Social Responsibility (CSR) are enabling industry to reduce un-sustainability. However, these approaches assume that business can continue largely as usual by simply making incremental efficiency and emissions improvements, or giving a little back to society in the form of philanthropic initiatives to offset negative impacts of business. These efforts increasingly appear inadequate to address the growing challenges facing industry and society of climate change, resource scarcity, environmental degradation, and escalating concerns over social sustainability. A fundamental paradigm shift appears necessary, in which business activities and consumption patterns are aligned with environmental and social objectives. With careful business model redesign it may be possible for mainstream manufacturers to rad ically improve sustainable performance to deliver greater environmental and social value while at the same time delivering economic sustainability, as suggested by [START_REF] Stubbs | C: Conceptualizing a "Sustainability Business Model[END_REF], [START_REF] Porter | Creating Shared Value[END_REF], [START_REF] Yunus | Building Social Business Models: Lessons from the Grameen Experience[END_REF], and FORA (2010). Business Models A significant number of authors have contributed to the literature on business models and business model innovation. There appears to be reasonably good conceptual understanding, albeit, with several differing perspectives (Teece 2010 [START_REF] Zott | T he Business Model: Recent Developments and Future Research[END_REF]. [START_REF] Richardson | T he business model : an integrative framework for strategy execution[END_REF], based on a wide range of literature, identifies some common themes, and concisely summarises the elements of business models as: the value proposition (i.e. the offer and the target customer segment), the value creation and delivery system, and the value capture system. Zott and Amit (2010), present the business model from an activity system perspective, and hence view the business model as a network. This exemplifies an emerging view that business models need to be developed with a network rather than firm-centric perspective. Furthermore, Chesbrough and Rosenbloom (2002), Zott and Amit (2010), Teece (2010) and Osterwalder and Pigneur (2010), are key authors who have described a business modelling process. Sustainable Business Model A sustainable business model has been defined as 'a business model that creates competitive advantage through superior customer value and contributes to a sustainable development of the company and society' (Lüdeke-Freund 2010). [START_REF] Stubbs | C: Conceptualizing a "Sustainability Business Model[END_REF] assert that sustainable business models use both a systems and firm-level perspective, build on the triple bottom line approach to define the firm's purpose and measure performance, include a wider range of stakeholders, and consider the env ironment and society as stakeholders. 
Sustainable business models as a prerequisite must be economically sustainable. As such, [START_REF] Schaltegger | Business Cases for Sustainability and the Role of Business Model Innovation Developing a Conceptual Framework[END_REF] suggests the objective in business modelling for sustainability is therefore to identify solutions that allow firms to capture economic value from generating public environmental and social value, thereby establishing the business case for sustainability. Value Innovation and Stakeholders Business model innovation involves changing 'the way you do business', rather than 'what you do' and must go beyond process and products (Amit and Zott 2012). In other words, business model innovation is about changing the overall value propos ition of the firm and reconfiguring the network of stakeholders involved in creating and delivering the value. At the core of business model innovation is re-thinking the value proposition. Conventionally, business model innovation emphasizes almost exclusively on creating new forms of customer value. To create sustainable business, a more holistic view of the value proposition is required that takes a wider stakeholder perspective (Bowman and Ambrosini 2000), and integrates economic, social and environmental value creation. Hence, the value proposition needs to include benefits to other stakeholders and sp ecifically to society and the environment as well as to customers and the firm. Adapting Donaldson and Preston's (1995) view, six stakeholder types can be observed for sustainable business models and modelling -Customers, investors and shareholders, employees, suppliers and partners, the environment, and society. As [START_REF] Allee | Value Networks and the true nature of collaboration Online Edi[END_REF] suggests, the scope of value needs be extended in a much more explicit manner that involves understanding tangible and intangible value flows 1 between stakeholders towards identifying relationships, exchanges and interactions, and opportunities for greater shared-value creation. Tools for Business Model Innovation for Sustainability Few tools if any really assist firms in the practical creation of business models for sustainability. The tools and methods currently used are either conceptual or have not been used widely in industry, and typically rely on a well-trained (external) facilitator. One of the seemingly popular frameworks to support the generic business modelling process is Osterwalder and Pigneur's (2010) business model canvases. While being well-conceived and academically grounded its ability to generate innovative thinking beyond pure economic value creation seems limited due to the narrow view of stakeholder value. Network-centric tools for business model innovation are generally still highly conceptual to date. Tools such as Allee's (2011) Value Network Analysis (VNA) offer an approach to value mapping and understanding shared value creation, which might assist in business modelling. However, VNA maps are complicated and time-consuming to develop, and not specifically intended for business modelling. Practice Review Despite the apparent shortfalls in tools and methods, there are an increasing number of practical examples of firms successfully exploring and innovating for sustainability. 
A practice review was conducted which consisted of the assessment of case studies in the literature and popular press, augmented with in-depth case studies of five firms that are actively engaging in business model innovation for sustainability. The cases were selected to represent a range of industry sectors, and include start-up, small and medium size enterprises (SMEs) and multinational companies (MNCs). These cases range from an incremental approach to sustainability innovation through to firms seeking to introduce radical change. A semi-structured interview approach was adopted to explore how these firms conceptualise business model innovation for sustainability, and how they are seeking to embed sustainability into the core of their businesses. Several common themes emerged from the practice review:  A common recognition of the need for innovation to embed sustainability in the business by consciously considering environmental and social value.  Innovations specifically target negative impacts of business, and seek to reduce losses and waste. This appears somewhat distinct from mainstream business where emphasis is on seeking opportunities for new customer-orientated value creation.  Innovation has been approached generally in ad-hoc, incremental, and experimental manner, rather than following a prescriptive process or using specific tools.  Innovations often depend strongly on visionary leadership of a few key individuals.  The term 'value' was used often (e.g. customer value, economic value) but there seems to be considerable ambiguity about the use and meaning of the term. Innovation always presents some level of risk and uncertainty for a firm since it requires going beyond what they currently know and do [START_REF] Chesbrough | Business Model Innovation: Opportunities and Barriers[END_REF]). Business model innovation for sustainability seems likely to compound this due to the need to consider additional social and environmental dimensions of value. For this reason, and perhaps because of the lack of systematic tools, business model innovation for sustainability to date has relied on somewhat radical leaders, and has to a large extent been avoided by most mainstream manufacturers. Research Gap This paper identifies a need for a tool to assist firms in better understanding sustainable value creation within their business activities, and assist them in developing new business models with sustainability at their core. Specifically, current tools and methods lack a systematic approach for considering value for multiple stakeholders and for innovating the business model for sustainability. In general, a firm-centric approach rather than a network (system) perspective is taken. This paper investigates the fo llowing question: How can sustainability be embedded in the business modelling process through a better understanding of value? Proposed Solution -A Value Mapping Tool Based on the above review, a value mapping tool is proposed to help companies create value propositions to support business modelling for sustainability. A pilot test of this tool was conducted with a start-up company to develop the approach. The authors conducted further brainstorming against existing industrial examples to further enhance the tool. As a next stage of the research the tool will be comprehensively tested with a wider range of industrial participants. 
The novel aspects of this tool to address the gap identified in literature and practice are:  Systematic assessment of value based on the observation that business model innovation for sustainability not only needs to seek to create new forms of value, but must also seek to address value that is currently destroyed or missed.  A network-centric perspective for value innovation to ensure optimisation/consideration of value from a total network, or system-wide perspective, rather than narrowly considering a firm-centric view of value.  A multiple stakeholder view of value. Current business modelling processes and tools predominantly focus on customers and partners in the immediate value-chain. This process seeks to expand this range of stakeholders. Underlying Rationale for the Proposed Value Mapping Tool The basis of the proposed for value mapping is the observation (literature and practice) that product/service industrial networks often create a portfolio of opportunities for value innovation for their various stakeholders as illustrated below in figure 1. Fig. 1. Opportunities for Value Innovation At the core of this portfolio is the value proposition of the network. This represents the benefits derived by each stakeholder in the forms of exchange value involved in creating and delivering a product or service offering, and value in use of that product or service [START_REF] Lepak | Value Creation and Value Capture: A Multilevel Perspective[END_REF]). In delivering the value proposition, individual stakeholders and networks collectively may also destroy value through their activities. Value destroyed can take various forms, but in the sustainability context is mostly concerning the damaging environmental and social impacts of business activities. The literature often refers to these as negative externalities, but it is felt that this termino logy may tend to artificially distance these impacts from the firm. Furthermore, networks and individual stakeholders also often squander value within their existing business models. This can be conceived as missed value opportunities, where individual stakeholders fail to capitalise on existing resources and capabilities, are operating below industry best-practice, or fail to receive the benefits they actually seek from the network. This might be due to poorly designed value creation or capture systems, failure to acknowledge the value, or inability to persuade other stakeholders to pay for the benefit. There are also new value opportunities, which tend to be the more usual focus of business model innovation, seeking to expand the business into new markets and introduce new products and services. Value Mapping Tool A preliminary tool has been developed to structure the value mapping process shown in figure 2. Fig. 2. Value Mapping T ool The design of the tool is based on emphasising:  Stakeholder segments. Each segment represents a relevant stakeholder group in the product/service network. To facilitate a network-centric perspective, the firm is represented as employee and owner stakeholder groups, rather than as a discrete stakeholder.  Four representations of value. The four circles represent the forms of value that are of specific interest to the process of business modelling for sustainability proposed in figure 1. Identifying them separately encourages a more thorough and complete exploration of the current business model, and assists in identifying areas requiring change or improvement. 
The circular form of the tool was developed over a series of workshops, to facilitate a holistic system-perspective of value, to encourage equal consideration of all stakeholder interests, and explore the inter-relatedness between stakeholders. Alternative formats such as tabular data capture were tested, but the circular tool better engaged the participants, facilitated discussion of opportunities for value creation, and better stimulated creative lateral thinking. The Value Mapping Process (Using the Tool) The process follows several steps:  The process begins by defining the unit analysis as the product/service, or portfolio of products/services offered by a business unit, firm, or an industry. The focus is on the offering, rather than the firm, to support a network perspective.  Stakeholders are identified and placed in each segment of the tool. The starting point is generic stakeholder types, but the tool is populated with specific stakeholders to facilitate the analysis. Specifically society and the environment are included as stakeholders. In a workshop setting it was observed to be beneficial for some segments to initially be left blank to allow later addition during the process.  A facilitated brainstorming is then used to populate each stakeholder segment in turn with the various forms of value generated for that stakeholderstarting at the centre of the circle and working outwards. This follows a logical progression from the core value proposition by the current business model, outwards to values further removed from the core offering. By following this progression each step builds upon and is informed by each preceding step as illustrated earlier in figure 1. To maximise the potential of the tool, representatives or suitable proxies for each major stakeholder group should participate in the process to solicit broad perspectives on value. Furthermore, a lifecycle-based approach is introduced to assist participants in identifying all stakeholders and various forms of value throughout each stage in the provision of the product/service from concept through to end-of-life. Discussion Business model innovation seems a key to delivering future sustainability. This paper identifies a gap in current literature and practice for systematic tools to assist firms in business model innovation for sustainability and hence proposes a tool to assist firms and practitioners in mapping value exchanges for sustainability. The tool is intended as a first step in a business modelling process for embedding sustainability into the core purpose of the firm and its network of stakeholders. To further develop business modelling for sustainability, an approach to assist in transforming the value proposition is needed to help eliminate destroyed value or shift destroyed value into positive opportunities; seek solutions to capture missed value; and integrate new opportunities for value creation. It should be reiterated that the pu rpose of such innovation is not simply to reduce negatives, but rather to reconceive the business model to deliver sustainability. Amit and Zott (2012) suggest this might be achieved through introducing new activities, new stakeholders, or reconfiguring the existing activities and network in novel ways. A potential practical approach to facilitate this innovation process might be through building upon knowledge of existing and proven business model innovations for sustainability to date. 
Preliminary work has collated several potential business model innovation archetypes, and these are under analysis to identify defining patterns and attributes that might facilitate grouping and provide mechanisms for achieving value proposition innovation. Further work with industrial partners is planned to develop the business modelling process and to refine and demonstrate the approach.
6 Acknowledgements
This paper builds on work undertaken in SustainValue, a project funded under the European Commission's 7th Framework Programme (FP7/2007-2013). The authors gratefully acknowledge the support of the European Commission, and the contribution of the academic and industrial partners on this project in developing and testing the ideas presented herein.
1 Examples of tangible value include products, services, money, knowledge and technology, while intangible values include market access, product feedback and corporate reputation (Allee 2011).
18,176
[ "1002135" ]
[ "237843", "237843", "237843", "237843" ]
01472241
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472241/file/978-3-642-40352-1_25_Chapter.pdf
Makarova Liliyana Peter Meulengracht Jørsfeldt Brian Vejrum Jensen Waehrens Implementation of Sustainability in Ongoing Supply Chain Operations Keywords: The need to take the sustainability agenda beyond its technological outset and include supply chain practices is well-established, but still little has happened and the supply chain has remained largely unaffected. This paper asks why this may be the case and investigates what happens in the translation from ambitious strategic goals to operational practices. To do this an exploratory case study is presented detailing the efforts of a large Danish manufacturing company to introduce an ambitious sustainability agenda in its ongoing supply chain operations. The study aims to develop a deeper understanding of the inter-functional coordination and operational practices when the sustainability agenda is introduced into supply chain. The study points to a lack of tangible environmental performance measurements and to incoherent functional logics as the main factors preventing effective implementation. We find support for a lack of formalized sustainability integration into operations and clear systemic approach to cross-functional coordination. Introduction The phenomenon of sustainability has in recent years received a great deal of attention by practitioners and academics alike. Simultaneously, in private business sustainability has slowly been accepted as a strategic agenda [START_REF] Accenture | A New Era of Sustainability -UN Global Compact -Accenture CEO Study[END_REF]. The rise of sustainability as a key strategic priority has been due to a number of changes in the manufacturing environment, namely: global competition for resources and escalating deterioration of the environment [START_REF] Pagell | Building a More Complete Theory of Sustainable Supply Chain Management Using Case Studies of 10 Exemplars[END_REF]; rising supply chain cost -regulation in response to environmental protection has changed the cost structure; growing awareness of sustainability issues creates new markets for sustainable products and increases customer pressure for sustainable supply chains [START_REF] Accenture | A New Era of Sustainability -UN Global Compact -Accenture CEO Study[END_REF]. At the same time due to the phenomenon of globalization in the manufacturing environment, a new approach to competitiveness has emerged: the new idea is that it is not single functional area or even firm that competes, but competitive advantages rests in the firms capability to orchestrate the supply chain as a whole [START_REF] Goldsby | World Class Logistics Performance and Environmentally Responsible Logistics Practices[END_REF][START_REF] Pagell | Building a More Complete Theory of Sustainable Supply Chain Management Using Case Studies of 10 Exemplars[END_REF][START_REF] Dyer | Collaborative Advantage: Winning Through Extended Enterprise Supplier Networks[END_REF]. This in turn brings forward issues related to cross-functional and inter-organizational coordination and integration of which we know very little when it comes to driving key strategic agendas through. With the intensification of the globalization phenomenon, the supply chain of many companies is increasingly complex and dispersed, which also makes the pursuit of emerging strategic agendas inherently difficult. Furthermore, many companies need to respond to a non-coherent strategic demand, i.e. 
there is no single strategic demand or performance objective, which means that the company needs to balance diverse demands, which translates into several competing or even diverging performance objectives. There are numerous empirical studies that reveal the existence of managerial problems when sustainability is applied in the supply chain context [START_REF] Accenture | A New Era of Sustainability -UN Global Compact -Accenture CEO Study[END_REF]. The literature on sustainable development in operations documents that tools and techniques for implementation sustainability in the supply chain has been developed over the past 25 years [START_REF] Kleindorfer | Sustainable Operations Management[END_REF][START_REF] Seuring | From a Literature Review to a Conceptual Framework for Sustainable Supply Chain Management[END_REF]. Among some of the most dominant and applied techniques lifecycle assessment (LCA), reverse logistics, closed loop supply chains, design for disassembly can be mentioned. All confirm the link between sustainability practices in supply chains and competitive advantage in manufacturing companies [START_REF] Rao | Do Green Supply Chains Lead to Competitiveness and Economic Performance?[END_REF]. 2 Research gap Despite consensus that sustainability is a key competitive parameter and the availability of effective tools, the operational practices in the most companies remain largely unaffected. This is documented in several studies, however, only a few studies empirically investigate this problem on an operational level [START_REF] Holt | An Empirical Study of Green Supply Chain Management Practices amongst UK Manufactures[END_REF][START_REF] Porter | Strategy & Society. The Link between Competitive Advantage and Corporate Social Responsibility[END_REF][START_REF] Bowen | Horses for Courses: Explaining the Gap between the Theory and Practice of Green Supply[END_REF]. Furthermore, these available empirical studies primarily investigate the drivers of environmental behavior and describe the existing practice in order to identify supply chain environmental operational activities, and do not include factors related to the organizational context when implementing sustainability in ongoing supply chain operations and calls have been forwarded to bridge this gap in the literature [START_REF] Pagell | Building a More Complete Theory of Sustainable Supply Chain Management Using Case Studies of 10 Exemplars[END_REF][START_REF] Hald | Sustainable Procurement in Denmark 2011-Nogle Foreløbige Resultater[END_REF][START_REF] Porter | Strategy & Society. The Link between Competitive Advantage and Corporate Social Responsibility[END_REF]. Hence the purpose of this study is to go beyond the strategic and corporate realm and, on an operational level, investigate what barriers are preventing companies from adapting sustainability into their supply chains. This purpose leads to the following research question: What are the current organizational barriers preventing companies from implementing and anchoring sustainability in their supply chain practices? 
To answer the research question, this study will seek to examine the current organizational set-up and traditional key performance indicators; discuss how sustainable initiatives were approached and motivated in different parts of the supply chain operations; identify challenges of embedding sustainable development in an ongoing supply chain operations; establish the patterns of organizational changes in response to the need of more sustainable manufacturing practice and suggest the solution. Research Design Conceptual Framework for Study As several different approaches to supply chain management exist, for the purposes of our study we define supply chain management as "the systemic, strategic coordination of the traditional business functions and the tactics across these business functions within a particular company and across businesses within the supply chain, for the purposes of improving the long-term performance of the individual companies and the supply chain as whole" [START_REF] Mentzer | Defining Supply Chain Management[END_REF]. The study will take a supply chain governance perspective and the framework underlying the study is presented in Figure 1. The framework rests on system view of supply chain management process where the materials and information flows are coordinated from the market to the suppliers through the company, and then detailed organizational elements were drawn additionally in order to understands how new corporate agenda of sustainability affecting cross functional integration and coordination of the ongoing supply chain. The engagement of all partners is considered in the framework. It is done in order to get deep understanding of how partners from support functions and core supply processes are interacting when striving to carry out the strategic agenda. Strategic agendas that are set by a corporate management strategy are often quite diverse and may draw attention in many different directions. For example cost, quality, responsiveness, and sustainability are each make diverging, but also to some degree mutually reinforcing demands on the organization. Every time a new strategic agenda is set by top management, the need to readdress the supply chain governance form appears: partners from core supply chain processes and support functions have to engage in different ways to carry out tasks set by different strategic agendas and to balance the different strategic agendas. The discussion will be based on the given framework applying Kahn´s division of integration in cross-functional work, which distinguishes between interaction-based integration and collaboration-based interaction [START_REF] Kahn | Interdepartmental Integration: a Definition with Implications for Product Development Performance[END_REF]. Finally, the paper will conclude with a discussion of how roles and coordination are affected by the emerging agenda. Case Selection As a sample for our study we choose an organization that is well-ahead in its industry in terms of social and environmental performance, while still maintaining economic viability. The company has a global presence with regards to all value chain functions and employs more than 10,000 people in more than 45 countries. It has been working with the sustainability agenda for more than fifteen years. In the past five years sustainability has been established as a key competitive requirement for the future and a very ambitious goal has been publicly announced. 
In the Danish context the case company represents an extreme case [START_REF] Yin | The Case Study Anthology[END_REF] with regard to its focused efforts and ambitious goals to establish sustainable operations, but also with regard to the complexity of implementing the new agenda in the supply chain. The extreme case enables us to study the phenomenon at its edge and is likely to reveal more information [START_REF] Yin | The Case Study Anthology[END_REF]. Data Collection We used a semi-structured interview protocol to interview actors representing the different functions and parts of the supply chain engaged in the process of sustainability implementation. In this way we had the necessary flexibility to focus on what was unique to each specific process. It was important to understand how sustainability issues were addressed in the supply chain process and what challenges were experienced during implementation. The case consists of seven sub-cases mirroring the different stages of supply chain management and the key support functions thereof; the sub-cases differ both in terms of the approach applied to sustainability and in terms of interaction with other parties in the supply chain. Environmental Department. The environmental department was established to ensure and support the group strategy towards sustainable development. Being a support function, the department does not possess any power over environmental resources at the company sites. However, the department has a mandate to negotiate the yearly environmental targets for the relevant sites in the organization. The department initiates environmental projects in cooperation with local sites and other departments across the global supply chain, but at present such projects are only allowed to take place when they do not affect the material flow. The main challenges to implementation identified were the lack of authority to allocate resources and the need for cross-functional integration around the agenda. Purchasing Department. The primary business objective of the purchasing department is to secure appropriate suppliers for the organization. To live up to the company's value of sustainability, suppliers are assessed and monitored with regard to their CSR practice. While the choice of supplier is mainly driven by cost, CSR assessment is informally used as an indicator of quality. The yearly supplier audit is based on performance requirements, which are regularly revised in collaboration with the production companies, the environmental department and other functions, in accordance with corporate strategy. The following challenges were identified: how to measure the value of CSR in a way that "makes sense" at the operational level of the purchasing process, and how to integrate a CSR mindset into support processes. Production Technology Department. The idea of including the requirements from the environmental department as well as from other stakeholders at the beginning of the implementation of new equipment initially targeted a reduction of implementation time. When bringing the sustainability agenda into the process of machinery implementation, the production technology department follows the demands from the corporate level. Support functions such as the environmental department and working conditions mainly act as sources of information and do not have real power to change the processes of the material flow. 
The following challenges were identified: lack of technical competencies (knowledge of equipment) in the support functions, and the need to restructure the process to align requirements from all stakeholders in time. Production Facilities. The primary business objectives for production are to manufacture products that meet the targets for delivery time, efficiency and cost. Planning, production technology and quality are the departments engaged in decision making regarding production processes. At present there is little or no formal integration of sustainability in the production process. The governance of sustainability initiatives is perceived as the responsibility of the environmental department, and such initiatives can take place only if they do not interfere with the production process. Because of the high priority given to cost and other traditional KPIs in production, the capacity for development remains limited. Few resources can be allocated to sustainability-related work and specialized competences have not been developed; as a consequence, the translation from strategic intentions into implementable solutions remains weak. Technology Development Department. In spite of strong strategic intentions towards the development of sustainable technology, sustainability is not well integrated in the process. The main reason for this, as expressed by the environmental engineer, is that the customers of the technology center (production sites, business development) focus on cost, quality and performance, and environmental criteria are not a key priority. To change the situation, the department intends to involve customers and other stakeholders in the early stages of the machinery development process. To do so, it is planned to formalize the procedure for assessing sustainability at the operational level to make it more tangible; to then set sustainability targets for new projects while or before specifications are being made and involve customers in this early stage of development; and finally, when a project is closed, to reassess the environmental targets for new projects. The main challenges for embedding sustainability are how to make sustainability tangible at an operational level and how to raise awareness of sustainability in the department. Logistics Department. In 2009 the UN Climate Change Conference (COP15) took place in Denmark. It gave momentum to the initiative of mapping CO2 emissions from transportation in the case company's supply chain. A measurement found that one third of the company's overall CO2 emissions were due to transportation. The challenge the department faces is expressed by a transportation manager: Where do we go now? How do we operationalize the sustainability strategy and find solutions for CO2 reduction without compromising traditional performance criteria? While the existing linkages to transport providers as well as to the production sites, distribution centers and warehouses may produce trustworthy information about the transportation-related CO2 footprint, the logistics function has no influence on the flow of components in the operations network. The lack of power to influence the physical flow and the production planning in the operations network remains a key barrier to reaching the ambitious environmental targets stated by the corporate strategy. It demands that the logistics department change its role from simply providing logistics services on demand to penetrating and influencing the planning system that it responds to. The Production Improvement Function. 
The primary business objective of the function is to support and align operations strategy with operations practice at the production sites. With the sustainability agenda as one of the top priorities of corporate strategy, the operations strategy has aimed to incorporate sustainability within its ongoing lean activities. Production improvement is a support function, and while it engages with group strategy, production sites, the environmental department and sales companies to create the focus for production improvements, it has limited effect on how the initiatives take effect in the operations flow. Among the challenges mentioned is a change in behavior on the shop floor towards practicing a culture and mindset of continuous improvement. Discussion and Conclusions As one of the Danish frontrunners on sustainability, the case company has come a long way with its product and process redesign, but it also recognizes that to take the next steps the supply chain needs to be a key contributor of reductions. As can be seen from the case description above, the sustainability agenda meets many challenges once it starts to interfere with the physical flow and its well-established logics and measures of good practice. In the following, the framework developed in section 3.1 is used as a canvas for discussion, as a means of highlighting inter-functional interdependencies related to task performance as well as inter-functional logics and measures of performance. Information flow: sustainability in the case company is driven by two key motives: corporate values regarding responsible behavior and expected growing sustainability demands from customers. Yet, at present customers do not have direct and concrete demands regarding sustainability. This means that there is no direct information flow from customers to core operations in the supply chain; instead this information flow reaches corporate management. As to the information flow between core functions and support functions, it was noted that the sustainability agenda increases the amount of data flowing from operations to the support functions, and that, in order to define sustainability targets for the different departments, knowledge of specific processes becomes a key priority. Material flow: it is clear from the case study that the flow of materials is not as yet affected by any of the sustainability initiatives, due to overriding agendas. Improvements have been made, but mainly with new installations or with technology-driven refurbishments where sustainability is a key agenda. Cross-functional involvement: the sustainability agenda brings at least one new stakeholder into every core supply chain process (the environmental department), but these new stakeholders have only scarcely been involved in these processes. Moreover, the nature of involvement of the different functions is changing: new technologies are today only certified by the environmental department after they have been specified, the environmental department does not have the capabilities to be involved in the specification process, and the technology department has only limited knowledge of the environmental impact of new technologies. The analysis of the data shows that the sustainability agenda leads to an increase in the number of functional interests involved in the coordination of supply chain processes. Interaction vs. 
collaborative cross-functional integration: to meet sustainability demands, functions are changing the way they work with each other, from a sequential, interaction-based approach to an increasing focus on ongoing collaboration and reciprocal interdependencies. For example, when addressing the cost issue, departments communicate with each other using well-established and tangible measures. For sustainability, few tangible measures exist and the communication is directed much more towards achieving a mutual understanding and forming the basis for collaboration. Although the case company has come a long way towards implementing sustainability initiatives successfully, this has not had a direct influence on the flow of materials or the planning thereof. To meet the highly ambitious goals of rapid global growth and a related neutral CO2 footprint, the current sustainability initiatives need to be implemented together with material-flow-oriented initiatives. To cope with the complexity of multifunctional cooperation, a systematic approach to sustainability goals should be developed at the operations level: guidance and measurement of sustainability that are tangible for day-to-day work at the operational level. One possible solution is programme management [START_REF] Pellegrinelli | The Importance of Context in Programme Management: An Empirical Review of Programme Practices[END_REF]. The essential purpose of programme management is to direct the numerous and widely dispersed projects so that they not only support the global strategy, but also support a systematic competence and capability build-up in the organization. This is done by balancing global support and guidelines (e.g. tool-box development, knowledge-sharing platforms, control mechanisms and resource allocation) with local emergent initiatives, incentives and ownership. Fig. 1. Organizational Context of Sustainability 
22,357
[ "1002140", "1002141" ]
[ "300821", "300821", "300821" ]
01472243
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472243/file/978-3-642-40352-1_27_Chapter.pdf
Lukas Chabada email: [email protected] Heidi Carin Dreyer email: [email protected] Anita Romsdal email: [email protected] Daryl John Powell email: [email protected] Sustainable Food Supply Chains: Towards a Framework for Waste Identification Keywords: fresh food, waste, lean, food supply chain Reduction of waste in food supply chains is an important sustainability issue. More efficient utilisation and management of the resources and values created in food supply chains can contribute to improving competitiveness, and environmental and social responsibility. This study uses the seven wastes approach from lean theory to classify categories of waste in fresh food supply chains and to identify at which stages of the supply chain waste occurs. A case is used to illustrate the applicability of the classification. The analysis identifies four categories of waste in the fresh food supply chain: time, distance, energy and mass. The study indicates that the majority of waste is hidden in the time, energy and mass categories, related to overproduction, defects and transportation. Introduction The food sector, represented by farmers, food producers, wholesalers and retailers, creates and manages enormous values and resources. Achieving sustainable production, distribution and consumption of these values is a significant global responsibility, since it affects social development, welfare and health, economic development and competitiveness for actors in the food supply chain, and environmental conditions. The expected population explosion puts direct pressure on global food production [START_REF] Parfitt | Food waste within food supply chains: quantification and potential for change to 2050[END_REF][START_REF] Godfray | Food security: the challenge of feeding 9 billion people[END_REF] and stresses the need to address sustainability issues related to the level of waste stemming from resource inefficiencies in food supply chains [START_REF] Mena | The causes of food waste in the supplierretailer interface: Evidences from the UK and Spain[END_REF]. Cultivating, processing and distributing food which ends up as waste leads to a major loss in value creation [START_REF] Akkerman | Quality, safety and sustainability in food distribution: a review of quantitative operations management approaches and challenges[END_REF]. Almost one-third of the food produced for human consumption is lost or wasted globally, and the level is increasing significantly [START_REF] Gustavsson | Global food losses and food waste[END_REF]. One example of food products being wasted is fresh food products with a very short shelf life which need to be processed quickly. A critical question from a logistical point of view is what types of waste we are dealing with and where in the supply chain they occur, and how new ways to control and operate the supply chain can contribute to lowering the amount of waste. What we know is that a good balance between supply and demand of products in the supply chain will reduce the level of products that will never be sold. We also know that the different stages in the supply chain need to be integrated, the lead time needs to be kept short, and the inventory level kept low in order to reduce the risk of creating waste. There is growing interest in research on waste in food supply chains. Previous studies have concluded that food waste can be found at every stage of the food supply chain (see e.g. [START_REF] Gustavsson | Global food losses and food waste[END_REF]). 
However, a lack of studies on systematic and formal definitions and classifications which can assist in defining waste has been identified in research on waste resulting from unnecessary production and distribution activities for highly perishable products [START_REF] Rajurkar | Food supply chain management: review, classification and analysis of literature[END_REF]. This study uses the seven wastes approach from lean, which has proved to be very useful for identifying waste in manufacturing processes [START_REF] Ohno | Toyota Production System: Beyond large-scale production[END_REF]. The purpose of this paper is therefore to use the seven wastes perspective from lean in order to develop a classification for identifying waste in fresh food supply chains. The classification addresses two research questions (RQ): What types of waste can be identified in food supply chains (RQ1)? And where in the food supply chain can these wastes be found (RQ2)? The paper further describes the research methodology and then briefly characterises fresh food supply chains. Next, the seven wastes perspective is used to develop the classification and a case is used to illustrate the applicability of the classification. Research Methodology This research is part of a project focused on sustainable logistics in Nordic fresh food supply chains. The study is carried out by researchers within supply chain management focusing on different aspects of coordination, collaboration, planning and control, and real-time information. The objective of the paper is to develop a classification for waste identification in fresh food supply chains, with a focus on producers, wholesalers and retailers. The classification is based on a study of the food supply chain and lean theory literature, with particular focus on waste. In order to demonstrate the applicability and relevance of the classification, a Norwegian salad producer is used as an illustrative case. The case was selected because the company addresses waste prevention and sustainability as a strategic priority. Data were collected through interviews, discussions and observations during company visits. Interviews and discussions were conducted in physical meetings and workshops between company representatives and researchers. Fresh Food Supply Chain Characteristics From a supply chain and logistics perspective, there is a number of characteristics particular to fresh food supply chains, see Table 1. In general, market and product characteristics tend to push for shorter lead times and higher responsiveness, while the production system with its focus on economies of scale tends to increase lead times [START_REF] Romsdal | Fresh food supply chains; characteristics and supply chain requirements[END_REF]. Thus, the way the supply chain and its processes are designed and planned can often result in a mismatch of demand and supply, creating high stock levels, increasing the time spent on non-value-added operations or contributing to overproduction and an increased defect rate. Previous studies have identified food waste in all stages of the food supply chain and classified it mostly based on different food product types or on production, processing and transportation processes (Parfitt et al., 2010, Gustavsson et al., 2011, Mena et al., 2011). Darlington et al. (2009) also pointed out other wastes in the food supply chain, including bulk waste, water waste, processing waste, packaging waste and overproduction waste. [START_REF] Dudbridge | Handbook of Lean Manufacturing in the Food Industry[END_REF] discusses six out of the seven wastes of lean theory and uses his experience from the food industry to discuss food and other wastes in the food supply chain. His contribution, however, lacks a consistent structure and a connection to academic relevance. 
The discussion above therefore shows the lack of attention that has been given to consistently mapping and classifying the different wastes occurring in fresh food supply chain operations. Identification of Waste in Fresh Food Supply Chains The purpose of this paper is to address the gap mentioned above by using the seven wastes from lean as a basis for constructing a classification for waste identification in fresh food supply chains. The study focuses on food producers, wholesalers and retailers in the supply chain. Next, the seven wastes are explained and related to different activities in fresh food supply chains, and four waste categories are identified. Seven Wastes of Lean and Fresh Food Supply Chains The systematic identification and elimination of waste is known to be a central element of the Lean Production philosophy [START_REF] Ohno | Toyota Production System: Beyond large-scale production[END_REF][START_REF] Liker | The Toyota way: 14 management principles from the world's greatest manufacturer[END_REF][START_REF] Shingo | A study of the Toyota production system from an industrial engineering viewpoint[END_REF]. In order to identify waste in manufacturing processes, [START_REF] Ohno | Toyota Production System: Beyond large-scale production[END_REF] classifies seven types of waste: Transportation, Inventory, Motion, Waiting, Overproduction, Over-processing, and Defects. Though these seven types of waste have proven to be very useful for identifying waste in the production of discrete automotive components, this type of classification fails to illustrate much of the waste inherent to fresh food supply chains. Lean theory defines waste as any activity that adds cost or consumes time but does not add value to the customer (Womack and Jones, 2005, Ohno, 1988). Value-added activities describe the best combination of processes and operations which are necessary to make the product, delivering the highest quality, for the lowest cost, on time to the customer [START_REF] Ohno | Toyota Production System: Beyond large-scale production[END_REF]. In order to find out how much is spent on both value-added and non-value-added activities, lean theory offers different tools, such as Value Stream Mapping. However, this technique places emphasis on time as the main resource which is wasted. This does not precisely show all the wastes created in the fresh food supply chain, since it does not include wasted food or energy. Therefore, other categories are added in the proposed waste classification. The categories were chosen on the basis of findings from the operations management literature considering fresh food characteristics. The first category is time, which materials, products, equipment, machines and people spend in a specific process or operation (Jones and Womack, 2002, Dudbridge, 2011). The second is distance, showing how many meters or kilometres were spent on the transportation of materials and products (McCarthy and Rich, 2004, Dudbridge, 2011). The third is energy, represented by the electricity and fuel spent by trucks, conveyors, fridges, freezers and other equipment and machines [START_REF] Dudbridge | Handbook of Lean Manufacturing in the Food Industry[END_REF][START_REF] Swink | Managing operations across the supply chain[END_REF]. The last suggested category is mass, representing the amount of materials and products which are processed and distributed across the supply chain [START_REF] Gustavsson | Global food losses and food waste[END_REF][START_REF] Parfitt | Food waste within food supply chains: quantification and potential for change to 2050[END_REF]. Towards a Framework for Waste Identification Below, each of the seven wastes is discussed in terms of the four categories of waste and related to the three actors of the food supply chain: food producers, wholesalers and retailers (based on [START_REF] Ohno | Toyota Production System: Beyond large-scale production[END_REF][START_REF] Jones | Seeing the whole[END_REF][START_REF] Dudbridge | Handbook of Lean Manufacturing in the Food Industry[END_REF]). Transportation is in itself a value-added activity, bringing the product closer to the customer. 
Transportation waste is then the unnecessary transportation of materials and products within or outside the company, tying up extra time, distance and energy, and, through excessive handling of sensitive fresh food products, contributing to food waste [START_REF] Hines | Going lean[END_REF]. Examples are moving material to and from a distant machine at the food producer, transporting materials and products from one place to another at the wholesaler just to make room for new inventory, or moving products from the store inventory to the shelf and back at the retailer stage [START_REF] Hallihan | JIT manufacturing: the evolution to animplementation model founded in current practice[END_REF]. A certain amount of inventory is necessary in order to be responsive in fulfilling uncertain customer demand and to keep the customer service level high. Inventory waste is therefore characterized as unnecessary inventory in the form of raw materials, work-in-process, or finished goods that exceeds what is required to meet customer needs just in time and to keep the process running smoothly [START_REF] Sutherland | The seven deadly wastes of logistics: applying Toyota Production System principles to create logistics value[END_REF]. This ties up valuable shelf-life time and the energy of cooling equipment, and often leads to food waste. Examples of inventory waste are all products that are not sold before the deadline determined by safety regulations, or that are lost to product deterioration, at every stage of the food supply chain. Motion waste can be defined as time spent on unnecessary movements of the operator, such as unnecessary walking, stretching or reaching for a tool or piece of equipment [START_REF] Sutherland | The seven deadly wastes of logistics: applying Toyota Production System principles to create logistics value[END_REF]. Examples are the movement of the operator to pick up the packing tape, reaching for a hammer, or the walking distance to the pallet while filling the shelf in the store. Motion waste could be reduced by better ergonomics. Waiting waste is unnecessary waiting by machines and people, spending extra labour time and the energy of idling machines [START_REF] Sutherland | The seven deadly wastes of logistics: applying Toyota Production System principles to create logistics value[END_REF]. It might be seen as the time an operator waits to assemble the next product, a packing machine waiting for products to be delivered, or an operator waiting for products to be loaded onto a shelf in the retail store. Overproduction means making what is unnecessary, when it is unnecessary, and in unnecessary amounts. It occurs when items for which there are no orders are produced. Overproduction in its original sense occurs only at the food producer level, when too many products are produced even though there is no order for them. At the wholesaler and retailer level, overproduction can be seen as ordering more than is demanded in order to buffer against uncertainties in demand timing and quantities. Overproduction increases inventory and spends extra time, distance, energy and mass. Over-processing waste refers to any process that does not add value for the customer or gives more value to the product than the agreed standard. This takes extra time and energy and increases the risk of products being wasted. An example of this waste is excessive quality checks of fresh products at every stage of the supply chain. 
Defects include materials and products wasted during the production and distribution process, and rejected materials and products which have to be reworked [START_REF] Dudbridge | Handbook of Lean Manufacturing in the Food Industry[END_REF][START_REF] Sutherland | The seven deadly wastes of logistics: applying Toyota Production System principles to create logistics value[END_REF][START_REF] Mccarthy | Lean TPM: a blueprint for change[END_REF]. Waste from rework includes the resources needed to make repairs, while waste from rejects results in the waste of time, energy, and all other resources put into food products during their production and distribution. The cost of inspecting for defects and responding to customer complaints is also waste related to defects. Table 2 summarises the discussion above in terms of time, distance, energy and mass, and shows at which stages in the food supply chain they can be identified. The classification shows that time, energy and mass are the most common wastes across the food supply chain and that the main contributors are overproduction, defects and transportation, followed by inventory waste. On the other hand, the least waste seems to be related to excessive motion. Table 2. Classification of wastes in fresh food supply chains Case: the Salad Supply Chain In order to illustrate how the waste classification can be used, a case from the Norwegian food sector is used. The case consists of a supply chain for the distribution of processed salads from the producer, through a distribution point, to a typical grocery store ([START_REF] Yggeseth | Ferskere produkter og mer effektiv logistikk i dagligvarebransjen[END_REF], Strandhagen et al., 2011). The shelf life of the salad is around 8-10 days from the production date. The supply chain is illustrated in Fig. 1, showing actors, material and information flows, stock points and the physical process. The factory produces several variants of salads, and the production process for most products consists of quality inspections, cutting, washing, and assembly into various product mixes, before packing and storing. Salads are manufactured to stock, but orders from distributors or wholesalers are used for estimating procurement, production, and supply planning. Production planning uses the principle of optimal batch-sizing. Discussion and Conclusion The waste classification from Table 2 can be used as follows. First, on the basis of the seven wastes definitions and the proposed waste categories, we try to identify these in the operations of the supply chain. Transportation waste could be spotted as unnecessary transport of products between several stock and distribution points. Deeper investigation could show how many man-hours, and how much fuel and electricity, are spent unnecessarily. Inventory waste can be counted as the amount of products wasted in the multiple stock points and buffers along the supply chain while products wait for orders to arrive. Overproduction waste can be indicated from planning based on forecasting and production planning based on optimal batch sizes. Over-processing waste could be the multiple quality checks of salads at four stages in the supply chain (salad factory, distribution point, wholesaler, and retail store). Finally, defects waste can be spotted in the excessive handling of temperature-sensitive products resulting in waste. Identification of motion and waiting waste in this case would require a deeper analysis of shop floor activities. 
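As a rough illustration of how the classification summarised in Table 2 could be operationalised, the following minimal Python sketch encodes the seven lean wastes against the four wasted-resource categories and tags hypothetical observed events (such as those from the salad case) by actor and waste type. The category assignments follow the qualitative discussion above rather than the exact cells of Table 2, and all record fields and names are illustrative assumptions, not part of the original study.

```python
# Illustrative sketch only: the category assignments follow the qualitative
# discussion above, not the exact cells of Table 2, which are not reproduced here.
from dataclasses import dataclass

ACTORS = ("food producer", "wholesaler", "retailer")

# Seven lean wastes mapped to the four wasted-resource categories
# (time, distance, energy, mass) proposed in the paper.
WASTE_CATEGORIES = {
    "transportation":  {"time", "distance", "energy", "mass"},
    "inventory":       {"time", "energy", "mass"},
    "motion":          {"time"},
    "waiting":         {"time", "energy"},
    "overproduction":  {"time", "distance", "energy", "mass"},
    "over-processing": {"time", "energy", "mass"},
    "defects":         {"time", "energy", "mass"},
}

@dataclass
class WasteEvent:
    """One observed waste occurrence in the supply chain (hypothetical record)."""
    actor: str        # one of ACTORS
    waste_type: str   # one of the seven lean wastes
    description: str

def wasted_resources(event: WasteEvent) -> set[str]:
    """Return the resource categories affected by an observed waste event."""
    if event.actor not in ACTORS:
        raise ValueError(f"unknown actor: {event.actor}")
    return WASTE_CATEGORIES[event.waste_type]

# Example inspired by the salad case: products moved between multiple stock points.
event = WasteEvent("wholesaler", "transportation",
                   "salads moved between distribution point and buffer stock")
print(wasted_resources(event))  # resource categories tied up by this event
```

Once real observations are recorded, such a structure could, for instance, be used to total the man-hours, energy or kilograms attributed to each waste type per actor.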
From the above, we see that the classification assisted in identifying a number of wastes beyond the simple perspective of volumes of food being wasted or thrown away in the supply chain. We observe that several of the wastes stem from the way the supply chain is designed and managed. For instance, the lack of sharing of real demand data with upstream actors can lead to excessive use of the resources put into producing, moving and storing products with no actual order. The multiple stock and handling points along the supply chain also increase lead time, energy use and food waste. This paper has added to the literature on waste in fresh food supply chains by identifying the types of waste (RQ1) and where these can be found in food supply chains (RQ2) (see Table 2). Different categories of waste (time, distance, energy and mass) have been identified and discussed within each of the seven wastes and for the different supply chain actors. The applicability of the classification has been illustrated in a case. A theoretical contribution is an increased understanding of waste in fresh food supply chains. By applying the seven wastes perspective from lean, the paper contributes new insights into how waste can be identified not only in production but also in wholesale and retail operations. In practice, the classification can assist supply chain actors in analysing their processes and thus provide a basis for identifying the causes of waste in order to reduce it. A limitation of the study lies in its focus on only three supply chain actors and on the physical flow only, excluding information flow analysis. The classification suggests the same types of waste for each actor of the supply chain; however, the priority given to reducing these wastes might be different for each of the actors. Further research should therefore focus on conducting more empirical cases which would verify and complement the findings of the waste classification. Also, the trade-offs between different measures must be evaluated, where for instance less inventory could result in increased transportation waste. Fig. 1. The salad supply chain 
Table 1. Fresh food supply chain characteristics (based on Romsdal et al., 2011)
Area -Characteristics
Product -• High perishability (raw materials, intermediate and finished products) • Increasing product variety, packaging sizes and recipes
Market -• Customers demand frequent deliveries and short response times • Varying and increasing demand uncertainty • Limited ability to keep stock • Cost of lost sales often higher than inventory carrying costs
Production system -• Long production lead times, long set-up times, high set-up costs • Production adapted to high volume, low variety
Table 2 (column headings): Actors in the FSC -Food producers, Wholesalers, Retailers
Acknowledgement This study has been made possible by the funding received from the Nordic research organization NordForsk via the LogiNord project.
21,378
[ "991801", "991659", "1001994", "999774" ]
[ "476350", "476350", "264312", "476350" ]
01472247
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472247/file/978-3-642-40352-1_2_Chapter.pdf
Ulrich Brandenburg email: [email protected] Sebastian Kropp email: [email protected] Jorge Sunyer Daniel Batalla-Navarro Energy Efficient Production through a Modified Green-PPC and a Communication Framework for the Energy Supply Chain to Manage Energy Consumption and Information Keywords: Production planning and control, energy efficiency, energy management systems, inter-organisational information systems This paper presents the definition of an energy product model through a holistic approach to energy management that recognizes both the side of the energy provider and that of the energy consumer. To this end, it is necessary to design a model for a "Green PPC" that uses energy consumption as an additional planning and control criterion to enable a producing company to optimise and forecast its energy consumption. Furthermore, an inter-organisational information system is described that allows an information exchange with the energy supplier in order to include energy use in production planning. Introduction The implementation of energy management processes and tools is growing in importance for today's organisations. Especially within the manufacturing industries, which are more and more reliant on the availability and efficient management of the scarce resource energy, the sensitivity to this topic is heightened also at the decision-making level. The decisive factors for this development are the increasing cost of energy, aspects of sustainability and an increasingly tightened regulative environment. Increasing demand for energy coupled with a decreasing supply on the world markets results in continuously increasing prices for energy. This general development, as well as the dynamics of price setting, creates uncertainty for organisations with respect to accurately calculating energy costs [START_REF] Jucker | Electrical energy -The challenge of the next decades[END_REF]. The demand for energy efficient production is intensified by the demand of customers for sustainable products. Due to this increasing demand, organisations which are able to produce efficiently can differentiate themselves from the competition. Finally, organisations are forced by developments within the regulative environment to adhere to higher standards of efficient production and to introduce energy management programs. Within the regulative environment and the market, a shift of focus from the individual organization towards its supply chain can be observed. Consequently, transparency with respect to all aspects of energy management is of the utmost importance. On the basis of these developments it becomes clear that any activities carried out in isolation from relevant third parties or complementary activities, with the overall goal to achieve efficiency, are likely to fail or lead to suboptimal results. In alignment with [START_REF] Rahimifard | Minimising Embodied Product Energy to support energy efficient manufacturing[END_REF], it must be recognized that improvements in productivity and reductions of the energy consumed in the various manufacturing applications and activities may only be reached by an approach that allows for the consideration of multiple activities and the involvement of relevant third parties. To allow for this flexibility and to be able to incorporate a variety of approaches, the concept of product modelling is utilized. 
Energy Product Model The aim of the energy product model is to provide the means for a holistic approach to energy management and hence to induce a more efficient and effective utilization of energy. Essential within the context of energy and energy management is information about it (for example: prices, generation, etc.). In the future, energy as a product can be described through the information attached to it. In that sense the energy product model aims at capturing the necessary information and its streams along the energy supply chain and within the single company, with the overall focus of utilizing this information to improve overall energy efficiency. To provide a holistic approach that is able to cater to the problems identified above, the model recognizes both the side of the energy provider and that of the energy consumer. The following paragraphs review the three levels of production, company and supply chain that the model encompasses (cf. Fig. 1). On the production level (C) the critical variables of production planning and control are analysed. The model to be built on this level is based on the acknowledged Aachen PPC-Model [START_REF]Produktionsplanung und -steuerung. Grundlagen, Gestaltung und Konzepte[END_REF] and extends the four reference views of that model to incorporate the aspects of energy efficient production planning and control. On the basis of the EnergyBlocks [START_REF] Weinert | Methodology for planning and operating energyefficient production systems[END_REF] concept, every process will include its energy consumption information. This is necessary to be able to forecast the energy consumption of the planned processes and to identify power peaking situations. In a second step, the remaining power-consumption peaks caused by overlapping processes have to be timed in such a way that they no longer overlap, thus achieving a smoother energy consumption curve and resulting in overall savings of costly maximum load energy [START_REF] Westkämper | Energiewertstrom. Der Weg zur energieeffizienten Fabrik[END_REF]. On the company level (B) the energy flows within an organization are combined with its information flows. The goal is the recording and assessment of all relevant information and energy flows from the surrounding area (heating, ventilation and air conditioning, etc.) down to the production level. In this context, energy is no longer perceived as an overhead cost but as one that is directly linked to the single product and production phase and needs to be managed accordingly, like all other resources. Finally, the data collected within the individual organisations can be aggregated on an overarching level (A) and hence provides a connection to the energy supply chain in which the organization is embedded. 
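As a minimal sketch of the production-level idea described above (EnergyBlocks-style per-process energy profiles aggregated to reveal power peaks caused by overlapping processes), the following Python fragment sums hypothetical load profiles over planned start periods and flags the intervals that exceed an assumed contracted maximum load; it illustrates the principle only and is not the authors' Green-PPC implementation.

```python
# Minimal sketch, not the authors' Green-PPC implementation: each planned process
# is described by a start period and a per-period power profile (kW), in the
# spirit of the EnergyBlocks concept. All numbers are hypothetical.
planned_processes = {
    "milling": {"start": 0, "profile": [30, 45, 45, 20]},
    "washing": {"start": 1, "profile": [25, 25, 25]},
    "packing": {"start": 2, "profile": [15, 15]},
}
MAX_LOAD_KW = 80  # assumed contracted maximum load

def total_load(processes: dict) -> list[float]:
    """Aggregate the power profiles of all planned processes into one load curve."""
    horizon = max(p["start"] + len(p["profile"]) for p in processes.values())
    load = [0.0] * horizon
    for p in processes.values():
        for offset, kw in enumerate(p["profile"]):
            load[p["start"] + offset] += kw
    return load

def peak_periods(load: list[float], limit: float) -> list[int]:
    """Return the planning periods in which the aggregated load exceeds the limit."""
    return [t for t, kw in enumerate(load) if kw > limit]

load = total_load(planned_processes)
print(load)                             # [30.0, 70.0, 85.0, 60.0]
print(peak_periods(load, MAX_LOAD_KW))  # [2] -> rescheduling (peak shaving) needed here
```

Shifting the start periods of flexible processes until no interval is flagged corresponds to the peak-shaving step referred to above.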
Intra-Company Communication Framework Although it can easily be argued that manufacturing companies have a strong need to be more efficient in their use of energy, it is nonetheless at the moment quite complicated for any company to assess how its processes are performing in terms of energy efficiency. [START_REF] Schlosser | Advanced in Sustainable Manufacturing[END_REF] underline this by establishing that until now it has not been possible to classify manufacturing processes or machine tools into standards based on their energy efficiency. Considering that both operations and possible improvement measures are usually implemented at the single process level [START_REF] Li | Eco-efficiency of manufacturing processes: A grinding case[END_REF], the calculation of energy efficiency must be integrated in the bottom layer and then transferred to the upper levels throughout the IT systems. Fig. 2 gives an overview of the necessary information flows through the different layers of IT systems. The main reason to base the model on the Aachener PPC [START_REF]Produktionsplanung und -steuerung. Grundlagen, Gestaltung und Konzepte[END_REF] is its vertical view of manufacturing companies. Consequently, the proposed model represents an application and extension of the Aachener PPC model, taking energy consumption into consideration at the moment of planning the production processes. Fig. 2. Holistic Company Information View Connecting the Layers -Enterprise Resource Planning and Smart Grid EMS ERP systems need to be prepared to face and anticipate the challenge of the coming "intelligent energy delivery", which will be fully implemented based on smart grid technologies in 2020 [START_REF]California ISO: SMART GRID -Roadmap and Architecture[END_REF]. The main task concerning energy management is to define which impacts different decisions concerning energy efficiency have on the other functions of the company (e.g. finance and accounting) as well as on production planning. In that regard, a focus is put on the analysis of the costs involved with respect to energy consumption in the production plan. Additionally, a focus is placed on real-time information flows because of the rising dominance of e-business tools, which have added a higher degree of velocity to all activities of the industry and have a huge impact on all the systems related to production processes, such as Customer Relationship Management (CRM), Manufacturing Execution Systems (MES) and of course the Enterprise Resource Planning (ERP) systems [START_REF] Lee | [END_REF]. Overall, in contrast to the Aachener PPC model [START_REF]Produktionsplanung und -steuerung. Grundlagen, Gestaltung und Konzepte[END_REF], in the model proposed within this paper an extra layer is added corresponding to the smart grid systems, as the communications with the energy providers are not part of the products and goods supply chain. Communication Framework for the Energy Supply Chain On a very basic level, energy related problems can currently be categorized within either the realm of forecasting or that of traceability of energy consumption. The issue of forecasting encompasses the current problem that the various energy providers face with respect to accurate forecasts of the energy consumption of their customers. Currently, energy providers base their forecasts almost solely on energy consumption data from the previous year. Naturally this method is rather inaccurate, as the energy consumption of a business fluctuates for a variety of reasons and not all of these factors are stable across time. With a cloud-based service, forecasting can be tremendously improved: the single organisations that are members can feed data into it in real time and the energy providers can then aggregate it. Basically, cloud computing describes different types of services in a layer model (infrastructure, platform, software) and distinguishes private, public, community and hybrid clouds depending on the exclusiveness of the service model [START_REF] Schubert | Cloud Computing for standard ERP systems: Reference framework[END_REF]. 
Because of its ability to facilitate data exchange between even physically and structurally different parties, the cloud computing concept can play a fundamental role in solving energy related problems within the realm of both B2B and B2C issues. The Communication Model The communication within the cloud is based on interconnections between the different actors. In order to provide a good balance of energy, it is mandatory to have an accurate and reliable forecast. Communication plays a fundamental role in accomplishing this objective and must therefore be carried out with a high level of speed and reliability. Having a cloud system running the communications allows the Transmission System Operator (TSO) to better forecast energy consumption, since each industry will be able to report in advance how much energy is going to be needed. Therefore, real-time usage and generation of electricity will be monitored. The following information model (Fig. 3) contains all the interconnections of the main actors needed to provide a good balance of the electricity. Renewable generators, with knowledge of the weather conditions for the following days, can make a first estimation of how much electricity they can provide. Based on the maximum power capability, a calculation is made by the renewable generators and forwarded to the TSO. This information will also be accessible to industries. The same procedure is carried out by non-renewable power plants, which do not rely on weather conditions to make their forecast. The industries are able to access information on forecasted energy generation through the public cloud and, given this data, they can forecast their own energy requirements. Fig. 3. Energy Supply Chain Information Model The model offers the possibility of introducing self-generation devices to the industries. The only difference between renewable generators and the industries lies in the information; the information is kept in a private cloud and the industries can determine whether or not to use it. Before the industry forecast is sent to the TSO, the possibility to change the forecast using storage mechanisms is offered. This model also offers the possibility of forming an alliance between different industries in the same physical area. Once the TSO has all of the forecast consumption reports from the industries and the generation forecast reports, it is able to make a more accurate prediction for balancing the electricity. In order to balance the electricity load, the most important information is the quarter-hourly data on how much electricity is being consumed in each industry. With these three sets of reports, the TSO can better balance the electrical grid and give more accurate orders to the generators. This kind of communication system automatically results in a more efficient generation-consumption relationship. Balance in the Cloud To provide a good balance and performance of the needs and usage of the electrical grid, the TSO needs to use different EMS systems to perform the calculations. Most of the data that the different EMS systems need is pushed through the cloud. All the EMS systems set up in the cloud have many interconnections and different functions, but the most important interconnections and the main utilities in use can be represented in a single model map (Fig. 4). Fig. 4. EMS and Utilities 
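A highly simplified sketch of the balancing logic described above: the TSO aggregates the generation forecasts and the quarter-hourly consumption forecasts reported by the industries and computes the expected surplus or deficit per interval. The actor names, the figures and the flat list-based exchange are hypothetical placeholders for what would in practice be cloud-based EMS services.

```python
# Illustrative only: in practice these reports would be exchanged via cloud-based
# EMS services rather than hard-coded lists, and grid balancing involves far more
# than a simple per-interval difference.
QUARTER_HOURS = 4  # one hour of quarter-hourly intervals, kept short for brevity

generation_forecasts = {          # MW per quarter-hour (hypothetical)
    "wind_park_A": [120, 110, 95, 90],
    "gas_plant_B": [200, 200, 200, 200],
}
consumption_forecasts = {         # MW per quarter-hour reported by the industries
    "industry_X": [150, 160, 170, 160],
    "industry_Y": [140, 150, 150, 145],
}

def aggregate(reports: dict) -> list[float]:
    """Sum the per-interval values of all reports into one profile."""
    return [sum(r[t] for r in reports.values()) for t in range(QUARTER_HOURS)]

def balance(generation: list[float], consumption: list[float]) -> list[float]:
    """Positive values: expected surplus; negative values: expected deficit."""
    return [g - c for g, c in zip(generation, consumption)]

expected_balance = balance(aggregate(generation_forecasts),
                           aggregate(consumption_forecasts))
print(expected_balance)  # [30, 0, -25, -15] -> intervals needing extra generation
```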
Furthermore, web-posted services are also considered in the communication model. In order to provide good information about the electrical network, some data should be posted to give clear and transparent information about the electricity or the facilities. Some of these reports are made mandatory by the European Union, such as real-time demand, forecasted demand or real-time generation. These reports will also gather the information from the different devices used to measure or to control the demand and the status of the network. Conclusions and Outlook The approach of creating an energy product model that gives an overall view of the intra-company and corporate information flows concerning energy efficiency is the first step towards an energy-efficient PPC. It enables energy-efficient manufacturing by reducing costly power peaks through an optimized PPC and a more transparent consumption forecast that allows operating at a more efficient operating point. The accumulated real-time consumption data and the consumption forecast that are processed through the company's systems and shared with the TSO allow the network load to be planned with higher accuracy. A further step towards energy-efficient production planning is the inclusion of energy efficiency within existing target systems for production planning and control. Hence, the interdependencies between variables affecting economic efficiency and energy efficiency have to be analysed. Based on that, it will be possible to define parameter settings (e.g. order release, lot sizes, etc.) that increase the overall energy efficiency of a production system while assuring a favourable degree of economic efficiency. These settings will be highly company-dependent and have to be defined for each company individually. Fig. 1. Layers of the Energy Product Model 
15,525
[ "991575", "1002148" ]
[ "303510", "303510", "303510", "303510" ]
01472249
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472249/file/978-3-642-40352-1_31_Chapter.pdf
Eryk Głodziński email: [email protected] Design of controlling supported sustainability of manufacturing enterprises Keywords: controlling, design, modelling, management tools In the paper, controlling as a management concept supporting the sustainability of enterprises is characterised. Controlling is treated as a very important element of the managerial accounting system in manufacturing companies. In the first part of the paper, company sustainability is presented. Next, the stages of controlling design are characterised. In the third part of the paper, the controlling model is shown. Finally, various instruments supporting controlling and enabling the balanced growth of manufacturing enterprises are listed, and the relationships between them are presented. Introduction In the global economy, enterprises should operate based on the criteria of efficiency, effectiveness and social responsibility. These criteria are the basic assessment factors in the decision-taking process of companies that apply a strategy of balanced growth. In such a case, the weight of each criterion depends on many determinants, e.g. the economic situation of the company and of the market, the competitive position of the company, or the expectations of society and other stakeholders. The decision-taking process is the major element of the management process and should be supported by creating proper information using various complementary management tools. These tools can be divided into three groups: concepts, methods and techniques. The paper presents the results of research connected with the design of controlling as a concept which supports production management. The research is based on the analysis of selected existing controlling systems in international companies present on the European market and on a proposal to apply the universal model of controlling design created by S. Marciniak [START_REF] Marciniak | Controlling. Filozofia, projektowanie[END_REF] to manufacturing companies. The main thesis of the paper is the assumption that controlling will support the sustainability of manufacturing enterprises if the design process follows the right methodology. A proposal for such a methodology is presented in the paper. The article does not take into consideration the issue of the sustainability of products, but analyses the sustainability of the company that develops the products. Sustainability of the company with controlling Controlling, in the meaning used in this paper, should deliver proper information for the decision-taking process and is associated with an early-warning system against threats. Controlling connects many planning and control methods/techniques. One of the most important stages of designing controlling is to choose and integrate these tools in one system targeted at the strategic goals of the company. Sustainability means being able to operate on the market, overcoming the problems which could threaten the normal existence of the enterprise, and ensuring the company's development in the long term. 
Regarding the manufacturing enterprise, sustainability in current market conditions can be associated with: • assembling goods while avoiding production interruptions (such as accidents at the workplace, equipment failures, delays in material delivery, etc.), • reducing waste (by assembling the goods according to the technological documentation, ensuring the quality of services and of the supplied materials, maintenance management, etc.), • recycling waste when it occurs (re-using the material in other technological processes, storing the waste in an environment-friendly way), • developing the whole company: its units, functions and processes, especially manufacturing processes and other supporting processes like R&D, supply, etc. This will be possible if the company is able to control its activity, especially in the above-mentioned tasks. Control of activity has to be preceded by planning. Both processes should be integrated in the company as one controlling system. In practice, this integration means the collaboration of various management methods and techniques. The issue is to choose the ones which can be used in a complementary way. It is obvious that the system should simultaneously use the results (information) of the above-mentioned processes. The aggregation of selected management methods and techniques can proceed through the controlling concept. Its main principles are: establishing measurable goals, continuous analysis of target and actual data, predicting the future, providing cost accounting in accountability centres, and applying feedback loops. It is important that the controlling system should not use tools which can provide contradictory results or whose cost of providing data is too high in comparison to their decision value. In the assembly sector of the economy, planning and control processes are based mainly on measurable (quantitative) indicators, e.g. the degree of utilisation of production capacity, the amount of sales revenue, the amount of production costs, or the number of finished goods compared to waste. However, ensuring the sustainability of a company with controlling requires including not only quantitative but also qualitative assessment. The main issue is applying and aggregating both kinds of results in one system. The solution to such a problem could be the design of a special assessment system unique to the company, or building on existing tools, e.g. the Balanced Scorecard created in 1996 by R. Kaplan and D. Norton [START_REF] Kaplan | Using the Balance Scorecard as a Strategic Management System[END_REF], which divides the goals of the company into four perspectives: financial, internal business process, learning and growth, and customer. The above-mentioned management tool translates a business unit's strategy into tangible objectives and measures. One of the major problems of the design process is how to measure the factors which are not quantitative. These issues have not been solved satisfactorily by researchers and practitioners to date. Nevertheless, the Balanced Scorecard is currently very popular in business as well as in research. Considering research, especially by Nobel prize winners in economics (J. Stiglitz [START_REF] Stiglitz | Globalization and Its Discontents[END_REF] and P. Krugman [START_REF] Krugman | Economics[END_REF]), and after the start of the worldwide crisis in 2008, the approach to the assessment of company activity (the object of planning and control) should be partly revised. Now the most important goals of a company can be divided into three groups: achieving a satisfactory financial result (economic indicators), doing business with regard to social aspects (connected with employees and society -social indicators), and protecting the natural environment (environment-friendly indicators). The selected groups of criteria are part of the economic theory of balanced growth. The theory is especially important for medium-sized and big enterprises which want to operate and develop on the market over the next several years. In such a case they have to measure not only financial indicators but non-financial ones as well. Small companies are characterised by a different range of problems. The vital operating factor for most of them is the financial result, and sustainability is associated mainly with low-cost operation. Even so, maintaining sustainability in a developing economy requires a good social relationship with society. The relationship is created especially by brand image, key customer care, a good supplier network, and environment-friendly behaviours. The last-mentioned factor is built by minimizing the amount of waste and emissions.
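To illustrate how quantitative and qualitative assessments of the three groups of goals introduced above might be aggregated in a Balanced-Scorecard-like measurement system, the following minimal Python sketch combines economic, social and environmental indicators into one weighted score. The indicator names, their normalisation to a 0-1 scale and the weights are purely hypothetical assumptions and would, as the paper stresses, have to be defined individually for each company.

```python
# Minimal sketch only: indicators are assumed to be pre-normalised to a 0-1 scale
# (1 = target fully met); the indicator names and weights are hypothetical examples.
indicators = {
    "economic":      {"sales_revenue": 0.9, "production_cost": 0.7, "capacity_use": 0.8},
    "social":        {"accident_rate": 0.95, "employee_satisfaction": 0.6},
    "environmental": {"non_recycled_waste": 0.5, "emissions": 0.65},
}

# Weights reflect the company- and situation-dependent importance of each group of criteria.
group_weights = {"economic": 0.5, "social": 0.25, "environmental": 0.25}

def group_score(values: dict) -> float:
    """Average the normalised indicators within one perspective."""
    return sum(values.values()) / len(values)

def balanced_score(indicators: dict, weights: dict) -> float:
    """Weighted aggregate across the economic, social and environmental groups."""
    return sum(weights[g] * group_score(v) for g, v in indicators.items())

print(round(balanced_score(indicators, group_weights), 2))  # about 0.74 for these values
```

Comparing such an aggregated score between target and actual values is one simple way of supporting the plan-versus-actual analysis that the controlling concept relies on.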
Conditions and stages of controlling design Before the process of creating the controlling starts, the conditions of designing, modelling, implementing and using the system in the specific application area should be researched. Manufacturing enterprises differ from service companies mainly in: • the tangibility of products (the tangibility level is much higher), • the possibility of storage, • the level and degree of contact between producers and consumers (the contact between the production workforce and clients is occasional), • the length of the response time between order and execution (it depends on the length of the production processes), • labour intensity and the use of machinery (predominantly a high level of automation), • the frequency of replacing the machinery (very often, which leads to a high level of amortization/depreciation). In the controlling design process, the efficiency and the effectiveness of the company's operations should have the same priority. It has to be taken into consideration that the price and, next, the quality of products are most often more important for the customer than the manufacturing technology or location. The controlling measurement system should include formulas which analyse and assess these indicators. From the perspective of a manufacturing enterprise, the relationship between sales figures, brand image, product launch or product placement is vital. These indicators are hardly measurable (it is necessary to use indirect measures). Regarding the above-mentioned issues, the controlling design process can be executed in the following stages: 1. Analysing business processes with regard to the economic, social and environment-friendly goals. 2. Analysing the existing frameworks of the enterprise. 3. Choosing the areas to be supported by controlling and the sequence of implementation. 4. Designing new or redesigning existing business processes with regard to the concepts of management by processes and by targets. 5. Presenting the controlling system in the form of a model to enable better understanding. 6. Redesigning the frameworks. 7. Integrating management functions, especially planning and control of production, in one system. 8. Designing the measurement system of enterprise goals based on the universal model of controlling design [START_REF] Marciniak | Controlling. Filozofia, projektowanie[END_REF]. 9. Updating the model to a more developed form. 10. 
10. Designing a motivation system based on responsibility accounting and on the results achieved by the company staff.
11. Completing the model before the test phase.
12. Testing the model.
13. Updating the model after the completed test phase.
The design process in management is generally based on principles that were created for engineering science and economics; in their major points they can be adopted in management science. The technical rules of designing the controlling system are as follows:
1. The design should encompass all the structural elements included in the system.
2. The design should be carried out in two layers: a) globally, where all organizational units are subject to the design, and b) structurally, where the internal structure of the organizational units is designed.
3. Connections (dependences, couplings) between units of the structure should be designed without specifying the mode of their implementation; it is necessary to verify these connections (so-called static verification of the relationships between the elements of the system).
4. The time and the sequence of execution of the connections (dependences, couplings) need to be specified.
5. Verification of the connections identified in time (so-called dynamic verification of the relationships) should be executed.
6. The implementation procedures should be designed in accordance with the system so that the dynamic model of the system can be transformed into the real specificity of the company; the implementation procedures need to be completed with analysis and model management methods.
7. After the procedures have been designed it is necessary to check again the regularity of information flow and decision-taking (simulation); technical tools supporting the functioning (operation) of the system, mainly of an IT nature, need to be designed.
8. The practical functioning of the designed system is tested (verification) [START_REF] Marciniak | Design and modeling of the controlling in manufacturing enterprise[END_REF].
The controlling department, positioned as a unit supporting top management's access to information, is responsible for protecting and developing the controlling philosophy (a methodology consistent and common for every unit of the company). Its other tasks include developing the controlling tools, data aggregation, data benchmarking, reporting and recommending decisions. These tasks arise from the controlling functions:
• planning and evaluating the measured areas,
• planning target figures,
• monitoring business processes,
• collecting data describing the operational activity of the company,
• analysing data from the database,
• assessing information,
• creating financial and managerial reports,
• delivering reports to the right people.
With regard to the above processes and functions, controlling should be focused on the following areas (in the sequence of application):
1. Finance (major issues: cost and revenue accounting, analysing incoming and outgoing payments, measuring assets and equity, analysing liabilities, etc.).
2. Manufacturing (major issues: providing production capacity, measuring the quantity of outputs - especially prime products, checking the quality of goods and subcontractor services, checking margins, analysing the time schedule, etc.).
3. Sales and after-sales services (major issues: targeting margins and production capacity, measuring and analysing customer satisfaction, pointing out key clients, etc.).
4. Procurement (major issues: analysing the efficiency of deliveries, checking the quality of materials, measuring and analysing satisfaction with the cooperation with suppliers, etc.).
5. Storage (major issues: measuring and analysing the quantity of inventories).
6. Maintenance (major issues: checking the natural depreciation of machinery, checking the productivity of machinery, measuring and analysing satisfaction with the cooperation with machinery producers and service providers, etc.).
7. Environmental protection (major issues: checking the quantity of emissions and non-recycled waste, measuring and analysing the impact of the company's operation on the natural environment and on economic development, etc.).
8. Human resources (major issues: measuring the achievement of personal goals, analysing work performance, targeting gained experience, knowledge and skills, etc.).
This sequence is based on the following criteria: the specifics of the company (assembly of goods), the value of the benefits connected with implementation (additional profits or reduced resource consumption) and the opportunity to use another management concept (when it is not necessary to apply controlling in the selected area). Implementation of controlling in the selected areas should accompany the application of company-level controlling, whose main targets are assessing and raising stakeholder value, protecting the company against threats (from the economic, social and environmental perspectives) and planning and controlling activity at the highest (most aggregated) level of the company.
In the design process, the vertical integration of planning and control is one of the most difficult issues. In practice it means using the same measures in both processes, which creates the opportunity for benchmarking and variance (deviation) analysis. The second form of integration is the horizontal approach: using a measurement system that enables data aggregation, which is possible when all financial and non-financial indicators can be expressed in monetary units. The valuation of non-financial indicators, such as environmentally friendly behaviour, is quite a difficult problem to overcome and a big challenge for whoever designs the controlling system. To ensure both forms of integration and to fulfil the established targets, all designed elements of controlling should be evaluated before real application. One of the best ways to achieve this is to create a conceptual model of controlling: a theoretical framework that presents, in a simple way, all elements of controlling and the relationships between them. The controlling model consists of at least:
1. various structural elements (objects such as the company and product strategy, the database or the management tools used, and subjects such as the key users and end users of the controlling system and the receivers of information),
2. the relationships between the elements constituting this model (in the form of mathematical formulas or of various data describing the state of the research subject),
3. a description of the controlling process (guidelines outlining the methodology of planning and controlling).
For better transparency the controlling model should be presented with graph techniques, e.g. the block-diagram methods used in IT (fig. 1), supplemented by decision-taking rules for the assessment process and explanations (remarks on exceeded limits). All the above elements should be obtained during the controlling design process. One of the most important stages of the design process is selecting the management tools (methods/techniques) supporting the system.
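Before moving on to the selection of tools, a small sketch may make the idea of plan/actual variance analysis more concrete; the responsibility-centre measures, plan figures and tolerance threshold below are hypothetical and serve only to illustrate the feedback-loop principle described above.

# Minimal sketch of plan/actual variance analysis per responsibility-centre measure.
# Measure names, figures and the 5 % tolerance are illustrative assumptions.

plan = {"manufacturing_costs": 1_200_000, "sales_revenue": 2_500_000, "procurement_costs": 640_000}
actual = {"manufacturing_costs": 1_310_000, "sales_revenue": 2_410_000, "procurement_costs": 655_000}

TOLERANCE = 0.05  # deviations above 5 % of plan trigger an action-needed report

def variance_report(plan: dict, actual: dict, tolerance: float) -> list:
    report = []
    for measure, planned in plan.items():
        deviation = actual[measure] - planned
        relative = deviation / planned
        report.append({
            "measure": measure,
            "plan": planned,
            "actual": actual[measure],
            "deviation": deviation,
            "relative": relative,
            "action_needed": abs(relative) > tolerance,
        })
    return report

for row in variance_report(plan, actual, TOLERANCE):
    flag = "ACTION NEEDED" if row["action_needed"] else "ok"
    print(f'{row["measure"]}: plan {row["plan"]}, actual {row["actual"]}, '
          f'deviation {row["relative"]:+.1%} -> {flag}')

Using the same measures for planning and for control is what makes such a deviation report possible in the first place, which is the essence of the vertical integration discussed above.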
Selected tools applied in controlling
Management tools can be described as intangible objects that help to achieve the management goals of the company. The literature describes many examples of methods and techniques used by manufacturing companies [START_REF] Harrison | Systems for Planning and Control in Manufacturing. Systems and management for competitive manufacture[END_REF], but not all of them can be applied in a controlling system. The focus should be especially on:
• quantitative presentation of the operational activity (core processes) of the enterprise (with respect to materials, work in progress, finished goods, stored goods, goods sold, etc.),
• continuous analysis of the company's results at the operational level (especially manufacturing and investment activity) and the financial level (especially working capital analysis),
• valuation of the qualitative aspects of company activity with regard to balanced growth theory (especially client satisfaction with the use of the goods),
• the use of variance (deviation) analysis as the leading approach to assessing business results,
• providing feedback loops of information during the manufacturing process,
• connecting various management tools in one system.
The selection of methods/techniques should take their cooperation into consideration (plan data can be measured and collected, then analysed and aggregated, finally evaluated at various levels of aggregation, and reported). The tools applied in controlling can be presented with respect to the functions of controlling and their usefulness for achieving various goals (fig. 2). These goals, as mentioned previously, result from balanced growth; such an approach requires analysing effectiveness and efficiency in the economic, social and environmental areas.
Following the controlling philosophy, it is important to use the results of the assessment process to improve the sustainability of the company. Company staff should be sufficiently motivated to follow the strategy and the activities resulting from it. From the operational point of view this means that the desired behaviour is characterised mainly by the proper use of the management tools, the right activity at the right time, and continuous improvement of the management system. This leads to applying a motivation system as an element of controlling. The bonuses (benefits for employees calculated from the controlling system) have a financial form in this case. If controlling informs about the level of the indicators describing the company's activity (economic, social and environmental aspects), the motivation system can be based on measured data, which helps to avoid additional costs. The bonus system can be based on:
• the financial result of the company,
• the financial results of company units,
• the financial results of projects/orders, etc.,
• the change of share value (for public companies),
• the level of meeting the budget,
• the level of employee absence,
• the level of employee innovation,
• the share in establishing targets.
In controlling, the motivation system should be easy for employees to understand, and the bonus calculation formula should not change during the assessment period.
Conclusions
The sustainability of an enterprise can be assessed by using controlling in the spirit of the "going concern" principle known from financial accounting. This approach means treating controlling as an early warning system consisting of proper measurements and implemented procedures (decision rules) for action-needed situations. Without the controlling concept, management tools are generally used separately.
In that case achieving the established goals of the enterprise is at risk, because each unit of the company tries to maximise its own targets. To avoid such situations, the controlling principles require cooperation and the joint use of all enterprise resources. The success of controlling, in the sense of establishing the sustainability of the company, depends on many factors; two of them, namely the applied design methodology and the supporting management tools, have been analysed in this paper. The major assessment areas, such as finance, manufacturing, sales, after-sales services and procurement, have been presented. Summarising, existing and proven (tested) tools supporting controlling should be used in such a system. All elements of the system and the relationships between them have to be expressed in the form of a conceptual model, which helps in its understanding, evaluation and application. For company management it is important to link the controlling system with the motivation system, because this results in better efficiency and effectiveness.
Fig. 1. Model of controlling system (Source: own study)
Fig. 2. Selected management tools supporting controlling in a manufacturing enterprise (Source: own study)
Acknowledgements. This work partly presents the results of research from project number N N115 294338, which has been financed by the Polish Ministry of Science and Higher Education in the years 2010-2013. The author wants to acknowledge the other members of the project from Warsaw University of Technology for their collaboration.
21,943
[ "1002149" ]
[ "235985" ]
01472251
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472251/file/978-3-642-40352-1_33_Chapter.pdf
Peter Nielsen Thomas Ditlev Brunoe Comparison of criticality of configuration choices for market price and product cost Keywords: Mass customization, backward elimination, linear model
The paper presents a quantitative method for determining the criticality of components to costs and sales price in mass customization environments. The method is based on information about historically sold configurations and uses backward elimination to arrive at a reduced linear model. The variables included in this model are then the most significant ones describing the variation of (material/salary/total) costs, sales price or profit margin. The method is tested on data from a large manufacturer of mass customized products.
Introduction
In companies offering customized products, manufacturing costs and sales prices depend on the particular configuration of a product, and those companies often experience that it is not obvious which product properties drive cost and which properties allow for a high sales price. As a consequence, it is nontrivial to identify which products are profitable and which are not. In any company it is crucial to continuously evaluate the profitability of the product range; however, in companies with a significant product variety, such as mass customization or engineer-to-order companies, this is a challenging task. This evaluation and the resulting development of a product portfolio in mass customizing companies is referred to as solution space development, which is one of three fundamental organizational capabilities that differentiate successful mass customizers from the unsuccessful ones [START_REF] Salvador | Cracking the Code of Mass Customization[END_REF]. In mass customization, where it is uncommon to sell and produce more than a few identical products and companies instead sell high numbers of individually customized products, it makes little sense to evaluate the profitability of a single product; the solution space must be evaluated as a whole. Evaluating the profitability of the solution space can be approached in several different ways. Fundamentally, a qualitative or a quantitative approach can be chosen. However, due to the vast complexity of a mass customization solution space in terms of the number of product features and modules, which usually leaves a practically infinite solution space, a qualitative approach seems unfeasible, indicating that a quantitative approach should be pursued. A number of manufacturing-process-dependent methods have been developed for cost estimation, implying that the particular method of estimating cost can only be applied to certain processes, e.g. casting or welding [START_REF] García-Crespo | A Review of Conventional and Knowledge Based Systems for Machining Price Quotation[END_REF], [START_REF] H'mida | Cost Estimation in Mechanical Production: The Cost Entity Approach Applied to Integrated Product Engineering[END_REF], [START_REF] Shehab | An Intelligent Knowledge-Based System for Product Cost Modelling[END_REF], [START_REF] Shtub | Estimating the Cost of Steel Pipe Bending, a Comparison between Neural Networks and Regression Analysis[END_REF], [START_REF] Shtub | Estimating the Cost of Steel Pipe Bending, a Comparison between Neural Networks and Regression Analysis[END_REF], [START_REF] Walpole | Probability and statistics for engineers and scientists[END_REF]. Cost estimation methods dependent on the specific product type also exist.
In particular, much research has been presented within the area of estimating cost during product development, for both the finished product and the development process [START_REF] Ben-Arieh | Activity-Based Cost Management for Design and Development Stage[END_REF], [START_REF] Layer | Recent and Future Trends in Cost Estimation[END_REF], [START_REF] Niazi | Product Cost Estimation: Technique Classification and Methodology Review[END_REF], [START_REF] Verlinden | Cost Estimation for Sheet Metal Parts using Multiple Regression and Artificial Neural Networks: A Case Study[END_REF], [19]. Kingsman & de Souza [START_REF] Kingsman | A Knowledge-Based Decision Support System for Cost Estimation and Pricing Decisions in Versatile Manufacturing Companies[END_REF] introduced a general framework for cost estimation and pricing decisions, but no practical methods for estimating cost are presented. Other studies focus primarily on describing mathematically, using synthetic models, how customized products can be priced, however primarily in comparison to similar non-customized products [START_REF] Alptekinoglu | Mass Customization Vs. Mass Production: Variety and Price Competition[END_REF], [START_REF] Syam | On Customized Goods, Standard Goods, and Competition[END_REF]. One of the deficiencies of the approaches found in the literature is that most are not specific to mass customization and thus do not take a large solution space into account, but rather focus on a single product. Furthermore, most are synthetic, meaning that a cost and pricing model must be developed in order to evaluate product profitability, which is complicated by high variety. Finally, a number of the approaches described in the literature are product specific rather than generic. The research objective of this paper is to create an analytic method which can assist companies in evaluating the profitability of a product family based on historical configuration data. By basing the method on historical configuration data, the combinations of modules and features actually sold are evaluated, in contrast to a synthetic approach. The research questions presented in this paper are:
1. Which method should be applied to identify which configuration variables are critical to cost, sales price and profit margin based on historical configurations?
2. How can the output be analyzed and utilized to identify relations between configuration variables, cost, sales price and profit margin?
3. How may the criticality of configuration variables with respect to cost and sales price be interpreted and utilized to develop the solution space to produce more profitable products?
Methodology
The aim is to establish, in a simple quantitative manner, which variables are critical for the various costs (typically material and salary costs) and which variables are critical for the sales price and a high profit margin, and then to identify where there is a gap between these models, i.e. which variables are critical for cost aspects but not critical for price and profitability in the form of net margin (sales price - variable costs). The paper utilizes the method for determining the criticality of various parameters to cost presented in Brunø and Nielsen [START_REF] Brunoe | A Case of Cost Estimation in an Engineer-to-order Company Moving Towards Mass Customisation[END_REF], developed for engineer-to-order cost estimation, and adapts it into a method for comparing the cost aspects and the profit margin per sold product.
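The paper itself gives no implementation details, but as an illustration the following sketch shows how a backward-elimination reduction of a linear model could look; it assumes configuration variables encoded as columns of a pandas DataFrame, uses ordinary least squares from statsmodels, and applies a 0.05 significance threshold chosen here purely for demonstration.

# Minimal sketch of backward elimination on a linear model (not the authors' code).
# Assumes a DataFrame where each row is a sold configuration, the columns in
# `predictors` are configuration variables, and `response` is e.g. material cost.
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(data: pd.DataFrame, predictors: list, response: str,
                       alpha: float = 0.05) -> list:
    """Repeatedly drop the least significant predictor until all p-values < alpha."""
    remaining = list(predictors)
    while remaining:
        X = sm.add_constant(data[remaining])
        model = sm.OLS(data[response], X).fit()
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] < alpha:
            break                      # every remaining variable is significant
        remaining.remove(worst)        # drop the least significant variable
    return remaining

# Hypothetical usage: one reduced variable set per dependent variable.
# configs = pd.read_csv("sold_configurations.csv")
# for target in ["material_cost", "salary_cost", "sales_price", "profit_margin"]:
#     critical = backward_eliminate(configs, predictors=config_vars, response=target)
#     print(target, critical)

Running such a reduction once per dependent variable yields one reduced variable list per response, which is the kind of output compared in the case study below.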
Due to the complexity of the problem and the large number of variables considered (a large number of components and resources are involved), the overall procedure for identifying the variables critical to costs, sales price and the desired profit margin is outlined in Fig. 1. The proposed method has been tested on data from a manufacturing company. The case concerns a medium-sized company in Denmark producing technical products for domestic water installations, which are configured within a predefined solution space. The products share a common structure and are configured within a given set of fixed options. The company registers material and salary costs as well as the net margin for all sold products. Furthermore, a full route and bill of materials is available for all sold configurations. The initial model contains 194 variables, which is reduced to 22 variables through use of the method presented in Brunø and Nielsen [START_REF] Brunoe | A Case of Cost Estimation in an Engineer-to-order Company Moving Towards Mass Customisation[END_REF]. These results match the results achieved by the authors when applying the method to a number of other cases. To simplify the case application of the method, the 10 most significant variables for salary costs, material costs, sales price and profit margin respectively have subsequently been identified using the method from Brunø and Nielsen (forthcoming). The results are illustrated in Table 1.
From Table 1 it is easy to conclude that there is a relatively large overlap between the variables that cause high material and salary costs (6 out of 10 are the same), although the significance of the variables varies between the two cost structures. It is also easy to conclude that only 5 out of 10 variables that are significant for the profit margin are critical for material costs, salary costs or both. It is interesting to note that the two most critical variables for the profit margin (i.e. the two features most critical for establishing the profit margin) are not even on the list of the ten most significant variables for material or salary costs. It is also interesting to note that 6 out of 10 variables that are critical for the sales price (i.e. the price a customer is willing to pay for a piece) are in fact also critical for determining the costs. However, in the same sense it is very important to note that the variables that are critical for the sales price are critical for the profit margin only 4 out of 10 times. This could indicate that the company is unable to transfer the features that are critical for the sales price into features that are critical for the profit margin.
Discussion
The following discussion is based on a single case application, which of course somewhat limits the ability to generalize the results. An analysis identifying the variables critical to cost and sales price can be utilized for a number of different purposes; however, the most obvious opportunity is development of the solution space. Within this area the method could more specifically be applied in cost reduction projects to reduce the cost of expensive features, in the adaptation of pricing schemes for different features, and for identifying non-profitable features which should likely be removed from the solution space. However, the method has a main limitation, namely the requirement for a high volume of structured historical data including configuration variables, cost and sales price. As a result, the method cannot be applied to new products which have not yet been sold.
Previous research indicates that the model fit increases the less variation is found in the product structure across different configurations. Finally, the method cannot react to changes in the cost and pricing structure before a sufficient number of configurations have been produced.
Fig. 2 illustrates different scenarios for the results regarding a specific variable. In scenario 1 the variable is critical to the sales price but not to the cost, indicating that this particular feature increases the sales price without adding extra cost to the product; this feature can therefore be exploited, e.g. by increasing the sales effort for it. In scenario 2 the variable is critical to both sales price and cost, indicating that the feature is probably necessary since it drives the sales price; however, as it is also significant for the cost, efforts for reducing manufacturing costs should be focused here. In scenario 3 the variable is non-critical to both sales price and cost, which indicates that such features do not influence the profitability of the variety in the solution space. Cost reduction efforts are still relevant in this scenario, but could presumably be addressed as cost reduction of a standard product, as little variation is found for these variables. In scenario 4 the variable is critical to cost but not to sales price, indicating that attention is required, since the feature is basically priced unacceptably low or its implementation is too expensive. This suggests that the feature should either be removed, priced higher or significantly cost reduced to avoid a decline in profit margins. The conclusions and suggested actions for the four scenarios are to be perceived as indications only. Although, for example, a variable in scenario 4 may not seem profitable, externalities may imply that it cannot be removed from the solution space or priced differently, and thus a qualitative assessment should be performed for each variable.
One simple way to apply the information generated and displayed in Table 1 is to plot the variables in the matrix illustrated in Figure 2. A display of this type comparing salary costs to sales price is presented in Figure 3 below.
Fig. 3. Scenarios for variable criticality comparing salary costs and sales price
As seen from Figure 3, no variables are shown in the third quadrant; in reality, all features not found to be significant by the method and hence not displayed in Table 1 belong to that quadrant. From the example we can see that 14 variables out of the initial total of 22 can be placed in the matrix; the remaining variables then belong to the third quadrant. This matches the concept of value engineering seen in e.g. [START_REF] Ibusuki | Product Development Process with Focus on Value Engineering and Target-Costing: A Case Study in an Automotive Company[END_REF], where experiences with value engineering from the automotive industry are presented. The critical variables from this example then become the five in quadrant 4, i.e. Var1, Var2, Var3, Var6 and Var8. These five variables are insignificant for the sales price but significant for the salary costs. When redesigning the product (family) it would be relevant to focus on these five first and try to move them (at first) to quadrant 3, or even 1 or 2 if at all possible. The second step would then be to focus on the variables (features) found in quadrant 2 and investigate whether they can somehow be moved to quadrant 1.
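A sketch of how this quadrant assignment could be automated is given below; it simply combines the two significance lists into the four scenarios of Fig. 2, and the variable names and set membership are modelled on the illustrative Table 1 rather than taken from any real configuration data.

# Minimal sketch: assigning variables to the four criticality quadrants of Fig. 2.
# The two sets below mimic the illustrative results (significant for salary cost
# vs. significant for sales price); they are examples, not the company's data.

critical_to_cost = {"Var1", "Var2", "Var3", "Var4", "Var5", "Var6", "Var7", "Var8", "Var9"}
critical_to_price = {"Var4", "Var5", "Var7", "Var9", "Var10", "Var13", "Var14", "Var15", "Var16"}

def quadrant(var: str) -> int:
    in_cost = var in critical_to_cost
    in_price = var in critical_to_price
    if in_price and not in_cost:
        return 1   # drives price only -> exploit commercially
    if in_price and in_cost:
        return 2   # drives both -> keep, but reduce manufacturing cost
    if not in_price and not in_cost:
        return 3   # drives neither -> treat as standard product content
    return 4       # drives cost only -> remove, re-price or cost-reduce

for var in sorted(critical_to_cost | critical_to_price):
    print(var, "-> quadrant", quadrant(var))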
There will always be some variables, however, that are critical for the cost variation, so it is better that they are placed in quadrant 2 than in quadrant 4. It is noteworthy that the variables seem to be distributed fairly equally between the quadrants. It is important to keep in mind that the approach presented in this paper only addresses the variation in sales price and cost and thus does not address the base cost, i.e. the intercept of the linear model. This cost and sales price is, however, expected to be defined by a common product platform, i.e. product properties and components which are part of all configured products and can thus be addressed as a non-customized product.
Conclusion
It can be concluded that the method can be used to identify which variables are critical for the costs, the sales price and the profit margin. It is also noteworthy that in this particular case there is a significant difference between which variables are critical for the sales price and which are critical for the costs. It is also possible to conclude that the variables associated with a given feature can be categorized using the developed matrix and that this can serve as a first quantitative step in a redesign process. Future work will focus on two aspects: first, further refining the quantitative method for identifying critical components/features; second, investigating and implementing the method in several redesign processes in practice.
Fig. 2. Scenarios for variable criticality
Table 1. Overview of the 10 most significant variables for material cost, salary costs, sales price and profit margin. The variables highlighted in bold occur more than once. All dependent variables are calculated per piece.
Rank  Salary costs      Material costs  Sales price       Profit margin
1     Number of pieces  Var6            Var13             Var15
2     Var1              Var1            Var10             Var17
3     Var2              Var10           Var7              Var12
4     Var3              Var5            Var5              Var1
5     Var4              Var4            Var4              Var8
6     Var5              Var3            Var14             Var5
7     Var6              Var9            Var15             Var4
8     Var7              Var11           Var16             Var16
9     Var8              Var7            Number of pieces  Var18
10    Var9              Var12           Var9              Var19
15,947
[ "991697", "991462" ]
[ "300821", "300821" ]
01472254
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472254/file/978-3-642-40352-1_36_Chapter.pdf
Guenther Schuh email: [email protected] Till Potente email: [email protected] Christina Thomas email: [email protected] Stephan Schmitz email: [email protected] Designing Rotationally Symmetric Products for Multi-variant Mass Production by Using Production-Technical Solution Space Keywords: production-oriented design, product development process, constituent product features, constituent process characteristics, order processing process
Introduction
Engineering, process planning and mechanical manufacturing of single and small series manufacturers are today facing a high variance of products, unstable batch sizes and individual customer requirements. Especially companies in the machinery and plant engineering industry producing powertrain and transmission technology are confronted with highly individual customer requirements for rotationally symmetric products. The companies' challenge is to develop competitive products under cost-effective conditions based on existing production processes and products. The aim of this paper is to present an approach which identifies constituent process features in manufacturing and matches them to the product features. These constituent process features are derived from critical process characteristics. Combined with the constituent product features, they are used to establish product development guidelines. The solution is a basis for realizing a cost-effective product design and stable manufacturing.
2 Complexity and interface problems in order fulfillment and product development process
Technical challenge due to product complexity
Structurally only marginally different individual product components - with up to 1000 variations - lead to a high product and process diversity, causing huge fluctuations in the assignment of operating facilities. Figure 1 shows the analysis of process variance regarding the processing time. Industry cases of the WZL (Werkzeugmaschinenlabor - Laboratory for Machine Tools and Production Engineering) show companies which have more than 90,000 components and over 35,000 products in their portfolio [START_REF] Brecher | Integrative Production Technology for High-Wage Countries[END_REF]. Even if there are similarities between the product components, cost savings within the product development process and a cost-effective order processing process are not realized. The technical challenge is that, due to the complexity and the lack of an evaluation, engineers do not have the possibility to systematically choose a more cost-effective and production-oriented constructive solution. The expense and cost influence of the process characteristics in manufacturing caused by customized product features is not obvious to the engineer within the design phase.
Interfaces and semantic problems in product development and order processing process as research question
In order processing, product engineers, process engineers and manufacturing have different information and views on the product [START_REF] Wiendahl | Betriebsorganisation für Ingenieure[END_REF]. The high level of division of labor in the industry has led to a partition of planning, executing and controlling functions and finally to a very sequential working method, which is mostly based on expert knowledge [START_REF] Brecher | Integrative Production Technology for High-Wage Countries[END_REF].
The research challenge concerns a common understanding of the design of product features and their impact on planning and manufacturing. The current semantics are not able to link the engineers' knowledge with the needs and requirements of production engineers and manufacturing. Design engineers, production engineers and manufacturing do not have a common understanding of the product features and the process characteristics, because it is insufficiently defined which properties fully describe a rotationally symmetric product from each of their individual views.
Unrealized potential in the product development and order processing process
In the single and small series manufacturing of machinery and plant engineering, shop fabrication is the dominant manufacturing principle. A varying number of resources, often more than 100 different ones, especially in mechanical fabrication such as turning, drilling and milling, is at disposal for product creation [START_REF] Hahn | Produktionswirtschaft -Controlling industrieller Produktion Heidelberg[END_REF]. The increased product complexity and the partly nominally varying product features lead to a high amount of planning and coordinating activities within the order fulfillment process. This results in widely scattered machining times and total process times within the manufacturing process. Potential analyses and valuations by industry show that a reduction of the product design (construction) effort by 11%, of the planning duration by 30% and of the processing time in mechanical manufacturing by 60% [START_REF] Schuh | Wertstromorientierte Produktionssteuerung -Interaktive Visualisierung durch IT-tools zur Bewertung der Logistik-und Produktionsleistung[END_REF] are possible saving potentials. This potential demonstrates the industrial relevance of the addressed problem.
3 State of the art in product development and supporting the order processing process
Deficits in existing approaches in product design
Existing product design methods start from market needs and do not, or not sufficiently, consider manufacturing process needs and their influence on product development.
• Most existing approaches stay on a very functional level in describing product development processes. For designing more cost-effective and innovative products, mostly platform approaches and modularity of the product architecture are used. These approaches disregard the needs of manufacturing and production processes [START_REF] Cai | Platform Differentiation Plan for Platform Leverage Across Market Niches[END_REF][START_REF] Farrell | Product platform design to improve commonality in custom products[END_REF][START_REF] Gao | Module-scale-based Product Platform Planning[END_REF][START_REF] Lindemann | Structural Complexity Management -An Approach for the Field of Product Design[END_REF][START_REF] Pimmler | Integration Analysis of product decompositions[END_REF][START_REF] Ericsson | Controlling Design Variants: Modular Product Platforms[END_REF][START_REF] Pahl | Konstruktionslehre, Grundlagen[END_REF][START_REF] Kusiak | Development of Modular Products[END_REF].
• Market-driven approaches develop methods to design product families and architectures. Considering market needs and product commonality, these approaches aim at a profitable product platform with regard to manufacturing and redesign costs. Nevertheless, these methods do not consider the impact of product features on process features [13,14].
• The approach of technology push bases the product design on the existing technologies within a company.
Generally, these approaches are on a more strategic level and do not give detailed solutions for designing a product using the production-technical solution space [START_REF] Herstatt | Management von technologiegetriebenen Entwicklungsprojekten, Hamburg 16[END_REF]16].
• Another approach integrates the different company divisions systematically into the platform development process. This method focuses on variance-sensitive manufacturing processes to derive constituent product features for the product design. Nevertheless, the method is not specific enough to state definite process features and their influence on product design [START_REF] Schuh | Entwicklung eines Fahrwerkbaukastens für die stückzahlenvariable Produktion von Elektrofahrzeugen[END_REF][START_REF] Arnoscht | Beherrschung von Komplexität bei der Gestaltung von Baukastensystemen[END_REF].
Deficits in supporting engineering and development and industrial engineering
To support the product development process and the order fulfillment process there are many IT tools, such as ERP, PDM and CAx systems [START_REF] Thome | Business Software. ERP, SCM, APS, MES -was steckt hinter dem Begriffsdschungel der Business-Software-Lösungen[END_REF][START_REF] Schuh | High Resolution Supply Chain Management -Optimised Processes based on Self-Optimizing Control Loops and Real Time Data[END_REF][START_REF] Spur | Das virtuelle Produkt[END_REF]. It can be stated that current IT support relieves a certain amount of routine work within the order processing process. Although single IT solutions work fairly well, today's IT support, with its interface problems, amplifies isolated applications and reinforces the rigid structure of company divisions [START_REF] Wiendahl | Betriebsorganisation für Ingenieure[END_REF]22]. Important information about the product design and its impact on the process chain in manufacturing is not sufficiently represented by these tools. For example, product design is supported by CAD systems, which are also used to create drawings for manufacturing; nevertheless, these drawings are not synchronized with the needs of manufacturing, and important manufacturing information is not included in the CAD systems.
Deficits of existing approaches to control the complexity within the product development process
Existing methods of product design and IT systems do not support the product development and order processing processes in such a way that the stated potentials (see chapter 2.3) can be realized. Critical process characteristics are not yet systematically detected and linked to product features. The reason for this is the very large number of different process parameters and their interactions. It is necessary to identify the main process characteristics, e.g. the diameter of components or the machine tools, which have the greatest impact on administrative expenses and cost-effectiveness in manufacturing. Quantifying the influence of changing product features on the process chain in manufacturing is a prerequisite for setting product design guidelines for engineers.
Using production-technical solution space to design a product structure
As one can see, existing tools insufficiently increase the divisions' product and process knowledge in the product development process. Therefore, the actual challenge is not to design another supporting tool but rather to develop an approach which increases a holistic understanding of the process chain. This holistic approach requires cybernetic models with emergent properties.
Since the complexity in the product design and order processing process is no longer controllable, the problem has to be reduced to a complexity level at which the main challenges still exist. By choosing such a focus on the problem's complexity, the technical challenge (see chapter 2.1) is addressed. The approach presented in this paper starts the development of a product structure from the manufacturing environment. Therefore, challenges in the manufacturing processes first have to be solved so that these processes work stably and reliably. A product structure is the key to handling product complexity, and it can be treated on three levels: the product range level, the product level and the component level [START_REF] Ericsson | Controlling Design Variants: Modular Product Platforms[END_REF]. Understanding the manufacturing processes and their link to product features helps to avoid complexity within the product design and order processing process. The cost and expense influence of a product feature in manufacturing has to be considered when designing a product. For example, a variation of a product feature that causes a high cost impact because of different processes in production should have a rather small range of variance [START_REF] Cai | Platform Differentiation Plan for Platform Leverage Across Market Niches[END_REF]. This effect is called variance sensitivity and is a measure of the cost impact of a process in dependence on the product feature variance. The degree of variance sensitivity is high if additional features lead to a high increase in process costs [START_REF] Schuh | Lean Innovation mit Ähnlichkeitsmodellen[END_REF]. The presented approach is used for the identification and detection of these process characteristics and their connection to the product features. The approach is divided into the following steps:
1. A target system for the product design has to be defined. Possible target systems can be product flexibility, increasing the economic efficiency of the product, or a higher degree of standardization.
2. A product range within an existing product program has to be chosen for the starting phase. This simplification limits the solution space and therefore reduces the complexity to a necessary level.
3. The process chains of the selected products in manufacturing have to be analyzed. As a result, the main process chains in production are identified.
4. Within the main process chains, a stably working process chain has to be selected. This process chain has to be able to manufacture a significant part of a product range and be insusceptible to changing process requirements due to varying products. The selected process chain is the basis for standardized manufacturing processes, characterized by short throughput times and leveled process times [START_REF] Schuh | Interactive visualization in production control[END_REF].
5. The selected process chain is used to identify the product components that are produced on it by analyzing the corresponding working schedules.
6. A CAD-model-based comparison of the selected components and the overall product components of the product program is used to determine further product components that can be produced on this process chain. The CAD-model comparison is supported by a knowledge-based classification system for product data [START_REF] Weisskopf | Automatische Produktdatenklassifikation in heterogenen Datenbestän[END_REF].
7. The products obtained in this way are used to identify those product features which make the products producible on the selected process chain. These product features have to be directly linked to the critical process characteristics. This step creates the connection between product features and process characteristics, which leads to the constituent product and process features.
8. Deriving design rules for a product structure that can be manufactured on a stable process chain.
9. Consolidating customer and market needs with the design guidelines to develop a product structure which is based on the production-technical solution space.
Fig. 2. Sequence of the presented approach (set target system, choose product range, analyze process chain, select stable process chain, identify product components, compare CAD models, identify constituent product features and process characteristics, derive design rules, develop product structure)
A common understanding among design engineers, production engineers and manufacturing of the dependency between product and process is the prerequisite for solving the research challenge. The aim is to develop a product structure for rotational products that is adapted to the manufacturing processes and therefore enables cost-effective production. The presented method identifies product features that have a large impact on production costs by quantifying the expenses in the manufacturing processes. The developed product structure aims to provide a maximum of product individuality while simultaneously standardizing the manufacturing processes. In the end, the product development process is able to provide customized product solutions by using standardized and stable processes. Elements of mass production, such as in-line production, are applied to the production of highly customized products. This puts companies in the position to design cost-effective products and production. Technological and production-related flexibility is important to fulfill the growing needs of global markets and to deal with the increased dynamics in the product development process as well as in the supply chain within the order process. By applying this concept, product and process know-how is constantly available from manufacturing to engineering and development. As a result, employees obtain a consistent view of the product development process, as they can now control the complexity of product features and process characteristics.
Fig. 1. Process Commonalities and Spread of Lead Time of a Production System (Source: WZL RWTH industry cases); low process commonalities lead to unpredictable, high variance in the production processes and to a highly spreading lead time and an unpredictable capacity load.
Acknowledgements The research and development project "Design of innovative modular product platforms and value added structures-GiBWert" is funded by the German Federal Ministry of Education and Research (BMBF) within the Framework Concept "Research for Tomorrow's Production" and managed by the Project Management Agency Karlsruhe (PTKA). The main research objective is to develop a product development process for modular product platforms and a holistic method to evaluate possible design scenarios. The author is responsible for the contents of this publication.
Conclusions and outlook
The presented approach develops a product structure that is aligned with the company's production. The technical challenge of dealing with product complexity is addressed by focusing on a definite process chain in manufacturing. The research challenge is addressed by understanding the link between process characteristics and product features. This is the basis for developing standardized processes in administration and stable structures in manufacturing. Within the project "Design of innovative modular product platforms and value added structures - GiBWert" (see www.gibwert.de), funded by the German Federal Ministry of Education and Research (BMBF), a method to systematically link product and process data will be implemented. The aim of this project is a continuous product and process tracking system which realizes complexity restrictions in the product development process. A fundamental part of the research project is to develop a method for detecting and evaluating the variance sensitivity of the production processes. A validation is currently ongoing at a small machinery and plant engineering business that develops technical solutions for powertrain and transmission technologies. Process characteristics and product features now have to be identified, classified and validated. Next, exemplary products are used to identify their product features such as diameter, bore holes, construction materials and surface qualities. After that, the proper process characteristics, such as process times and tool inserts, have to be analyzed and then linked to the product features. The creation of a constituent product feature model consisting of product features and process characteristics will be the next step.
19,291
[ "999780", "999781", "999782", "996234" ]
[ "303510", "303510", "303510", "303510" ]
01472255
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472255/file/978-3-642-40352-1_37_Chapter.pdf
Ingrid Paoletti email: [email protected] Roberto Stefano Naboni email: [email protected] Robotics in the Construction Industry. Mass Customization or Digital Crafting? Keywords: Mass-Customization, Digital Fabrication, Parametric Design, Robot-Assisted Manufacturing, Additive Production The paper discusses the advancement in the mass-customization of building components referring to Robot-Assisted Manufacturing. It is presented how the contemporary employment of Robotics offers a perspective of flexible alternative to traditional serial production system. Different Robot-Assisted fabrication methods are discussed through built experimental case studies at different scales. It is finally argued how Robotic production in architecture is significantly shifting the approach in design towards a model including material and fabrication constraints. Component customization in the construction industry Typically construction methods in building industry are based on the principles of mass-production: standardization, modularization and production lines. Compared to other fields, construction industry has not been characterized by the process of evolution which invested most of the manufacturing industry over the last decades with the massive introduction of innovative technologies, such as CNC machines. While the use of these tools has been introduced in architecture long time ago, they have not determined a productive paradigm shift. Indeed, buildings are in most of the cases assembled from sets of components which are gathered from catalogues provided by the industries. Starting from the 90s with the introduction of digital tools, the architectural designer's needs increased dramatically in terms of form personalization. Due to the fact that architecture cannot be built in one piece, and requires the prefabrication of components, it is automatically determined the crucial importance of the discretization of complex shapes characterizing the typical design workflow, in which architects conceive buildings just in terms of shape, and later on are facing problems related to construction feasibility. Within this scenario, robotic fabrication has recently been discovered in the construction industry, after pioneering researches in the early 80s. Architects are especially interested in the robot ability to perform different tasks and in their low price compared to other machines. 2 Robotic Fabrication in Architecture. From component customization to process customization Typically the customization of building components is reached through design post-adaptation, in order to fit industrial requirements. As a consequence of studies in the field of emerging technologies and bottom-up design the use of industrial robots has spread dynamically. Robots are multifunctional machines for mass-production, able to accomplish a wide range of works, but mainly used in the automotive industry where they usually do a single routine. Robotic Fabrication in the construction field has by opposite the implicit advantage to be able of performing a multitude of tasks controlled from a common programming platform. With its employment, the logic of production and mass-customization can change radically for the construction field, shifting from specialized industrial media of production to versatile machine. This has the potential to revolutionize the current understanding of masscustomization, moving the focus from geometrical post-optimization, to integration of robotic performance within the design process. 
The use of algorithms to control fabrication tools is natural extension of parametric modeling, helping the understanding of specific fabrication processes and in simulating the kinematics of the machine tool. In the past few years, architectural faculties such as the ETH Zürich, TU Vienna, University of Stuttgart and the University of Michigan have acquired industrial robots and are actively researching the use of robots in architecture. Nowadays more than twenty architecture faculties in the world are experimenting in the use of industrial robots 1 . However, this research is not limited to academia, with architectural offices like Snøhetta in Norway using robots in-house. Parametrically controlled Robots The use of industrial robots would normally demand architects to migrate geometries from a CAD software to machines with a linear workflow, essentially converting geometries into working paths. This procedure is essentially transferring construction trajectories, and can be intended as limited in terms of interactions between design and production phases. This lack has recently been overcome by the development of new specific software tools, providing architects with an easier control from design environment to physical production and vice versa. The use of plug-ins such as KUKA PRC and Super KUKA for Grasshopper favors the integration into modeling environments of the dynamic of machines, simulating movements within the CAD space. Architects gained an increased capacity to control robots, and upcoming researches are moving in the direction of strengthening the interaction between design software and machines, in a full integration of the two systems. This paper proposes an overview on the different robotic fabrication procedures nowadays in development, proving how the flexibility of these machines enables architects to experiment a wide range of technical and aesthetic solutions. Robotic Additive Fabrication Additive Fabrication represents the typical construction method employed in architecture and this is the main reason for the recent deep investigation which has been conducted for 3D-Printing at large scale, such as the Italian technology D-Shape and the American-based Contour Crafting. Additive processes can generally offer a wider potential over already established subtractive or formative digital fabrication processes, and in the last years various attempts in this direction have been made using robotic tools. Bricklaying Robotic Fabrication A popular example of additive fabrication is the Bricklaying Robot (2006) by Gramazio and Kohler which provides the architects with the possibility to generate brick walls designed parametrically. This system uses Kuka KR150 Robotic Arms to deposit bricks in simple and articulated wall compositions, basically replacing the traditional construction process based on manual work. In this case the robotic fabrication is improving fabrication speed, precision and expanding the formal freedom of the aesthetics of the construction. This technique has been applied for the façade of the Gantenbein Vineyard. In future projection, the authors plan to install the robots directly on building sites, combining the advantages of prefabrication, such as precision and high quality, with the advantages of short transport routes and just-in-time production on the building site. Making use of computer methodologies in the design and fabrication process allows for manufacturing building elements with highly specific forms, which could not be built manually. 
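To illustrate the kind of parametric control described for the bricklaying robot, the short sketch below computes a placement (position and rotation) for each brick of a wall whose bricks rotate according to a simple driver function; the dimensions, joint width and rotation rule are invented for the example and are not taken from the Gantenbein project.

# Minimal sketch: generating per-brick placements for a parametrically rotated
# brick wall. All dimensions and the rotation driver are illustrative assumptions.
import math

BRICK_L, BRICK_H = 0.24, 0.07      # brick length and height in metres
JOINT = 0.01                        # horizontal joint width in metres
COURSES, BRICKS_PER_COURSE = 20, 30

def rotation_driver(x: float, z: float) -> float:
    """Rotation angle (radians) of a brick as a smooth function of its position."""
    return 0.6 * math.sin(2.0 * math.pi * x / 3.0) * (z / (COURSES * BRICK_H))

placements = []                     # (x, y, z, angle) per brick, i.e. robot target frames
for course in range(COURSES):
    z = course * BRICK_H
    offset = (BRICK_L + JOINT) / 2 if course % 2 else 0.0   # running bond
    for i in range(BRICKS_PER_COURSE):
        x = offset + i * (BRICK_L + JOINT)
        angle = rotation_driver(x, z)
        placements.append((x, 0.0, z, angle))

print(f"{len(placements)} bricks, first placement:", placements[0])

Each tuple can then be translated into a pick-and-place target for the robot; changing the driver function alone is enough to redesign the whole wall, which is the essence of the parametric workflow described above.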
5 Robotic Transformative Procedures
RoboFold Technology for metal sheets
RoboFold is a patented method for forming metal components with 6-axis industrial robots, used especially for façade panels. The forming process is achieved by the simultaneous work of robots folding sheet metal along curved crease lines defined by the project. The metal sheets are transformed directly by pressure applied from different directions, and no mold tools are involved in the process. The system was invented by Gregory Epps in 2008, initially conceived for the automotive industry and later transferred to the construction field. The entire process is simulated in a CAD environment, which means that the design is performed with manufacturing constraints taken into account from the beginning of the design process. Curve folding refers almost directly to the art of origami, paper engineering and industrial robot manufacturing. RoboFold uses the Rhino3D software with the integration of the Grasshopper parametric extension. Currently this system is being developed through workshops and education in various architectural faculties, and through the realization of façade mock-ups and furniture.
Cold bending of steel rods
An alternative transformative technique has been developed by Supermanouvre in collaboration with Matter Design in order to produce an installation called "Clouds of Venice", exhibited at the Venice Biennale of Architecture 2012. The installation is composed of a sequence of bent steel rods; to produce it, a custom robotically assisted CNC bender was constructed and tested as an alternative to dedicated CNC forming equipment. This combination allowed for the serial creation of accurately defined bends, intervals and rotations. The robot provides the positioning while the bender applies the large forces required. The operation is based on the proper disposition of three clamps: a collet gripper mounted on the robot and two die clamps on the bender. One die clamp is used to provide pressure during the actual bending, while the other two clamps alternate in holding the stock in order to allow the robot to feed and rotate it into the appropriate position. The clamps must be capable of providing enough pressure and friction to counteract the torque resulting from the self-weight of the cantilevering rod. This fabrication method can easily be implemented for the production of complex steel structures and for more standard applications such as steel-reinforced concrete, where the precision and speed of robots can speed up operations which typically require a long time on site.
Robot Subtractive Fabrication
Hot Wire Cutting Fabrication
A hotwire cutter consists of a thin wire that is heated via an electric current to approximately 200 degrees Celsius and used to cut polystyrene or similar types of thermoplastic foam. The foam vaporizes just ahead of the wire, and minimal energy is required to cut through the stock. The robot-mounted bow-type hotwire has a series of specific constraints that must be embedded within the control software. The primary constraint is speed, while the primary simulation feedback is the positioning of the bow of the tool. The kerf width depends directly on the cutting speed, so control over motion is extremely important. The input geometry is most often described by an upper and a lower curve, generating a ruled loft between the curves and finally ruled surfaces. By utilizing 7 axes of the robot simultaneously, large parts can be machined in one step.
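As a rough illustration of how such ruled geometry translates into wire motion, the sketch below samples an upper and a lower boundary curve into synchronized target points for the two ends of the wire; the curves, sampling density and feed rate are invented for the example and do not describe any specific machine setup.

# Minimal sketch: turning an upper and a lower boundary curve into synchronized
# targets for the two ends of a robot-mounted hotwire (ruled-surface cutting).
# Curve definitions, sample count and feed rate are illustrative assumptions.
import math

def upper_curve(t: float):
    """Upper boundary of the ruled surface, t in [0, 1]."""
    return (2.0 * t, 0.3 * math.sin(math.pi * t), 1.2)

def lower_curve(t: float):
    """Lower boundary of the ruled surface, t in [0, 1]."""
    return (2.0 * t, -0.3 * math.sin(math.pi * t), 0.0)

SAMPLES = 50
FEED = 0.02   # metres per second along the cut, kept constant to keep the kerf even

targets = []
for i in range(SAMPLES + 1):
    t = i / SAMPLES
    targets.append({"t": t, "wire_top": upper_curve(t), "wire_bottom": lower_curve(t)})

# Each pair of points defines one ruling of the surface; the robot moves the wire
# so that its ends pass through these pairs at the constant feed rate FEED.
for target in targets[:3]:
    print(target)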
A built example of this procedure is the Periscope Tower by Brandon Clifford and Wes McGee of Matter Design Studio, a temporary installation designed for the Young Architects Forum Atlanta. For its production a hotwire cutter was mounted on the robot, creating a multi-axis CNC hotwire cutter capable of processing EPS blocks 4 meters long. The designers sought to eliminate excess waste beyond the efficiency of unit nesting: the fabrication method produces no kerf waste and only minimal offcuts, which are 100% recyclable. The fourteen-meter-high tower was assembled in only six hours. Conclusion and Outlook Despite the potential offered by digital fabrication with CNC machines, the construction industry has rarely integrated innovative processes on a large scale, usually because of high product costs, low machine flexibility and time-related concerns. As a consequence, complex shapes in architecture are often redesigned after the project phase to fit the construction standards of particular machines. This paper shows an alternative to this typical workflow, focusing on how the development of robotic fabrication in recent years is steadily shifting the production perspective towards a new professional paradigm in which design software can be integrated with fabrication tools directly in the project. Robots, as universal and programmable machines, offer high flexibility, and the analysis of the reported case studies highlights how the use of industrial robots moves the attention from shape-oriented design to the material production system. Robotic fabrication presents several advantages: flexible functionality, changing from a milling machine to a 3D scanner simply by switching end-effectors; an enlarged and geometrically customizable working space; and affordable prices compared with multi-axis Computer Numeric Control (CNC) machines. These systems cannot yet compete with mass production and its economies of scale in fabricating widely distributed products, but they have already shown the potential to enable customized solutions. They can be tailored to local or personal needs in ways that are not practical or economical on mass production lines. The current use of robotics in the experiments of several pioneers shows promising potential and results, but still without a clear vision of how these new tools can be implemented on an industrial scale. From an industrial perspective, the implicit risk of these experiments is creating a new design/fabrication niche with no real impact on production systems. In this sense the contemporary use of robotics is limited to the idea of advanced crafting, and integration on a wider scale currently seems hard to achieve. The analyzed case studies suggest the idea of "robotic crafting", which could be further adapted to be performed on the building site rather than as a medium of industrial prefabrication. In this scenario, the implementation of more accessible software that can be used by less specialized operators appears fundamental.
13,793
[ "1002151", "1002152" ]
[ "125443", "125443" ]
01472257
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472257/file/978-3-642-40352-1_39_Chapter.pdf
Golboo Pourabdollahian email: [email protected] Donatella Corti email: [email protected] Chiara Galbusera Julio Cesar Kostycz An Empirical Based Proposal for Mass Customization Business Model in Footwear Industry Keywords: Mass Customization, Business Model, Footwear industry This research aims at developing a business model for companies in the footwear industry interested in implementing Mass Customization with the goal of offering to the market products which perfectly match customers' needs. The studies on mass customization are actually mostly focused on product development and production system aspects. This study extends the business modeling including also Supply Chain aspects. The research is based on analyzing Mass Customization application in reality, within some companies operating in footwear industry. Through the real cases of Mass Customization implementation, a business model proposal is developed as an attempt to generalize the empirical findings. Introduction Nowadays globalization has radically changed the industrial environment not only by creating a higher market turbulence and competition but also by increasing number of demanding customers which ask for unique products that perfectly match their needs and preferences. In this regard the adoption of a mass customization (MC) approach has been considered as a proper solution since it provides customers with individualized goods while being efficient at the same time. Considering the increasing interest of a higher number of companies to offer mass customized products, it is crucial to provide companies with a proper business model enabling them to implement MC in a successful manner. Going through literature, we found out that there is no proposal for a MC business model; hence this research aims at developing an empirical based MC business model for footwear industry to support companies since this sector in successful implementation of this strategy. The research is limited to footwear industry due to the fact that business model is highly sector dependent; therefore it is not easy to define a general business model which can be applied in all sectors. Moreover considering the fact that this research is an empirical based study, footwear industry was selected since it is a popular sector for implementation of mass customization with considerable amount of existing and emerging actors in the sector. Business model: Definition and Reference Structure From the very early emergence of the term "Business model" by Jones [START_REF] Jones | Educators, Electrons, and Business Models: A Problem in Synthesis[END_REF] different definitions have been suggested in literature to define the term and its role. These definitions reflect different perspectives which can be targeted by a business model such as value creation, simplification of a complex system, money generation, company behavior representation and etc. In this study we refer to Osterwalder to define business model as "a conceptual tool that contains a set of elements and their relationships and allows expressing a company's logic of earning money." 
[START_REF] Osterwalder | The Business Model Ontology -a proposition in a design science approach[END_REF] The reference structure for the analysis of a business model in this study is the one proposed by Osterwlader and Pigneur's (called business model canvas) with minor modifications needed to adopt it to the context of interest [START_REF] Osterwalder | Model Generation: A Handbook for Visionaries, Game Changers, and Challengers[END_REF]. The initial business model canvas of Osterwlader and Pigneur includes 9 building blocks that can be logically grouped into 3 areas: Left side relates to efficiency (Key partners, key activities, key resources, and cost structure), the right side relates to value delivery (Customer segment, customer relationship, channels, and revenue streams) and finally the value proposition which is in between. The proposed change is the merging of the costs and revenues blocks into a single one named performance. This is mainly due to the fact that in a mass customization business not only cost and revenue are considered as critical issues but also evaluation of customization and efficiency level of the firm is important. Therefore the final structure of the business model is based on eight blocks illustrated in figure 1. Key Research Methodology In order to come out with the empirical based MC business model for the footwear industry, we selected five companies in different countries operating in footwear industry that propose customized shoes to their customers. The analysis comprehends both cases of small companies and cases of medium/large companies, also already established companies with standard products and start-up mass customized born companies. Data were collected through different primary and secondary sources including: questionnaire, personal interview, papers, releases and publications on scientific magazines, official company website, official financial reports, blogs, forums, communities and online sector magazine release. Table 1 presents a summary of information to introduce the five analyzed cases. For each case study the analysis of the business model in place has been carried out based on the use of the abovementioned canvas. Cross Analysis Based on the cases analysis a set of different alternatives for each block of business model were identified and mapped (Table 2). These alternatives are mainly based on best practices of the analyzed companies. Obviously some of them have been applied by only one company while some others are applied by more companies. This is due to the fact that the analyzed cases vary in some factors such as size, customer segment and the level of customization they offer to their customers. In order to better demonstrate the position of each alternative in a MC solution space we defined three pillars for solution space naming product (PR), production system (PS) and supply chain (SC) and we allocated each alternative to the most suitable solution space pillar. This might facilitate for a company the act of focusing on a preferred pillar of solution space without compromising other important aspects of solution space. Going through different alternatives applied in each case study three main points should be noticed. In following sections we describe each of these pints. Implementation of Key Alternatives for MC Analysis of collected data show that there are some alternatives applied by all five cases involved in this study. 
This emphasizes that these alternatives should be considered main attributes of an MC business model in the footwear industry and possibly in other industries. One of the most notable examples is "style customization", which is offered as a value proposition by all of the studied companies. This highlights that, when it comes to mass customization in the footwear industry, aesthetics/style is always a main aspect of customization. The same holds for product modularization and component standardization, which are critical activities for increasing efficiency in mass customization. Other examples are the use of an online configurator, customers' requirement elicitation, and web design with an online store. Lack of some MC Alternatives Proposed in Literature One of the notable results of the data analysis concerns MC attributes that are proposed in several studies in the literature but have not been implemented in any of the analyzed case studies. A clear example is knowledge management and knowledge creation. Numerous studies mention knowledge management and creation as a key issue in mass customization: Franke and Piller point out the importance of acquired knowledge in creating a barrier against switching suppliers, while Wu et al. emphasize the role of knowledge management in the level of service and quality [START_REF] Franke | Configuration toolkits for mass customization setting a research agenda[END_REF], [START_REF] Wu | The impact of a customer profile and customer participation on customer relationship management performance[END_REF]. Surprisingly, no company in this study implements knowledge management as a key activity. Another example extracted from the analysis is the integration of partners in the supply chain in order to increase efficiency, which has not been pursued by the analyzed cases. Supplier integration means the extent to which a supplier can collaborate with the manufacturer and manage some inter-organizational activities. In mass customization operations where standardized modularization has been implemented, the role of integrated suppliers is more tangible because of the need for long-term collaboration between manufacturer and supplier. The implementation of flexible manufacturing systems is another neglected alternative, considered only by company C. In this case the reason is easy to see: only company C produces shoes in-house, so flexible manufacturing systems are a main key resource for it, while the other four companies outsource the whole production, which makes them independent of any agile production system. The picture is less simple for the integrated information system as a key resource. Based on our analysis, company A is the only company using an integrated information system to facilitate MC implementation. This may be due to many reasons, such as high investment or a supply chain that is not ready for information integration. Table 3 lists the main alternatives neglected by the companies.
Table 3. Implementation mapping of MC alternatives from the literature in the analyzed cases
MC alternative: Company
Process modularization [START_REF] Blecker | The Development of a Component Commonality Metric for Mass Customization[END_REF]: None
Implement postponement [START_REF] Xuan | Positioning of customer order decoupling point in mass customization[END_REF]: C
Web-platform and interaction system management [START_REF] Piller | Mass customization: reflection on the state of the concept[END_REF], [START_REF] Blecker | The Development of a Component Commonality Metric for Mass Customization[END_REF]: None
Flexible manufacturing system [START_REF] Blecker | The Development of a Component Commonality Metric for Mass Customization[END_REF], [START_REF] Pollard | Strategies for Mass Customization[END_REF], [START_REF] Qiao | Flexible Manufacturing System for Mass Customization Manufacturing[END_REF]: C
Integrating partners [START_REF] Piller | Mass customization: reflection on the state of the concept[END_REF]: A
Knowledge management and knowledge creation [START_REF] Franke | Configuration toolkits for mass customization setting a research agenda[END_REF], [START_REF] Schreier | The Value Increment of Mass-customized Products: An Empirical Assessment[END_REF]: None
Support customers during co-design [START_REF] Abend | Custom-made for the masses: is it time yet?[END_REF], [5]: D
Lack of MC Performance Measurement Like any other company, a mass customization company needs metrics in order to keep its mass customization strategy under control and, in particular, to identify the commonality and modularity level of its products. Although monitoring and performance measurement are considered critical issues in MC, only one company uses a few metrics to measure its mass customization level, while the others never included it as a crucial step in their business model. Proposal of MC Business Model Taking all the previous considerations into account, an MC business model for the footwear industry is proposed that can support a company in this sector in identifying a possible path to implement MC. In developing the business model we tried to take into account the best practices applied by each company; however, the proposed business model is not complete since, as mentioned in the previous section, there are crucial MC alternatives in the literature which have not been applied by any of the firms in this study. In this regard a complete business model can be developed by integrating the current business model with a literature-based MC business model. The following MC business model is a step towards this aim, since it clarifies the most important MC alternatives in a real industrial environment and the possible challenges of implementing mass customization. This brings us one step closer to supporting companies in the successful implementation of mass customization. The novelty of the proposed business model lies not only in what is mentioned above but also in the inclusion of supply chain elements in the development of the business model. Conclusion The offer of mass customized shoes is a recent trend in the footwear industry and seems to be a promising business for the coming years, one that could fulfill evolving customer needs. Some brands have already developed a mass customized line and entered the business a few years ago, yet the potential of mass customization could be further exploited, representing an opportunity for a larger number of companies.
In this paper we propose a framework to support companies operating in the shoe sector in developing an MC-oriented business model. The proposal is a supporting tool for practitioners during the development of the business model. The decision process can be more efficient, since the framework provides not only a checklist of elements that need to be considered but also a list of options that have proven successful in the same context. On the other hand, this work also adds insights to the mass customization literature by providing a study that takes into account, at the same time, all the elements that need to be configured when a business model is developed. Given the high number of variables, the proposed model can hardly be generalized to other sectors, so it is a contribution to the footwear industry. Nonetheless, the applied methodology can be replicated in other industries where mass customization is an opportunity for growth. The next step of this research is the implementation of the proposed framework to support a company that is not yet mass customized in extending its offer in this direction.
Fig. 1. Business model structure
Table 1. Analyzed case studies (company: country, foundation year, size, mass production beside MC, type of shoes)
A: Germany, 1924, Large, Yes, Sport
B: USA, 1978, Large, Yes, Sport
C: Brazil, 2011, SME, No, Sneakers
D: Germany, 2001, SME, No, Luxury shoes
E: Australia, 2009, SME, No, Women's shoes
Table 2. MC alternatives applied in the case studies (business model block, solution-space pillar, alternatives per company)
Value proposition (PR): company A, customization (style, function, fit); company B, customization (style, function); company C, customization (style, packaging) and customer involvement in parts design; company D, customization (style, fit) and customers' feedback on raw material quality; company E, customization (style) and customized reusable packaging
Key activities (PR, PS): product modularization and components standardization, solution space definition, customers' requirements elicitation
15,324
[ "1002155", "1001971" ]
[ "125443", "125443", "125443", "125443" ]
01472264
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472264/file/978-3-642-40352-1_46_Chapter.pdf
Robert Furian email: [email protected] Frank Von Lacroix email: [email protected] Dragan Stokic email: [email protected] Ana Correia email: [email protected] Cristina Grama email: [email protected] Stefan Faltus email: [email protected] Maksim Maksimovic email: [email protected] Karl-Heinrich Grote email: [email protected] Christiane Beyer email: [email protected] Knowledge Management in Set Based Lean Product Development Process Keywords: Knowledge management, knowledge based environment, product development process, LeanPPD, Lean development The objective of the research is to examine and develop new methods and tools for management of knowledge in Lean Product development. Lean Product development attempts to apply lean philosophy and principles within product development process. Special emphasis is given to the so-called Set Based Lean Design principles. Such product development process requires innovative methodologies and tools for capturing, reuse and provision of knowledge needed for decision making, as well as advanced ICT environment for Knowledge Management (KM). A Set Based Lean Design toolkit is developed, aiming to support the product developer in making decisions during the development process. This toolkit includes the Lean Knowledge Life Cycle methodology and a set of software tools for KM. The application of the methods and tools is investigated within large automotive industry and its supplier. Introduction In modern manufacturing industry, such as automotive industry, shorter product life cycles and strong competition demand more efficiency in the product development process. Therefore, the product models have to be adapted to the particular market requirements and have to be released fast and cost-efficiently on the markets. However, the product development costs are increasing because of rising diversity of models, fast technology progress and incremental complexity of the automobile [START_REF] Pahl | Konstruktionslehre -Grundlagen erfolgreicher Produktentwicklung; Methoden und Anwendung[END_REF]. Lean development transforms the philosophy of lean thinking into the product development and the product emerge process. The identification and reduction of wastes and the boosting of value adding is much more complicated in development because of unique and new project themes with innovative character and development cycles of often about several years. This is contrary to the production, where always similar products in short cycles are produced in exactly defined process chains [START_REF] Schuh | Lean Innovation -Ein Widerspruch in sich?[END_REF]. In [START_REF] Al-Ashaab | The Conceptual LeanPPD Model[END_REF] a new paradigm -Lean Product development -is proposed, which takes the lean thinking from waste elimination into value creation. The aim of the research is to develop a comprehensive set of lean methods, methodologies, design techniques and tools to ensure the development of lean product design. Special emphasis is given to the Set Based Design principles. Set Based Lean Design (SBLD) is a methodology similar to DFx and DFMA (Design for Manufacturing and Assembly) [START_REF] Huthwaite | The Lean Design Solution[END_REF]. SBLD supports the generation of lean product design and its simultaneous consideration of lean manufacturing required for the physical realization of the product. 
The following main aspects have to be implemented to the SBLD [START_REF] Al-Ashaab | The Conceptual LeanPPD Model[END_REF]:  A method to capture the customer values into a set of designs  A mechanism to break down these sets to get an optimized final lean design  The identification of features and tools of the key product Knowledge reuse is one of the most important factors in increasing efficiency in product development and one of the key factors of the proposed Lean product development. However, due to the inherently unstructured form of knowledge, currently there are obstacles to finding the right knowledge at the right time. In this paper a Knowledge Based Environment (KBE) is proposed, supporting Set Based Lean Product development process, including support for knowledge acquisition and structuring, as well as timely and efficient re-use of previously acquired knowledge. Especially in very large Extended Enterprises, such as the ones typical of the automotive industry, where the knowledge and expertise of a variety of people needs to be used efficiently, such an approach is expected to lead to big improvements in knowledge management. This KBE is the main source of knowledge, from which a set of new designs for a new product is going to be defined [START_REF] Al-Ashaab | The Conceptual LeanPPD Model[END_REF]. The paper presents a Set Based Lean Design toolkit developed, aiming to support the product developer in making decisions during the development process. This toolkit includes the Lean Knowledge Life Cycle methodology and a set of software tools for KM. The toolkit is under testing within the Volkswagen Group, which is one of the world's leading automobile manufacturers and the largest car maker in Europe. The toolkit and concepts of Lean Product Development and KBE will the transferred, implemented, tested and evaluated in a development department in the component division of Volkswagen to support the knowledge management in product development and support the product designers in decision making. Lean Knowledge Life Cycle The Lean Knowledge Life Cycle (LeanKLC), developed by [START_REF] Maksimovic | A Lean Knowledge Life Cycle Methodology in Product Development[END_REF], was defined as an outcome of the LeanPPD project and provides a methodology for knowledge capture, re-use and creation in product development. The significance of Knowledge is well recognized in the Lean Product Development literature [START_REF] Morgan | The Toyota Product Development System: Integrating People, Process, and Technology[END_REF][START_REF] Kennedy | Ready, Set, Dominate: Implement Toyota's Set-Based Learning for Developing Products[END_REF][START_REF] Sobek | Toyota's Principles of Set-Based Concurrent Engineering[END_REF][START_REF] Ward | Lean Product and Process Development[END_REF][START_REF] Mascitelli | The Lean Product Development Guidebook: Everything your design team needs to improve efficiency and slash time-to-market[END_REF][START_REF] Oosterwal | The Lean Machine: How Harley-Davidson drove top-line growth and profitability with revolutionary lean product development[END_REF]]. 
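Before turning to the knowledge life cycle, the set-narrowing principle behind Set Based Lean Design can be illustrated with a minimal sketch: sets of alternatives are kept open for each subsystem and members are eliminated only when they violate system-level constraints. The subsystems, attributes and limits below are invented for illustration and are not taken from the LeanPPD cases.

```python
from itertools import product

# Minimal sketch of set narrowing in set-based design: each subsystem starts
# with a set of alternatives, and infeasible combinations are eliminated by
# constraints before a final design is committed. All figures are invented.

design_sets = {
    "housing":  [{"name": "cast",   "mass": 4.0, "cost": 30.0},
                 {"name": "welded", "mass": 3.2, "cost": 45.0}],
    "actuator": [{"name": "hydraulic", "mass": 2.5, "cost": 60.0},
                 {"name": "electric",  "mass": 1.8, "cost": 80.0}],
}

MAX_MASS = 5.5    # kg, assumed system-level target
MAX_COST = 130.0  # EUR, assumed system-level target

def feasible_combinations(sets):
    names = list(sets)
    for combo in product(*(sets[n] for n in names)):
        if (sum(c["mass"] for c in combo) <= MAX_MASS and
                sum(c["cost"] for c in combo) <= MAX_COST):
            yield dict(zip(names, (c["name"] for c in combo)))

def narrowed_sets(sets):
    """Keep only the alternatives appearing in at least one feasible combination."""
    kept = {n: set() for n in sets}
    for combo in feasible_combinations(sets):
        for subsystem, choice in combo.items():
            kept[subsystem].add(choice)
    return kept

if __name__ == "__main__":
    print(narrowed_sets(design_sets))
```

In a real SBLD setting the constraints would come from customer value and lean manufacturing rules, and the narrowing would happen gradually over several design stages rather than in a single step.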
However, it was observed that current knowledge life cycles methodologies, such as [START_REF] Bukowitz | Knowledge Management Fieldbook[END_REF][START_REF] Dalkir | Knowledge management in theory and practice[END_REF][START_REF] Firestone | Key issues in the new knowledge management[END_REF][START_REF] Jashapara | Knowledge Management: An integrated approach[END_REF], lack supporting tools to be integrated in product development activities in order to provide a Knowledge based environment, which is an important element in LeanPPD [START_REF] Jashapara | Knowledge Management: An integrated approach[END_REF]. The LeanKLC is addressing both the previous project, as well as the domain knowledge in product development. The LeanKLC follows a sequence of seven stages: Fig. 1. Lean Knowledge Life Cycle [START_REF] Maksimovic | A Lean Knowledge Life Cycle Methodology in Product Development[END_REF] In the first stage, the knowledge relevant in the product development process has to be found and identified. This knowledge is captured and structured in the second phase and categorized into knowledge from previous projects, like lessons learned, or into domain specific knowledge, e.g. design rules. In the next stage, the knowledge has to be represented in a generic way (e.g. OWL) in order to make it usable for ICT tools (e.g. Protégé). The fourth stage is the sharing of knowledge, which requires the storage in a centralized database with a clear structure, before getting implemented in the fifth step in the knowledge based environment (KBE). In the sixth step the knowledge from the KBE is dynamically used and provided to the product development engineers. In the seventh and final stage engineers will dynamically capture new knowledge already while it is created. The previously described stages require different practices, tools and templates, which are currently under development in cooperation with industrial partners within the LeanPPD project and envisioned to be tested in different business cases [START_REF] Maksimovic | A Lean Knowledge Life Cycle Methodology in Product Development[END_REF]. The analysis of the existing ICT tools indicated that there are no specific SW tools to support Set Based Lean Design. 'Conventional' ICT tools for 'classical' design can (and are) used to support set based design. However, a clear need is identified to develop innovative tools specifically aiming to support KM for SBLD during various phases of the development process. Requirements Working as a development engineer in the automotive industry often means working in teams on different parts. Keeping an eye on design changes which have an impact on the design of related parts can lead to a lot of extra work. Especially in setbased design, it is even more complex since the number of possible designs grows significantly with an increasing number of parts sets and parallel designs. The main requirements of the knowledge management software tool, identified by several large industrial companies, are [START_REF] Maksimovic | A Lean Knowledge Life Cycle Methodology in Product Development[END_REF]:  Support knowledge management, i.e. management of product data/ knowledge for SBCE, for different sets of solutions, provision of design rules, including lean design rules  Decision making support tools, specifically decisions regarding costs, i.e. tool to explore system sets and evaluate sets for lean production  Support the re-use of existing designs (sets of solutions), e.g. 
from previous projects/designs Implementation of the SW tool In order to support the handling of set-based design, based on the above described requirements, a SW tool was implemented to help engineers with the set-based design method. The detailed functions of the tool, for use in product development, were defined by the following:  Management of parallel design sets: different part sets management with support for set-based lean design  Part history: documentation of the complete "evolution" of the part, including the management of parallel designs in the different design phases of the part and visualization how the part looked before the change and after the change  Support in provision of knowledge (rules) including lean design rules  Support in acquisition of knowledge and report creation including the requests of changes in designs as well as the reasons for accepted and rejected requests  Re-use of knowledge from previous projects and design-sets  Support of collaborative work to make department specific knowledge about the requested changes available across all involved departments and automatically inform them about design changes and request their feedback  Enable search with semantic indexing of keywords to find the knowledge from previous projects The key concept of the tool is presented in Fig. 2. The tool can be applied at various stages of product developmentat system level (whole product) or component level. Its current implementation is focused on a component level, assuming that the SBLD might be applied also at the whole product level. The tool intends to support set based design of components supporting the management of knowledge for sets of design, relations between options etc. Fig. 2. Concept of the SBLD Product Data/Knowledge Management Tool As mentioned before, the main task of the tool is to help the designer in the management of the sets of design of parts and components. In product development, changes to assemblies and parts have to be done to increase customer value but they also have impact on the shape and function of their nearby assemblies and parts. The knowledge management tool is easy to integrate in the product development process and can be used during different stages within it:  In the concept phase where different sets of design have to be evaluated  In the construction phase of the part, where the 3D-CAD-model and the part list is created  In the tooling, prototyping and testing phase, where changes from the tool shop and testing have to be implemented in the CAD-drawings It also provides design knowledge, e.g. design rules of welding plastics or metals, directly to the product developer. In the module which shows history of the part ("evolution"), the SW tool allows to automatically create reports which contain all changes which have been applied on the part as well as all general information about the part itself. Since design changes result in higher project costs from a certain stage of the project, the SW tool allows to create reports, describing the change, which will be distributed among all involved engineers and departments (planning, tool shop, finances, etc.) to support the decision making if a certain change in the design is necessary or not. Each involved engineer / department can use the SW tool to approve or disapprove the change in the design. 
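A possible data model behind such functions could look like the following sketch. The class and field names are assumptions made for illustration and do not reproduce the schema of the actual LeanPPD tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative data model for the functions listed above: parallel design
# sets per part, a change history, and change requests that the involved
# departments approve or reject.

@dataclass
class DesignVariant:
    label: str                # e.g. "set A - aluminium bracket"
    cad_reference: str        # link to the CAD model
    status: str = "open"      # open / selected / discarded

@dataclass
class ChangeRequest:
    description: str
    reason: str
    requested_on: date
    votes: dict = field(default_factory=dict)   # department -> True/False

    def approved_by_all(self):
        return bool(self.votes) and all(self.votes.values())

@dataclass
class Part:
    name: str
    design_sets: list = field(default_factory=list)
    change_requests: list = field(default_factory=list)  # full change history

    def request_change(self, request, departments):
        for dept in departments:
            request.votes[dept] = False      # waiting for approval
        self.change_requests.append(request)

if __name__ == "__main__":
    bracket = Part("engine bracket",
                   design_sets=[DesignVariant("set A", "cad/A.prt"),
                                DesignVariant("set B", "cad/B.prt")])
    cr = ChangeRequest("increase wall thickness", "tooling feedback", date.today())
    bracket.request_change(cr, ["planning", "tool shop", "finance"])
    print(len(bracket.design_sets), "parallel sets,", len(bracket.change_requests), "pending request")
```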
Before the final request to change the design of the part, which has impact on other departments like the planning department or the tool shop, the SW tool supports the designer in providing information about previous or parallel projects which have a similar focus. This provision of knowledge from previous projects and other relevant topics is realized by suggesting other reports, design rules or related projects of possible interest. The SW tool retrieves those documents by querying a search engine whose index is built by an indexing engine (see 3.3). This indexing engine works in the background and automatically indexes all reports and design change requests that are added to the database of the SW tool. No matter if the change in the design of part is applied or not, the new knowledge about this change is annotated and added to the database. In case of rejection, the cause is also annotated and added to prevent other designers from requesting the same change again. Furthermore a tag cloud showing the most associated keywords to this part allows the designer to directly switch to the related domain knowledge where design and construction knowledge can be obtained. Knowledge search engine and ontological structure The tool provides a special search mechanism to easily find knowledge from previous projects, which includes search by product & part, search by project phase/ date, free text search etc. Furthermore, this can be used to find design rules which are related to the parts and in this way support users in applying lean manufacturing principles. The approach is to combine the knowledge management tool with an ontological based structuring of data/knowledge for context sensitive enhancement, where product and process knowledge is stored. This ontology is also used to store all extracted context for future reuse. The process is supported by modular services which allow docking onto different systems to monitor and analyze user's interactions and support subsequent services (Context Extraction and Knowledge Search) through monitoring data. The search is therefore divided into several sub-services. These are:  Knowledge Monitoring and Indexing  Context Extraction and Context Model  Search User Interface The knowledge monitoring is used to monitor human-computer-interaction in order to extract the actual context to further enhance it with according knowledge. For that, user interaction is being closely monitored, "raw data" collected and enriched with available knowledge. At significantly changed circumstances in the context, configurable defined, all gathered data is forwarded to context extraction services to extract actual context. In a search application, the objective of such monitoring services is to observe user interactions in order to enable context extraction and knowledge enhancing services to provide assistance in refining the search query. In providing e.g. keywords that have been used in similar situations or filter allowing to narrow down search results in an appropriate thematically way, the user is contextually supported in his current situation. Beside the monitored situations, this service also persist metainformation about the knowledge-items which are associated with monitored events. This data is extracted by an appropriate parser and analyser of the Knowledge Monitoring and Indexing services and stored into a Lucene-Index to allow a fast and efficient reuse [START_REF]Apache Lucene Search Engine[END_REF]. 
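The indexing principle can be sketched as follows. This is a simplified stand-in for the Lucene-based engine described above, with invented sample documents; it only shows how an inverted index over reports and change requests supports keyword retrieval and a tag cloud.

```python
import re
from collections import defaultdict, Counter

# Simplified inverted index over design-change reports: a production system
# would use Apache Lucene itself, this only illustrates the principle.

class KnowledgeIndex:
    def __init__(self):
        self.inverted = defaultdict(set)   # keyword -> document ids
        self.documents = {}

    def add(self, doc_id, text):
        self.documents[doc_id] = text
        for token in re.findall(r"[a-z]+", text.lower()):
            self.inverted[token].add(doc_id)

    def search(self, query):
        """Return ids of documents containing every query keyword."""
        tokens = re.findall(r"[a-z]+", query.lower())
        if not tokens:
            return set()
        hits = self.inverted[tokens[0]].copy()
        for token in tokens[1:]:
            hits &= self.inverted[token]
        return hits

    def tag_cloud(self, top=5):
        counts = Counter({kw: len(ids) for kw, ids in self.inverted.items()})
        return counts.most_common(top)

if __name__ == "__main__":
    index = KnowledgeIndex()
    index.add("report-12", "Welding rule violated on bracket, change rejected")
    index.add("report-18", "Bracket redesigned for welding access, change accepted")
    print(index.search("welding bracket"))   # both reports match
    print(index.tag_cloud())
```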
Knowledge Items to be index for reuse may reside in any external system, which offers an interface to the outside (e.g. file system, database or WIKI). To enhance application functionality, monitoring services are monitoring states and conditions in order to trigger other services, for instance "auto-completion services" to give instantaneous feedback to the user during text input. To accomplish that, a monitoring service observes a text box, forwarding text input to a backend service for further processing. This approach enables direct feedback and does not require the user to explicitly trigger the service (e.g. by clicking a button). The overall structure to comprehend the described solution is a Service-oriented Monitoring Architecture (So-MA) within Networked Enterprises [START_REF] Ziplies | Service-based Knowledge Monitoring of Collaborative Environments for User-context Sensitive Enhancement[END_REF]. In the product development the use of the search engine and the ontology results in a faster access to information and knowledge that helps the designer in his work. He can faster find structured knowledge from previous projects and search for lean design rules and construction criteria regarding the component he is working on. Conclusion The paper describes a SBLD toolkit developed, aiming to support the product developer in making decisions during the development process. The emphasis is put upon the requirements and the implementation of a knowledge management software tool, which is applied in product development to support SBLD. It is a part of a knowledge based environment and provides set-based lean design rules and domain and specific knowledge to the product designer. This results in a better knowledge management, decision support and communication in product design. Less improvement-and optimization loops are needed because the developer now gets the support of the knowledge management tool. This can save development costs and time in the product development process. The application of the methods and tools is investigated within Volkswagen and its suppliers. Acknowledgments: The work presented is partly carried out in the scope of the current RTD project LeanPPD supported by the Commission of European Community, under NMP -Nanosciences, Nanotechnologies, Materials and new Production Technologies Program under the contract NMP-2008-214090. This document does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of its content.
19,572
[ "1002163", "1002164", "1002165", "1002166", "1002167", "1002168", "1002169", "1002014", "1002170" ]
[ "47316", "47316", "486154", "486154", "486154", "486154", "301006", "162211", "466782" ]
01472266
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472266/file/978-3-642-40352-1_48_Chapter.pdf
Monica Rossi email: [email protected] Sergio Terzi email: [email protected] Marco Garetti email: [email protected] Proposal of an Assessment Model for New Product Development Keywords: New Product Development (NPD), Assessment Maturity Model, Best Practices, Benchmarking de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction During the last ten years, New Product Development not only has been recognized as one of the corporate core functions [START_REF] Huang | Measuring new product success: an empirical investigation of Australian SMEs[END_REF], but also as a critical driver for company's survival [START_REF] Biemans | A picture paints a thousand numbers: a critical look at b2b product development research[END_REF] and prosperity [START_REF] Lam | Self-assessment of conflict management in client-supplier collaborative new product development[END_REF]. The actual uncertain and turbulent marketplace represents a tough challenge to the NPD process, which is often wasteful and not efficiently performed [START_REF] Rossi | Proposal of a method to systematically identify wastes in New Product Development Process[END_REF]. Companies are trying to come out with new efficient methods and techniques, able to guarantee successful products (Gonzales, 2002) in terms of quality, performance and cost. But a standardized framework, able to lead companies through an efficient and effective NPD process is very hard to introduce, due to the complexity and the variability from company to company of the NPD process itself. The first thing to do in order to improve NPD, is to perfectly understand and correctly address the object of the improvement. The problem is that literature state of the art lacks methodologies and tools capable to assess and evaluate how actually companies manage their whole NPD process. In fact the existing tools are only focused on one single aspect of the NPD process, missing the 360° perspective. This research aims to fill this gap, proposing a reference model able to entirely evaluate the NPD process performance. State of the Art of Assessment Tools Over the years several assessment tools have been introduced to evaluate specific aspects of the NPD process. Even if they miss the global perspective, they represent a good starting point to be considered in order to develop a comprehensive method. They are listed in the following.  Project management maturity assessment methodology: this method allows comparing the performance gained by similar organizations, evaluating the ratio PM/ROI (project management/ return on investments). Data are collected through a proper questionnaire [START_REF] Ibbs | Assessing Project Management Maturity[END_REF].  RACE (Readiness Assessment for Concurrent Engineering): this tool was developed at the beginning of the 1990s at the West Virginia University and it is used in software design and in the mechanical sector to assess the level of application of Concurrent Engineering within NPD. The model assesses two main areas, the organizational part (evaluated in 9 maturity levels) and the information technology part (5 levels are considered) [START_REF] Wognum | PMO-RACE: A Combined Method for Assessing Organisations for CE, Advances in Concurrent Engineering[END_REF]. RACE is based on a questionnaire, whose data are represented through a radar chart. 
 CERAM Model (Concurrent Engineering Readiness Assessment Model for Construction): this method derives from RACE model; it only differs in some contents, being suited for the construction field. CERAM considers two main perspectives, the process (which is evaluated through eighth levels) and the technology (assessed in four levels) [START_REF] Khalfan | Development of a readiness assessment model for concurrent engineering in construction[END_REF].  BEACON Model (Benchmarking and Readiness Assessment for Concurrent Engineering in Construction): this model has been introduced as a complement to the CERAM model. In fact it is able to assess not only process and technology, but also external elements, such as project and people. The efficiency of the organization in project management, the performance of the staff and the efficiency of the technology used in the company are evaluated with a 5 grades scale [START_REF] Anumba | Concurrent Engineering in Construction Projects[END_REF].  CMMI (Capability Maturity Model Integration): this model was developed in 1987 by SEI (Software Engineering Institute) in order to define the maturity level of the development process. It integrates best practices on improving development process with product maintenance. Five maturity levels are assessed, Ad hoc, Repeatable, Characteristic, Managed and Optimising [START_REF] Mark | Capability Maturity Model, Version l.1[END_REF]).  Mis/PyME: this model is able to assess the processes providing the organization with tools able to facilitate the fulfilment of company's objectives. This assessment model is based on the software indicators of the small and medium enterprises. It focuses on: data, people and performance (Díaz-Ley, 2010). These assessment tools are considered the most relevant in literature. The visual representation of RACE and BEACON through a radar chart makes them simple and intuitive in representing the AS-IS status. CMMI is valuable for its five maturity levels. The questionnaires used by the models are useful to understand which are the main criticalities and peculiarities of each of the assessed area. But a global model for assessing NPD in its whole is still missing. Basing on the analysed contributions and on empirical experiences, this research aims to fill this gap. 3 The Proposed Assessment Model for NPD The aim of the proposed model is to provide a "picture" of the AS-IS status of the NPD inside a company. To define the NPD maturity is a very tough task, because of the high number of elements concurring in the system, such as people, tools, and methods. For each of these area within NPD, five possible maturity levels, under the acronyms CLIMB, are considered:  Chaos: the area is usually chaotic and slightly structured.  Low: the area has a simple formalization and it is barely planned and controlled.  Intermediate: the area is structured and planned. Standard solutions are normally applied.  Mature: the area is structured, planned, controlled and measured at its different layers, often through specific quantitative techniques.  Best practice: the organization reached all the previous stages and the area continuously improves thanks to the analysis of variance of its results. The improvement of NPD performance is reached through incremental and innovative actions. 
In order to evaluate the proper maturity level of a company, a questionnaire has been developed for collecting the relevant information within the technical department and a radar chart has been created for the visual representation. They are detailed in the following sections. 3.1 The Questionnaire The questionnaire includes 33 multiple choice questions and tables, used to analyse 3 main perspectives of NPD: Organization, Knowledge Management, Process. These are arranged in 9-areasrespectively 3, 4, and 2. Each area is then evaluated through a variable number of questions. The structure is summarized in Table 1.  Organization. This is a huge topic that concerns all the people involved in everyday company's activities. Core elements are division of labour and tasks (Work Organization); coordination of people and activities, roles of engineers and designers (Roles and Coordination); practitioners skills and expertise (Skills and Competencies). When considering NPD, designers assume relevant importance, since the coordination and cooperation between them imply the goodness of the work environment. Moreover, well defined roles and responsibilities result in better organized NPD. Finally, enhancement of individual skills and competences determine a more agile and mature organization and better product performance.  Process. NPD is realized through amore or lessformalized process, described as a series of steps, activities and tasks to be accomplished in order to define the specifications of a new product, or the upgrade of an existing product. This process can be supported by a huge variety of tools and methods (Methods), such as Design for X techniques, Life Cycle Analysis, etc. The strict control of the NPD process is crucial, such as its continuous monitoring and improvement (Process Management). Moreover the process requires a large number of decisions to be taken every day: a chain of linked choices made considering both internal (Decision Making Factors) and external (Activities and Value) elements.  Knowledge Management. To maintain and protect the know-how of a company is crucial within any kind of industry. Everyday knowledge is created, shared, retrieved, and displayed; huge amount of data should be handled effectively. The better information are stored, represented, captured, and reused, the more efficient is the NPD. In order to preserve data, these should be formalized and represented in a way understandable by each practitioner inside the company, and easy to be re-used (Formalization). The higher the level of computerization, the faster and more precise the knowledge management process and the communication between people and departments are (Computerization). In order to achieve these results PLM (Product Lifecycle Management) / PDM (Product Data Management) software are suitable to be implemented. All the 9 areas are numerically evaluated through a proper score given to the related questions, as explained in the next section. Thanks to this score, it is possible to define the maturity level reached by the company in the different areas and it is possible to represent the maturity using a radar chart. The Radar Chart The questionnaire is composed by multiple choice questions and tables, associated to a conveniently defined score, used to state the maturity level achieved in NPD by the analysed organization. The Radar Chart (cf. Figure 2) is the way to graphically represent this maturity level. A group of questions determines the score of the area. 
Each question is answered through multiple-choice descriptive options, each corresponding to a numerical value between 0 and 3. The minimum maturity value for an area is obtained when all the answers score 0; conversely, the best-practice level is obtained when all the answers score 3. Intermediate combinations of answers are expressed as a normalized score (a percentage). An example of the score calculation is given in Table 2.
Table 2. Example of scoring of an area (Skills and Competencies)
Question 10. Product design is heavily based on skills and competence of the actors involved (technicians, designers, managers, etc.): selected answer c, "Yes, there is a one-to-one correspondence for tutoring (a junior designer is assigned to a more experienced designer acting as tutor, coach, or mentor)".
Another question of the area asks how the effectiveness of training is evaluated in terms of learning outcomes: a. using a 'visual' evaluation of individual behaviors (score 1, selected); b. using a test before and after the training session (score 2); c. using KPIs to assess the impact of training on business performance.
Following this procedure it is possible, for each of the 9 areas, to state the maturity level reached, considering the profiles proposed in Figure 1, and to represent the overall results in the radar chart (cf. Figure 2). Figure 1. Maturity levels profiles. Figure 2. Radar chart.
Preliminary Results Since February 2012 a sample of 30 companies has been analyzed. The variety of the sample is considerable, as shown in Table 3. Table 3. The sample (companies classified by number of employees). The radar chart in Figure 3 displays the average trend of the whole sample. The major criticalities are linked to the definition of customer value, which is rarely well defined and communicated within the organization. On the contrary, the attention paid to knowledge formalization is high. On average, the maturity level of the market varies between intermediate and mature. Figure 3. Global average. Figure 4. Medium enterprises vs global average. Apart from the formalization area, in which they are aligned with the global trend, very large enterprises are above average in all the considered perspectives (cf. Figure 5). Figure 5. Very large enterprises vs global average.
Conclusions and Future Developments The aim of the proposed assessment model is to give companies the possibility to assess their NPD process. Companies know the problems they face when introducing new products to the market, but they do not always consider these criticalities as a whole picture, which results in poorly focused improvement efforts. The proposed method gives companies the opportunity to assess themselves and also to benchmark against competitors. Further research will be based on applying the method in as many companies as possible, in order to test the validity of the model.
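As an illustration of the scoring procedure described above (not part of the original study), the mapping from answer points to a normalized area score and a CLIMB maturity level can be sketched as follows; the equal 20%-wide level bands and the example answers are assumptions, since the exact cut-offs are not published here.

```python
# Sketch of the scoring logic: every answer is worth 0-3 points, the area
# score is normalised to a percentage, and the percentage is mapped onto the
# five CLIMB maturity levels.

CLIMB_LEVELS = ["Chaos", "Low", "Intermediate", "Mature", "Best practice"]

def area_score(answer_points, max_per_question=3):
    """Normalised score of one area, as a value between 0 and 1."""
    return sum(answer_points) / (max_per_question * len(answer_points))

def maturity_level(score):
    # equal 20%-wide bands, assumed for this sketch
    band = min(int(score * 5), 4)
    return CLIMB_LEVELS[band]

if __name__ == "__main__":
    areas = {
        "Skills and Competencies": [3, 1, 2],      # answers to questions 10-12
        "Process Management":      [1, 1, 0, 2],   # answers to questions 13-16
    }
    for name, answers in areas.items():
        s = area_score(answers)
        print(f"{name}: {s:.0%} -> {maturity_level(s)}")
```

The nine normalized area scores computed this way are exactly the values plotted on the radar chart.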
A brief description of the selected areas follows: Macro Area Area # Question/ matrix Work Organization 1-5 Organization Roles and Coordination 6-9 Skills and Competencies 10-12 Process Management 13-16 Process Activities and Value Decision Making Factors 17-20 21-24 Methods 25 Knowledge Management Formalization Computerization 26-30 31-33 Acknowledgments This work was partly funded by the European Commission through the project Lean Product and Process Development -LeanPPD (NMP-2007-214090, www.leanppd.eu). The authors wish to acknowledge their gratitude and appreciation to the rest of the LeanPPD project partners for their contributions during the development of various ideas and concepts presented in this paper.
14,204
[ "990007", "831167", "837065" ]
[ "125443", "308253", "125443" ]
01472267
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472267/file/978-3-642-40352-1_49_Chapter.pdf
Daniele Cerri email: [email protected]@[email protected] Marco Taisch Sergio Terzi Multi-objective Optimization of Product Life-Cycle Costs and Environmental Impacts Keywords: LCA, Life Cycle Assessment, LCC, Life Cycle Costing, Multi Optimization, Genetic Algorithms, Product Life Cycle des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Today, in the modern global world, European companies need to find new competitive factors for facing the low-cost pressure of emerging countries. Sustainability, more and more pushed by international regulations (e.g. Kyoto Protocols, European Directives, etc.), could provide one of these factors: being able to develop ecofriendly, energy-efficient and green products before the others could give a competitive advantage to European industries for the next years. In this research, companies can be supported by methodologies already well-known in literature, like Life Cycle Costing (LCC) and Life Cycle Assessment (LCA), which permit to perform cost and environmental analysis for developing more sustainable products. However, most of the researches available in literature are not able to guarantee the reaching of the optimal solution, but in most of the experiences LCC and LCA are just used for simple evaluations. This paper aims to fill this gap, proposing a model that optimizes product life-cycle costs and environmental impacts at the same time. The paper is organized as follows: Section 2 illustrates the current state of the art of LCC, LCA and optimization methods; Section 3 describes the proposed model, with real application; finally, Section 4 concludes the paper. 2 State of the Art Life Cycle Cost and Life Cycle Assessment are well-known methodologies in the relevant literature. Both have been developed since the '60ies: LCC are "cradle-tograve" costs summarized as an economic model of evaluating alternatives for equipment and projects [START_REF] Barringer | A Life Cycle Cost Summary[END_REF]; LCA is a technique to assess environmental impacts associated with all the stages of a product's life from-cradle-to-grave [START_REF]Life Cycle Assessment: Principles and Practice[END_REF]. For this paper, it is interesting to analyze the state of the art of optimization applied to LCC and LCA. 39 papers for LCC and 40 papers for LCA, from the last 15 years, have been analyzed. We have classified the contributions in three clusters: (i) simple application of the methodology, (ii) use of a software, (iii) optimization. The first cluster considers papers that barely apply the methodology (LCC or LCA). The second cluster includes contributions that use a software to calculate costs and / or environmental impacts. The third cluster takes into account papers that optimize product life-cycle's costs and / or environmental impacts. We observe that only few papers consider optimization issues. In percentage, only 20.51% of LCC literature treats about optimization, reduced to 10% in LCA. Evidently, this research area is still emerging and this paper can give a contribution. Another interesting information is the massive use of software in LCA, while in LCC is practically missing. Use of software in LCA is justified by the increased complexity of the methodology, compared to LCC. The most popular LCA software are: SimaPro, GaBi Software and LCAiT. Focusing on papers dealing with optimization, the used optimization methods are: Linear Programming, Genetic Algorithm and Particle Swarm Optimization. 
In [1] Azapagic and Clift applied two algorithms: a single-objective linear programming model to optimize the LCA and a multi-objective linear programming model to optimize the LCA and the profit of a chemical plant producing thermoplastic materials. In [2] the same authors developed a multi-objective linear programming model to optimize LCA (understood as Global Warming Potential), costs and total production of a chemical system producing five boron products. Cattaneo [4] instead uses a single-objective linear programming model to optimize the LCC of a train traction system. Another six papers use genetic algorithms instead of linear programming. Gitzel and Herbort [8] applied a genetic algorithm, in several GA variants, to optimize the LCC of a DCS (Distributed Control System). Hinow and Mevissen [9] use a genetic algorithm to optimize the LCC of a substation by improving the maintenance activities. In Kaveh et al. [10] a genetic algorithm, namely NSGA-2, is used to perform a multi-objective optimization of the LCC and the initial costs of large steel structures; here the strong trade-off between initial costs and life-cycle costs is clearly visible. Frangopol and Liu [7] and Okasha and Frangopol [12] applied a multi-objective genetic algorithm to three different objectives: LCC, lifetime condition index value and lifetime safety index value in [7]; LCC, minimum redundancy index and maximum probability of failure in [12]. Both works deal with structural maintenance. Dufo-Lopez et al. [6] applied the Strength Pareto Evolutionary Algorithm (SPEA) to the multi-objective optimization of a stand-alone PV-wind-diesel system with battery storage; the objectives to be minimized are the levelized cost of energy (LCOE) and the equivalent CO2 life-cycle emissions (LCE), which can be viewed as an LCA indicator. Finally, two papers use particle swarm optimization: Kornekalis [11] uses a multi-objective particle swarm optimization for the optimal design of photovoltaic grid-connected systems (PVGCSs), maximizing the Net Present Value (NPV) and the pollutant gas emissions avoided thanks to the PVGCSs (which can be compared to LCA), while Wang et al. [START_REF] Wang | Applying Particle Swarm Optimization (PSO) in Product Life Cycle Cost Optimization[END_REF] applied Particle Swarm Optimization to minimize the LCC of a personal computer. A Genetic Algorithm is a subclass of evolutionary algorithms in which the elements of the search space G are binary strings (G=B*) or arrays of other elementary types [START_REF] Weise | Global Optimization Algorithms -Theory and Application[END_REF]. Genetic algorithms are widely used in LCC and LCA applications for the following reasons: they are more efficient than other methods when the number of variables increases; they handle multi-objective optimization without difficulty; and they are well suited to component-based systems (a product can be seen as a chromosome and its components as genes). Proposed Model In this section a model that optimizes product life-cycle costs and environmental impacts together is proposed. The model is based on NSGA-2 (Non-dominated Sorting Genetic Algorithm), one of the most popular and most thoroughly tested genetic algorithms. It has three distinctive characteristics: (i) a fast non-dominated sorting approach, (ii) a fast crowded-distance estimation procedure and (iii) a simple crowded-comparison operator [START_REF] Deb | A fast and elitist multiobjective genetic algorithm: NSGA-II[END_REF]. To run NSGA-2 we used GANetXL [START_REF] Savić | A DSS Generator for Multiobjective Optimisation of Spreadsheet-Based Models[END_REF], an add-in for Microsoft Excel. We applied NSGA-2 to LCC/LCA optimization and defined an experimental scenario composed of a preliminary set of 3 simplified test cases (Table 3), plus the application to a real industrial case concerning the design of a production line. The performance of NSGA-2 in the introductory test cases was compared with that of two other optimization methods, based on linear programming, in order to check the soundness of the proposed model. In the introductory tests, the life cycle of a generic product made of 10 subgroups is evaluated. Each subgroup has two alternatives. The data input comprises 4 types of cost and 3 kinds of environmental impact. The models must optimize two objectives: minimize the life-cycle cost and minimize the product's life-cycle environmental impact. Test A was used to check whether all the models reach the unique optimal solution, while Test B and Test C were used to observe the behaviour of the 3 models in the presence of multiple optimal solutions. Test B has no constraints, while Test C has one constraint. In Test A, all the models reached the optimal solution. In Test B and Test C the behaviour of the models differed: NSGA-2 returns a larger number of solutions than the other two; moreover, its solutions are non-dominated and some of them are certainly optimal (compared with WSM, the Weighted Sum Model). It is therefore possible to say that NSGA-2 performs better than the linear programming-based models. The NSGA-2 model was then applied to a real case, a fraction of an assembly line designed and manufactured by an Italian company. This line assembles a small car diesel engine.
Five stations were considered: the first is for silicon coating, the second assemblies the base, in the third screws are filled in, the fourth fills screws and rotates the pallets, the last screws the under base. All of these stations can have automatic, semi-automatic or manual alternatives. Each station has 6 alternatives: 3 automatic, 2 semi-automatic and 1 manual. 8 costs and 2 environmental impacts were considered for being optimized. In this case, the algorithm chromosome represents the line and stations represent the genes. The model has two types of constraints: the availability of the fraction of the assembly line must be greater than 0.95; all the stations must have an alternative. Below we report the model written in analytical form. min (𝐶𝑖𝑛 * 𝑥 ! + 𝐶𝑒 * !" !!! 𝑥 ! + 𝐶𝑟𝑖𝑐 * 𝑥 ! + 𝐶𝑜𝑝 * 𝑥 ! + 𝐶𝑐𝑜𝑛 * 𝑥 ! + 𝐶𝑎𝑖𝑟 * 𝑥 ! + 𝐶𝑚𝑜 * 𝑥 ! + 𝐶𝑚𝑜𝑟𝑖𝑝 * 𝑥 ! )(1) min (𝐸𝐼𝑠𝑡 * !" !!! 𝑥 ! + 𝐸𝐼𝑒𝑙 * 𝑥 ! )(2) Subject to 𝐴 ! 𝑥 ! ! !!! * 𝐴 ! 𝑥 ! !" !!! * 𝐴 ! 𝑥 ! !" !!!" * 𝐴 ! 𝑥 ! !" !!!" * 𝐴 ! 𝑥 ! !" !!!" ≥ 0.95(3) 𝑥 ! ! !!! = 1(4) 𝑥 ! !" !!! = 1(5) 𝑥 ! !" !!!" = 1 6 𝑥 ! !" !!!" = 1(7) 𝑥 ! !" !!!" = 1(8) 𝑥 ! ∈ 0,1 𝑖 = 1,2, … ,30 where: Cin is initial cost, Ce is energy cost, Cric is spare parts cost, Cop is labor cost, Ccon is consumable cost, Cair is air cost, Cmo is preventive maintenance cost, Cmorip is corrective maintenance cost, EIst is environmental impact of the station and EIel is environmental impact of electric energy. A is availability and x i is binary variable. So the model must optimize costs and environmental impacts along the product's life cycle, or rather the model must find the best combination of station to optimize the two objective. Two scenarios were studied: one where the line is installed in Eastern Europe and one where the line is installed in Western Europe. The differences are the labor and maintenance staff costs: in fact in Western Europe they are 3 or 4 times greater than to Eastern Europe. Fig. 1 and 2 report the results in Eastern Europe and Western Europe scenario. In Eastern Europe Scenario the solution, that minimizes life-cycle cost, is composed of all manual stations, while in Western Europe the solution is composed of all automatic stations. This happens for differences in labor costs: in Eastern Europe, where labor costs are lower, there's convenience to install manual stations, instead in Western Europe it agrees automatic stations. To validate the obtained results, they were subjected to company. It has considered correct the solutions. Conclusion In this paper, a model is proposed to optimize product life-cycle costs and environmental impacts together. It was tested and compared to other two models. Then it was applied to a real case and the results have been validated. This presents some advantages, as (i) relevance for real case and (ii) comparison with other two models, and some criticism, as (i) performances are not evaluated and (ii) LCA not well investigated. The future developments can be (i) the inclusion of / performance equations and (ii) the deepening of LCA. [ 4 ] 4 instead use a single objective linear programming model to optimize LCC of a train traction system. Other 6 papers use genetic algorithm instead of linear programming. Gitzel and Herbort [8] applied a genetic algorithm to optimize LCC of a DCS (Distributed Control System), using different GA variants. Hinow and Mevissen [9] use genetic algorithm to optimize LCC of a substation, improving the maintenance activities. In Kaveh et al. 
[10] genetic algorithms, exactly NSGA-2, is used to perform a multi-objective optimization of LCC and initial costs of large steel structures. Here it is possible to see the strong trade-off between initial costs and life cycle costs. Frangopol and Liu [7] and Okasha and Frangopol [12] applied a multi-objective genetic algorithm to optimize three different objectives: LCC, lifetime condition index value and lifetime safety index value for [7]; LCC, minimum redundancy index and maxi-mum probability of failure for [12]. They are applied on structural maintenance. Dufo-Lopez et al. [6] instead applied the Strength Pareto Evolutionary Algorithm (SPEA) to the multi-objective optimization of a stand-alone PV-wind-diesel system with batteries storage. The objectives to be minimized are the levelized cost of energy (LCOE) and the equivalent CO 2 life cycle emissions (LCE). LCE can be viewed as LCA.Finally, two papers use the particle swarm optimization. Kornekalis [11] use a multi-objective particle swarm optimization to the optimal design of photovoltaic grid-connected systems (PVGCSs), maximizing Net Present Value (NPV) and the pollutants gas emissions avoided due to use PVGCSs (this can be compared to LCA); Wang et al.[START_REF] Wang | Applying Particle Swarm Optimization (PSO) in Product Life Cycle Cost Optimization[END_REF], instead, applied a Particle Swarm Optimization to minimize LCC of a personal computer. Fig. 1 .Fig. 2 . 12 Fig. 1. Results (Eastern Europe Scenario) Fig. 2. Results (Western Europe Scenario) Table 1 . 1 Test parameters Data Input Optimal Solutions Constraint Test A A Unique No Test B B Pareto Front No Test C B Pareto Front Yes (1)
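The structure of objectives (1)-(2) and constraints (3)-(8) can be illustrated with a small script. The sketch below does not reproduce the authors' GANetXL/NSGA-2 setup: the cost, impact and availability figures are hypothetical placeholders, and since the line has only 6^5 = 7776 possible configurations, the Pareto front is obtained here by exhaustive enumeration and non-dominated filtering rather than by a genetic algorithm.

```python
# Illustrative sketch (not the authors' GANetXL/NSGA-2 implementation).
# All numerical data below are hypothetical placeholders.
from itertools import product
import random

random.seed(0)
N_STATIONS, N_ALTS = 5, 6  # 5 stations, 6 alternatives each (automatic/semi-automatic/manual)

# alts[s][a] = (summed life-cycle cost terms, summed impact terms, availability A_i)
alts = [[(random.uniform(50, 200),
          random.uniform(10, 40),
          random.uniform(0.985, 0.999))
         for _ in range(N_ALTS)] for _ in range(N_STATIONS)]

def evaluate(config):
    """config = chosen alternative index per station (the 'chromosome' of the paper)."""
    cost = sum(alts[s][a][0] for s, a in enumerate(config))     # objective (1)
    impact = sum(alts[s][a][1] for s, a in enumerate(config))   # objective (2)
    availability = 1.0
    for s, a in enumerate(config):
        availability *= alts[s][a][2]                           # left-hand side of (3)
    return cost, impact, availability

# Enumerate configurations satisfying the availability constraint (3);
# constraints (4)-(8) hold by construction (one alternative per station).
feasible = []
for config in product(range(N_ALTS), repeat=N_STATIONS):
    cost, impact, avail = evaluate(config)
    if avail >= 0.95:
        feasible.append((cost, impact, config))

# Pareto filtering: sweep by increasing cost, keep strict improvements in impact.
feasible.sort()
pareto, best_impact = [], float("inf")
for cost, impact, config in feasible:
    if impact < best_impact:
        pareto.append((cost, impact, config))
        best_impact = impact

for cost, impact, config in pareto:
    print(f"alternatives {config}: cost={cost:.1f}, impact={impact:.1f}")
```

For larger component-based systems, where enumeration becomes impractical, an NSGA-II implementation would take over the role of the exhaustive search in this sketch.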
12,190
[ "990110", "931901", "831167" ]
[ "125443", "125443", "308253" ]
01472270
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472270/file/978-3-642-40352-1_51_Chapter.pdf
Joaquín Bautista email: [email protected] Rocío Alfaro email: [email protected] Alberto Cano email: [email protected] Incorporating Regularity of Required Workload to the MMSP-W with Serial Workstations and Free Interruption of the Operations Keywords: Manufacturing, Sequencing, Work overload, Linear programming We propose a mathematical model to solve an extension to the mixed-model sequencing problem with work overload minimization (MMSP-W) for production lines with serial workstations and parallel homogeneous processors and regularizing the required workload. We performed a computational experience with a case study of the Nissan engine plant in Barcelona. Introduction Manufacturing lines with mixed products are very common in Just in Time (JIT) and Douky Seisan (DS) environments. These lines, composed of multiple workstations must be flexible enough to treat different product types. These lines usually consist of a set ( K ) of workstations laid out in series. Each workstation ( k = 1,…, K ) is characterized by the use of the human resources, tools and automated systems necessary to carry out the work assigned to the workstation. The set of tasks assigned to the workstation is called workload, and the average time required to process these tasks at normal activity rates is called workload time or processing time. An important attribute of these production lines is flexibility. The products (such as engines or car bodies) that circulate through the lines are not completely identical. Although some of the products may be similar or of the same type, they may require different resources and components and therefore may require different processing times. The desired flexibility of these mixed-product lines requires that the sequence in which the product types are manufactured follow two general principles: [START_REF] Boysen | Sequencing mixed-model assembly lines: Survey, classification and model critique[END_REF] to minimize the stock of components and semi-processed products and (2) to maximize the efficiency of the line, manufacturing the products in the least amount of time possible. A classification of sequencing problems arising in this context was given in [START_REF] Boysen | Sequencing mixed-model assembly lines: Survey, classification and model critique[END_REF]: 1. Mixed-model sequencing. The aim in this problem is to obtain sequences that complete the maximum work required by the work schedule. 2. Car sequencing. These problems are designed to obtain sequences that meet a set of constraints related to the frequency in which the workstations are required to incorporate special options (e.g., sunroof, special seats or a larger engine) within the products. 3. Level scheduling. These problems focus on obtaining level sequences for the production and usage of components. The MMSP-W [START_REF] Yano | Sequencing to minimize work overload in assembly lines with product options[END_REF][START_REF] Scholl | Pattern based vocabulary building for effectively sequencing mixed-model assembly lines[END_REF] consists of sequencing T products, grouped into a set of I prod- uct types, of which d i are of type i ( i = 1,…, I ). A unit of product type i ( i = 1,…, I ), when is at workstation k ( k = 1,…, K ), requires a processing time equal to p i, k for each homogeneous processor (e.g., operator, robot or human-machine system) at normal activity, whereas the standard time granted at each station to work on an output unit is the cycle time, c . 
Sometimes a workstation, k , can work on any product a maximum time l k , which is called time window, and is longer than the cycle time ( l k > c ), which causes that the time available to process the next unit is reduced. When it is not possible to complete all of the work required, it is said that an overload is generated. The objective of MMSP-W is to maximize the total work completed, which is equivalent to minimize the total work overload generated (see Theorem 1 in [START_REF] Bautista | Solving mixed model sequencing problem in assembly lines with serial workstations with work overload minimisation and interruption rules[END_REF]), sequencing the units on the line, considering the interruption of the operations at any time between the time of completion of one cycle and the time of termination marked by the time window associated with that cycle [START_REF] Bautista | A bounded dynamic programming algorithm for the MMSP-W considering workstation dependencies and unrestricted interruption of the operations[END_REF]. In addition, in our proposal we will maintain constant the cumulative time of work required at the workstations in all positions of the product sequence. 2 Models for the MMSP-W Reference Models For the MMSP-W with serial workstations, free interruption of the operations and homogeneity of required workload, we begin with several models as reference (see table 1). Table 1. Comparison of the major differences of models M1 to M4 and M4∪3. M1 M2 M3 M4 M_4∪3 Objective Max V Min W Max V Min W Min W/ Max V Start instants Absolute s k,t Relative ˆ s k,t Absolute s k,t Relative ˆ s k,t Relative ˆ s k,t Variables v k,t w k,t v k,t w k,t w k,t , v k,t Time window l k !k c !k l k !k l k !k l k !k Rank for bk b k ! 1 b k = 1 b k ! 1 b k = 1 b k ! 1 Links between stations No No Yes Yes Yes The models from the literature, M1 [START_REF] Yano | Sequencing to minimize work overload in assembly lines with product options[END_REF] and M2 [START_REF] Scholl | Pattern based vocabulary building for effectively sequencing mixed-model assembly lines[END_REF], do not consider links between workstations. M1 is focused on maximize the total work performed, using an absolute time scale at each station and considering more than one homogeneous processor at each workstation. M2 is focused on minimize the total work overload with relative time scale at each station corresponding to each processed product unit and only considers one processor at each workstation. An extension of these models, considering links between consecutive stations, are models M3 (M1 extended) and M4 (M2 extended) proposed by [START_REF] Bautista | Solving mixed model sequencing problem in assembly lines with serial workstations with work overload minimisation and interruption rules[END_REF]. Moreover, considering the equivalence of the objective functions of M3 and M4, we can combine them and obtain the M_4∪3 [START_REF] Bautista | Modeling and solving a variant of the mixed-model sequencing problem with work overload minimisation and regularity constraints. An application in Nissan's Barcelona Plant[END_REF] model that considers the relative times scales used in M4. Regularity of Required Workload The overload concentrations at certain times during the workday may be undesirable. One way to avoid this occurrence is to obtain product sequences that regulate the cumulative time of required work at the workstations in all positions of the product sequence. 
To do this, we first consider the average time required at the k-th workstation to process a product unit, which is the processing time for an ideal unit at workstation k. If ṗ_k is that average time, then the ideal work rate for station k (k = 1,…,|K|) is determined as follows:

ṗ_k = (b_k / T) Σ_{i=1}^{|I|} p_{i,k} · d_i,   k = 1,…,|K|   (1)

Consequently, the ideal total work needed to complete t output units at workstation k is:

P*_{k,t} = t · ṗ_k,   k = 1,…,|K|; t = 1,…,T   (2)

Moreover, if we consider the actual total work required at the k-th workstation to process a total of t product units, of which X_{i,t} = Σ_{τ=1}^{t} x_{i,τ} are of type i (i = 1,…,|I|), then we have:

P_{k,t} = b_k Σ_{i=1}^{|I|} p_{i,k} · X_{i,t} = b_k Σ_{i=1}^{|I|} p_{i,k} (Σ_{τ=1}^{t} x_{i,τ}),   k = 1,…,|K|; t = 1,…,T   (3)

where x_{i,t} (i = 1,…,|I|; t = 1,…,T) is a binary variable that is equal to 1 if a product unit of type i is assigned to the t-th position of the sequence, and 0 otherwise. One way to measure the irregularity of the required workload at a set of workstations over the workday is to cumulate the difference between the actual and the ideal work required for each output unit at each workstation:

ΔQ(P) = Σ_{t=1}^{T} Σ_{k=1}^{|K|} δ²_{k,t}(P),   where δ_{k,t}(P) = P_{k,t} − P*_{k,t}   (4)

If we consider the properties derived from maintaining a production mix when manufacturing product units over time, we can define the number of units of product type i, out of a total of t units, which should ideally be manufactured to maintain the production mix as:

X*_{i,t} = (d_i / T) · t,   i = 1,…,|I|; t = 1,…,T   (5)

Therefore, the ideal point X* = (X*_{1,1},…, X*_{|I|,T}) presents the property of leveling the required workload, because at that point the non-regularity of the required work is optimal, P_{k,t} − P*_{k,t} = δ_{k,t}(P) = 0 and thus ΔQ(P) = 0, as shown in [4] (see Theorem 1 in [4]):

P_{k,t} = b_k Σ_{i=1}^{|I|} p_{i,k} · X*_{i,t}  ⇒  P_{k,t} = b_k Σ_{i=1}^{|I|} p_{i,k} · d_i · (t/T) = t · ((b_k/T) Σ_{i=1}^{|I|} p_{i,k} · d_i) = t · ṗ_k = P*_{k,t}   (6)

MMSP-W Model for Workload Regularity

Considering the properties described above and the reference model M_4∪3 [4], we limit the values of the cumulative production variables, X_{i,t}, to the integers closest to the ideal production values, X*_{i,t} = d_i · t/T, and thus obtain a new model, the M_4∪3_pmr. The parameters and variables are presented below:

Parameters
K  Set of workstations (k = 1,…,|K|)
b_k  Number of homogeneous processors at workstation k
I  Set of product types (i = 1,…,|I|)
d_i  Programmed demand of product type i
p_{i,k}  Processing time required by a unit of type i at workstation k for each homogeneous processor (at normal activity)
T  Total demand; obviously, Σ_{i=1}^{|I|} d_i = T
t  Position index in the sequence (t = 1,…,T)
c  Cycle time, the standard time assigned to the workstations to process any product unit
l_k  Time window, the maximum time that each processor at workstation k is allowed to work on any product unit, where l_k − c > 0 is the maximum time that the work in progress (WIP) is held at workstation k

Variables
x_{i,t}  Binary variable equal to 1 if a product unit of type i (i = 1,…,|I|) is assigned to position t (t = 1,…,T) of the sequence, and 0 otherwise
s_{k,t}  Start instant for the t-th unit of the product sequence at station k (k = 1,…,|K|)
ŝ_{k,t}  Positive difference between the start instant and the minimum start instant of the t-th operation at station k: ŝ_{k,t} = [s_{k,t} − (t + k − 2)c]^+, with [x]^+ = max{0,x}
v_{k,t}  Processing time applied to the t-th unit of the product sequence at station k for each homogeneous processor (at normal activity)
w_{k,t}  Overload generated for the t-th unit of the product sequence at station k for each homogeneous processor (at normal activity), measured in time

Model M_4∪3_pmr:

Min W = Σ_{k=1}^{|K|} b_k (Σ_{t=1}^{T} w_{k,t})  ⟺  Max V = Σ_{k=1}^{|K|} b_k (Σ_{t=1}^{T} v_{k,t})   (7)

Subject to:

Σ_{t=1}^{T} x_{i,t} = d_i,   i = 1,…,|I|   (8)
Σ_{i=1}^{|I|} x_{i,t} = 1,   t = 1,…,T   (9)
v_{k,t} + w_{k,t} = Σ_{i=1}^{|I|} p_{i,k} x_{i,t},   k = 1,…,|K|; t = 1,…,T   (10)
ŝ_{k,t} ≥ ŝ_{k,t−1} + v_{k,t−1} − c,   k = 1,…,|K|; t = 2,…,T   (11)
ŝ_{k,t} ≥ ŝ_{k−1,t} + v_{k−1,t} − c,   k = 2,…,|K|; t = 1,…,T   (12)
ŝ_{k,t} + v_{k,t} ≤ l_k,   k = 1,…,|K|; t = 1,…,T   (13)
ŝ_{k,t} ≥ 0,   k = 1,…,|K|; t = 1,…,T   (14)
v_{k,t} ≥ 0,   k = 1,…,|K|; t = 1,…,T   (15)
w_{k,t} ≥ 0,   k = 1,…,|K|; t = 1,…,T   (16)
x_{i,t} ∈ {0,1},   i = 1,…,|I|; t = 1,…,T   (17)
ŝ_{1,1} = 0   (18)
Σ_{τ=1}^{t} x_{i,τ} ≥ ⌊t · d_i / T⌋,   i = 1,…,|I|; t = 1,…,T   (19)
Σ_{τ=1}^{t} x_{i,τ} ≤ ⌈t · d_i / T⌉,   i = 1,…,|I|; t = 1,…,T   (20)

In the model, the equivalent objective functions (7) are the total work performed (V) and the total work overload (W). Constraint (8) requires that the programmed demand be satisfied. Constraint (9) indicates that only one product unit can be assigned to each position of the sequence. Constraint (10) establishes the relation between the processing times applied to each unit at each workstation and the overload generated for each unit at each workstation. Constraints (11)-(14) constitute the set of possible solutions for the start instants of the operations at the workstations and the processing times applied to the products in the sequence for each processor. Constraints (15) and (16) indicate that the processing times applied to the products and the generated overloads, respectively, are non-negative. Constraint (17) requires the assignment variables to be binary. Constraint (18) establishes the earliest instant at which the assembly line can start its operations. Finally, constraints (19) and (20) are those that indirectly incorporate the regularity of the required workload into the MMSP-W.

Computational experience

To study the effect of incorporating the regularity restrictions on the required work into the M_4∪3, we performed a case study of the Nissan powertrain plant in Barcelona. This plant has an assembly line with twenty-one workstations (m_1,…,m_21) assembling nine types of engines (p_1,…,p_9), grouped into three families (4x4, vans and trucks), whose processing times at the stations range between 89 and 185 s. For the experiment, we considered a set E of 23 instances (ε = 1,…,23) associated with a demand plan of 270 engines, an effective cycle time c = 175 s and an identical time window for all workstations, l_k = 195 s (k = 1,…,21) (see Tables 5 and 6 in [4]). To implement the models, the Gurobi v4.5.0 solver was used on an Apple iMac computer with an Intel Core i7 2.93 GHz processor and 8 GB of RAM running Mac OS X 10.6.7. The solutions were obtained by allowing a maximum CPU time of 7200 s for each model and for each of the 23 demand plans in the NISSAN-9ENG set. To estimate the quality of the experimental results, we use the following indicators:

RPD(f, ε) = [f(S*_{4∪3}(ε)) − f(S*_{4∪3_pmr}(ε))] / f(S*_{4∪3}(ε)) · 100,   f ∈ {W, ΔQ(P)}; ε ∈ E   (21)
RPD(f) = (Σ_{ε=1}^{|E|} RPD(f, ε)) / |E|,   f ∈ {W, ΔQ(P)}   (22)

Table 2 and Figure 1 show the values of RPD(W) and RPD(ΔQ(P)) obtained for the 23 instances of the NISSAN-9ENG set. According to these results, we can conclude the following:
• We can only guarantee the optimal solutions for instances 10 and 19, given the limit of 7200 s of run time.
• The reference model M_4∪3 achieves a better average overload than M_4∪3_pmr (a difference of 5.79% in RPD(W)) on the set of 23 instances.
• The incorporation of constraints (19) and (20) into the reference model M_4∪3 produces a significant improvement in the regularity of the required work (RPD(ΔQ(P)) = 92.54%).

Conclusions

We have formulated a model for the MMSP-W, M_4∪3_pmr, that minimizes the total work overload or, equivalently, maximizes the total work completed, considering serial workstations, parallel processors, free interruption of the operations, and restrictions to regularize the required work. A case study of the Nissan engine plant in Barcelona has been carried out to compare the new model with the reference model M_4∪3. The case study covers the overall production of 270 units of 9 different types of engines, for a workday divided into two shifts, and assumes that the particular demands for each type of engine may vary over time; this is reflected in 23 instances, each of them representing a different demand plan. For the computational experience, the solver Gurobi 4.5.0 was used. Solutions were found for the 23 instances, allowing a maximum CPU time of 7200 s for each instance; within this CPU time, we can only guarantee the optimal solutions for instances 10 and 19. The results show that incorporating the restrictions that regularize the required work into the reference model M_4∪3 produces an average gain of 92.54% in terms of regularity of the required work, while worsening the work overload by an average of 5.79%. We propose as future research lines: (1) to design and implement heuristics and exact procedures to solve the problem under study; (2) to consider the minimization of the work overload and the maximization of the regularity of the required work as simultaneous objectives of the problem; and (3) to incorporate into the proposed models other desirable productive attributes, such as preservation of the production mix and the regular consumption of product parts.

Fig. 1. Values of RPD for the functions W (dark grey) and ΔQ(P) (grey), and average values, for the 23 instances of the NISSAN-9ENG set.
Table 2. Values of RPD for the functions W and ΔQ(P), and average values.

Acknowledgements. The authors greatly appreciate the collaboration of Nissan Spanish Industrial Operations (NSIO). This work was funded by project PROTHIUS-III, DPI2010-16759, including EDRF funding from the Spanish government.
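The mechanics behind constraints (10)-(13) and the regularity measure of equations (1)-(4) can be illustrated with a short evaluation routine. The sketch below uses illustrative data rather than the NISSAN-9ENG instances, assumes b_k = 1 processor per station, and evaluates a fixed sequence greedily (processing as much work as possible at every station within the time window), so it yields a feasible value of the overload W rather than the LP-optimal one.

```python
# Minimal sketch: evaluate a given product sequence under the MMSP-W recursion
# (free interruption within the time window l_k) and compute the regularity metric.
# Data are hypothetical; this is not the Gurobi model of the paper.
def evaluate_sequence(seq, p, c, l):
    """seq: list of product types (0-based); p[i][k]: processing time of type i at station k."""
    K, T = len(p[0]), len(seq)
    W = 0.0
    s_prev_t = [0.0] * K          # s_hat of the previous position, per station
    v_prev_t = [0.0] * K
    # ideal work rate per station (eqs. 1-2), from the demand implied by the sequence
    d = [seq.count(i) for i in range(len(p))]
    p_dot = [sum(p[i][k] * d[i] for i in range(len(p))) / T for k in range(K)]
    P = [0.0] * K                 # actual cumulative required work per station (eq. 3)
    delta_Q = 0.0
    for t, i in enumerate(seq):
        s_prev_k, v_prev_k = 0.0, 0.0          # station k-1 values for this position
        for k in range(K):
            required = p[i][k]
            s_hat = max(0.0,
                        s_prev_t[k] + v_prev_t[k] - c if t > 0 else 0.0,   # cf. (11)
                        s_prev_k + v_prev_k - c if k > 0 else 0.0)         # cf. (12)
            v = min(required, l[k] - s_hat)    # work that fits in the window, cf. (13)
            W += required - v                  # overload, cf. (10)
            s_prev_t[k], v_prev_t[k] = s_hat, v
            s_prev_k, v_prev_k = s_hat, v
            P[k] += required
            delta_Q += (P[k] - (t + 1) * p_dot[k]) ** 2    # eq. (4)
    return W, delta_Q

# Hypothetical example: 2 product types, 3 stations, c = 175 s, l_k = 195 s
p = [[170, 160, 180],   # processing times of type 0 at stations 0..2
     [185, 175, 165]]   # type 1
seq = [0, 1, 0, 1, 1, 0]
print(evaluate_sequence(seq, p, c=175, l=[195, 195, 195]))
```

A routine of this kind is what a heuristic or metaheuristic for the problem would call to score candidate sequences on both the overload and the regularity criteria.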
17,605
[ "1002172", "1002173", "1002174" ]
[ "85878", "85878", "85878" ]
01472271
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472271/file/978-3-642-40352-1_52_Chapter.pdf
Joaquín Bautista email: [email protected] Cristina Batalla email: [email protected] Rocío Alfaro email: [email protected] Incorporating Ergonomics Factors into the TSALBP Keywords: Manufacturing, Assembly Line Balancing, Somatic and Psychic Factors. 1 Preliminaries Mixed-product assembly lines have ergonomic risks that can affect the worker productivity and lines. This work proposes to incorporate ergonomic factors to the TSALBP (Time and Space constrained Assembly Line Balancing Problem). Therefore, we present several elements for new models to assign the tasks to a workstation considering technological, management and ergonomic factors. 1. Cumulative constraints, associated with the available time of work in the stations. 2. Precedence constraints, established by the order in which the tasks can be executed. Other problems with additional considerations are included in the GALBP class [START_REF] Becker | A survey on problems and methods in generalized assembly line balancing[END_REF], like the case in which the assignment of tasks is restricted [START_REF] Scholl | Absalom: Balancing assembly lines with assignment restrictions[END_REF] or when certain tasks must be assigned in block [START_REF] Battaïa | Reduction approaches for a generalized line balancing problem[END_REF]. Some of the limitations in literature [START_REF] Chica | Multiobjective constructive heuristics for the 1/3 variant of the time and space assembly line balancing problem: ACO and random greedy search[END_REF][START_REF] Chica | Including different kinds of preferences in a multiobjective and algorithm for time and space assembly line balancing on different Nissan scenarios[END_REF] take into account factors such as: the number of workstations (m); the standard time assigned to each workstation (c), which is calculated through an average of the processing times of all tasks according to the proportions, of each type of product, that are present in the demand plan, and the available space or area (A) to materials and tools to each workstation. In these conditions we can define a family of problems under the acronym TSALBP (Time and Space constrained Assembly Line Balancing Problems) [START_REF] Chica | Multiobjective constructive heuristics for the 1/3 variant of the time and space assembly line balancing problem: ACO and random greedy search[END_REF][START_REF] Chica | Including different kinds of preferences in a multiobjective and algorithm for time and space assembly line balancing on different Nissan scenarios[END_REF] that consist on: given a set J of J tasks with their temporal t j and spatial a j attrib- utes ( j = 1,…, J ) and a precedence graph, each task must be assigned to a single station, such that: • All the precedence constraints are satisfied. • No station workload time is greater than the cycle time (c). • No area required by the station is greater than the available area per station (A). Then, if we consider the types of limitations defined above, we have eight types of problems, according to the objective of each one of them [START_REF] Chica | Multiobjective constructive heuristics for the 1/3 variant of the time and space assembly line balancing problem: ACO and random greedy search[END_REF]. 
For example, the model for TSALBP-1 is the following:

Min z_1 = m   (1)

Subject to:

m − Σ_{k=1}^{m_max} k · x_{j,k} ≥ 0,   (j = 1,…,|J|)   (2)
Σ_{j=1}^{|J|} t_j x_{j,k} ≤ c,   (k = 1,…,m_max)   (3)
Σ_{j=1}^{|J|} a_j x_{j,k} ≤ A,   (k = 1,…,m_max)   (4)
Σ_{k=1}^{m_max} x_{j,k} = 1,   (j = 1,…,|J|)   (5)
Σ_{k=1}^{m_max} k · (x_{j,k} − x_{i,k}) ≥ 0,   (1 ≤ i, j ≤ |J| : i ∈ P_j)   (6)
x_{j,k} ∈ {0,1},   (j = 1,…,|J|) ∧ (k = 1,…,m_max)   (7)

where x_{j,k} is a binary variable equal to 1 if task j (j = 1,…,|J|) is assigned to workstation k (k = 1,…,m_max) and 0 otherwise; P_j is a parameter that indicates the set of preceding tasks of task j (j = 1,…,|J|); and the objective is to minimize the number of workstations (m = |K|).

2 Ergonomics in assembly lines

One of the main objectives of ergonomics is to adapt the operations that workers must perform so as to guarantee their safety and welfare and to improve their efficiency. Although the problems caused by a poor workstation design, in ergonomic terms, affect all areas of employment, manufacturing is one of the most affected. In particular, in manufacturing assembly lines with mixed products, ergonomic risk is present and may affect the performance of both the workers and the line. In such environments, ergonomic risk is basically given by the components related to somatic comfort and psychological comfort. Somatic comfort is determined by the set of physical demands to which a worker is exposed throughout the working day. To analyze this type of ergonomic risk, three factors, among others, can be considered: • Postural load: During working hours the workers may repeatedly adopt inappropriate or awkward postures that can result in fatigue and musculoskeletal disorders in the long run [7]. • Repetitive movements: A workstation may involve a set of repeated upper-limb movements by the worker, which may cause musculoskeletal injuries in the long term [8]. • Manual handling: Some tasks involve lifting, moving, pushing, grasping and transporting objects [9]. On the other hand, psychological comfort refers to the set of mental conditions that workers need in order to carry out their tasks. These conditions are: autonomy, social support, acceptable workloads and a favorable work environment.

The TSALBP with ergonomics

Our proposal is to incorporate into the TSALBP, or into other assembly line problems, the factors that give rise to these ergonomic problems. Otto and Scholl [10] employ several techniques to incorporate ergonomic risks into the SALBP-1. In a first approximation, given the set K of stations, for each workload S_k assigned to workstation k (k = 1,…,|K|) the ergonomic risk F(S_k) is determined. Moreover, a maximum value, Erg, is established for that ergonomic risk. Consequently, we can add to the original models the following constraints, which satisfy F(S_k) ≤ F(S_k ∪ {j}) (∀S_k, ∀j ∈ J):
F(S_k) ≤ Erg,   (k = 1,…,|K|)   (8)

As an alternative to conditions (8), Otto and Scholl propose the ErgoSALBP-1, with a new objective function composed of two terms [10]; that is:

Min K'(x) = K(x) + ω · ξ(F(S_k))   (9)

where K(x) is the number of workstations, ω is a non-negative weight and ξ(F(S_k)) is a function that includes the ergonomic risk factors F(S_k), k = 1,…,|K|. Logically, the constraints (8) presented in [10] can be completed if we also take into account, in the design of the line, a minimum value for the ergonomic risk. In addition, we can consider that this risk depends on the factor (somatic or psychic) of interest. In this situation, we have:

F_φ^min ≤ F_φ(S_k) ≤ F_φ^max,   (k = 1,…,|K|); ∀φ ∈ Φ   (10)

where Φ is the set of factors, F_φ^min and F_φ^max are the minimum and maximum ergonomic risk for factor φ ∈ Φ, and F_φ(S_k) is the ergonomic risk at workstation k ∈ K. Another way to treat the problem is to classify the workstations into several categories (e.g. from 1 to 4) depending on different factors, such as movements, loads, duration, etc. From this point, we can condition the design of the line on the different categories of workstations that may be present in minimum and maximum percentages. Then, if we define H as the set of ergonomic risk components, in our case somatic (σ), psychic (ϕ) or both (σ ∪ ϕ), we can establish a new classification for the TSALBP (see Table 1):

Table 1. TSALBP_erg typology. The suffixes 1, 2 and 3 refer to the minimization of m, c and A, respectively. The suffix F refers to a feasibility problem. The post-suffix η refers to the type of restriction linked to the human aspects, psychic and somatic, with η ∈ H and η ∈ {∅, σ, ϕ, σ ∪ ϕ}. The column "Type" indicates whether the problem is a feasibility problem (F), mono-objective (OP) or multi-objective (MOP). In addition to the above proposals, assembly line balancing problems with ergonomic conditions can also be treated as multi-objective problems.

An example

To illustrate the SALBP-1, the TSALBP-1 and the TSALBP-1-σ, we present the following example. Given a set of eight tasks (|J| = 8), whose operation times t_j (j = 1,…,|J|), required space a_j (j = 1,…,|J|), ergonomic risk F({j}) (∀j ∈ J) and precedence graph are shown in Figure 1, each task must be assigned to a single station satisfying the limitations: (1) c = 20 s; (2) A = 20 m; and (3) F_max = 60 e-s (ergo-seconds). Solving the SALBP-1, TSALBP-1 and TSALBP-1-σ, we obtain the results shown in Figures 2, 3 and 4, respectively. For the SALBP-1, the result (see Figure 2) is 5 workstations. For the TSALBP-1, on the other hand, the result is one workstation more than with the SALBP-1 (Figure 3). Finally, if we consider that the ergonomic factors are additive, we can group tasks taking this factor into account in addition to the cycle time and the area; we then obtain the result for the TSALBP-1-σ (Figure 4). As the example shows, depending on the limiting factors considered, the resulting number of stations will differ: obviously, a greater number of conditioning factors means a greater number of workstations.
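Before moving to the case study, the effect of the three problem variants in the example above can be sketched in a few lines of code. The sketch below is only an illustration: the task data are hypothetical stand-ins for the values of Figure 1 (which are not available here), and a simple greedy station-opening rule is used instead of an exact solver such as CPLEX, so the station counts it prints are not the paper's 5/6/7 result.

```python
# Greedy station-opening sketch for SALBP-1 / TSALBP-1 / TSALBP-1-sigma with
# hypothetical data. It assumes an acyclic precedence graph and that every
# single task respects the limits c, A and F_max on its own.
def assign_stations(tasks, preds, c, A=None, F_max=None):
    """Greedy assignment respecting precedence; tasks[j] = (t_j, a_j, F_j)."""
    assigned, stations = {}, []
    remaining = list(range(len(tasks)))
    time_left = area_left = risk_left = 0.0
    while remaining:
        # tasks whose predecessors have all been assigned already
        ready = [j for j in remaining if all(i in assigned for i in preds[j])]
        if not ready:
            raise ValueError("precedence graph must be acyclic")
        placed = False
        for j in ready:
            t, a, f = tasks[j]
            fits = (stations and t <= time_left
                    and (A is None or a <= area_left)
                    and (F_max is None or f <= risk_left))
            if fits:
                time_left -= t
                if A is not None: area_left -= a
                if F_max is not None: risk_left -= f
                assigned[j] = len(stations) - 1
                remaining.remove(j)
                placed = True
                break
        if not placed:                         # open a new workstation
            stations.append(len(stations))
            time_left, area_left, risk_left = c, (A or 0.0), (F_max or 0.0)
    return len(stations), assigned

# Hypothetical task set: (processing time t_j, area a_j, ergonomic risk F({j}))
tasks = [(12, 8, 30), (9, 6, 25), (15, 12, 40), (7, 5, 20),
         (11, 9, 35), (6, 4, 15), (14, 10, 45), (8, 7, 25)]
preds = [[], [0], [0], [1], [2, 3], [4], [4], [5, 6]]   # precedence graph

print("SALBP-1      :", assign_stations(tasks, preds, c=20)[0])
print("TSALBP-1     :", assign_stations(tasks, preds, c=20, A=20)[0])
print("TSALBP-1-sig :", assign_stations(tasks, preds, c=20, A=20, F_max=60)[0])
```

Tightening the limits can only make it harder for tasks to share a station, which is the qualitative effect the example and Figures 2-4 describe.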
Case study To evaluate the proposed model and to contrast the influence of constrains relative to the ergonomic factors on the number of workstations of the line, required for SALBP-1 and TSALBP-1, we have chosen a case study that corresponds to an assembly line from Nissan's plant in Barcelona. In fact, the 378 tasks (including the rapid test), that are required in the assembly of a motor (Pathfinder), have been grouped into 36 operations. After to set consistently the potential links, predecessors and successors, between the 36 operations, considering the potential links of the 378 original tasks, and taking into account a cycle time of 180 s; an available longitudinal area of 400 cm; and a maximum ergonomic risk of 400 e-s, we have solved, using the CPLEX solver, the three problems that are the focus of this study (SALBP-1, TSALBP-1 and TSALBP-1-σ). In table 2 we can see the optimal solutions obtained, and the need of more workers when are taken into account more realistic conditions in the assembly line problems. In addition we can see the process time of the operations (t), the required area (a), the risk factor (F) and the workstation where each task has been assigned, for each problem. In our case, 19 work teams are necessary when only is taken into account the limitation of the cycle time, 21 when the constraints of area are included and 24 when a maximum ergonomic risk must be respected at each workstation. Conclusions From the family of problems TSALBP, we propose an extension to these problems attending to the need to improve working conditions of workers in production and assembly lines. The result of this extension is the family of problems TSALBP_erg. Specifically, we formulate the problem TSALBP-1-σ, corresponding to the somatic risks, that considers the constraints of cycle time, available area and, in addition, the maximum ergonomic risk to which the workers, assigned to each station, may be subjected. Through a case study linked to Nissan, we observe that the improvement of the working conditions increases the minimum number of required workers to carry out the same work. By other hand, the reduction of the maximum ergonomic risk admissible, supposes a reduction of the labor cost due to injuries and absenteeism, whose valuation will be object studied in future research. Fig. 1 . 1 Fig. 1. Precedence graph of tasks. At each vertex we can see the tuple t j / a j / F ({ j}) corre- sponding to the task. Fig. 2 . 2 Fig. 2. Solution obtained by SALBP-1 (m = 5). Fig. 3 . 3 Fig. 3. Solution obtained by TSALBP-1 (m = 6). Fig. 4 . 4 Fig. 4. Solution obtained by TSALBP-1-σ (m = 7). Table 2 . 2 Obtained solutions by CPLEX from SALBP-1, TSALBP-1 and TSALBP-1-σ. 
j t a F P SALBP-1 TSALBP-1 TSALBP-1-σ 1 100 400 200 - 1 1 1 2 105 400 210 1 2 2 2 3 45 100 90 1 3 3 3 4 113 300 226 1, 2 3 3 3 5 168 400 336 1, 2, 4 4 4 4 6 17 150 34 2, 4, 5 5 5 5 7 97 250 194 6 5 5 5 8 50 200 100 2, 3, 7 5 6 6 9 75 200 150 2, 8 19 6 6 10 30 100 90 8 6 7 7 11 65 300 195 8, 10 6 7 7 12 35 350 105 10, 11 6 8 8 13 65 50 195 11, 12 7 8 8 14 115 300 345 12, 13 7 9 9 15 60 50 180 14 8 9 10 16 115 100 345 14, 15 8 10 11 17 60 150 120 13, 14, 16 9 10 12 18 105 250 210 16, 17 9 11 12 19 60 150 120 18 10 11 13 20 100 400 200 18, 19 10 12 14 21 100 400 200 19, 20 11 13 15 22 75 200 150 21, 22 11 14 16 23 75 175 225 21, 22 12 14 16 24 105 150 315 23 12 15 17 25 15 100 45 23, 24 17 15 17 26 35 150 105 24, 25 19 15 20 27 175 250 350 24 13 16 18 28 5 0 15 27 14 17 18 29 165 250 330 27, 28 14 17 19 30 5 0 15 27, 28 14 17 19 31 115 150 230 5, 29 15 18 20 32 60 200 120 29, 30, 31 15 18 21 33 85 200 170 5, 31 16 19 22 34 70 200 140 32 16 19 21 35 160 375 320 31, 33, 34 17 20 23 36 165 150 330 35 18 21 24 Acknowledgment The authors appreciate the collaboration of Nissan (NSIO). This work was funded by project PROTHIUS-III, DPI2010-16759, including EDRF funding from the Spanish government.
14,071
[ "1002172", "1002175", "1002173" ]
[ "85878", "85878", "85878" ]
01472272
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472272/file/978-3-642-40352-1_53_Chapter.pdf
Jessica Bruch email: [email protected] Monica Bellgran email: [email protected] Critical Factors for Successful User-Supplier Integration in the Production System Design Process Keywords: Production system, manufacturing industry, equipment supplier, integration Introduction For manufacturing companies active on the global market, high-performance production systems that contribute to the growth and competitiveness of the company are essential. Among a wide range of industries it is increasingly acknowledged that superior production system capabilities are crucial for competitive success. However, today's focus is on the operations phase, i.e. the serial production of products, rather than on the preceding design of the production system. As a result, production systems are generally designed relatively shortly before their installation [START_REF] Bruch | Management of Design Information in the Production System Design Process[END_REF]. That is also the reason why the implementation of lean production in the manufacturing industry is largely directed towards improving the operational performance of the production system; this is true for the global manufacturing industry worldwide. Lean production is the combined set of philosophy, principles and tools that companies use for managing their production systems. At the end of the day, however, the real root cause of many problems and losses in production lies in issues that emanate from the design process of either the product or the corresponding production system [START_REF] Bellgran | Säfsten: Production Development: Design and Operation of Production Systems[END_REF]. The potential of gaining a competitive edge by improving the way the production system is designed is hence ignored, although it is a well-known fact that the production system design process is the foundation for achieving a high-performance production system [START_REF] Slack | Operations Management[END_REF]. Consequently, if the production system is not designed in a proper way, this will eventually end up with disturbances during both start-up and serial production, and hence low performance. The purpose of this paper is to identify and discuss critical factors facilitating an integrated user-supplier approach when designing production systems, in order to support the creation of better and more sustainable production system solutions. 2 Frame of reference The production system design process comprises several distinct phases, from analysis and solution generation (synthesis) to the detailed design and realization of the production system, and the equipment supplier can be involved to a different extent in each of these phases [6]. An integrated user-supplier approach to production system design can be considered from the perspective of open innovation, in which firms use internal and external knowledge, ideas and paths to market when they seek to maximize returns from their development activities. The study of Enkel et al. [START_REF] Enkel | Open R&D and open innovation: exploring the phenomenon[END_REF] shows that loss of knowledge, higher coordination costs, loss of control and higher complexity are mentioned as frequent risks connected to open innovation activities.
Thus, by working together with an equipment supplier, a manufacturing company faces the risk that knowledge about core production processes is transferred to competitors via the equipment suppliers [START_REF] Lager | Equipment supplier/user collaboration in the process industries: In search of enhanced operating performance[END_REF]. As such, it is useful to review critical factors for successful user-supplier integration when designing the production system. Critical factors Yin and Ning [START_REF] Yeo | Managing uncertainty in major equipment procurement in engineering projects[END_REF] developed a framework suggesting that an inter-enterprise information system, a partnering relationship among the participating organizations and an integrated dynamic planning process are critical factors. Focusing on the collaboration between equipment suppliers and users Rönneberg Sjödin et al. [START_REF] Rönnberg Sjödin | Open innovation in process industries: a lifecycle perspective on development of process equipment[END_REF] argue that more resources should be spent in the early phases as it is important to facilitate intensive collaboration with equipment suppliers. Otherwise, mechanisms important for integration such as meetings, workshops, and teambuilding are limited, which may result in strained relationships, which cannot be recovered at later stages. Another factor important to consider is an adequate management of information flow between the equipment supplier and the user [START_REF] Bruch | Design information for efficient equipment supplier/buyer integration[END_REF]. In order to realize the benefits of collaboration with an equipment supplier, it is important that the exchanged information is cautiously tailored to the specific needs of the partner. Organizing the collaboration in a formal process and appointing a skilled contact person supports an effective flow of information [START_REF] Bruch | Design information for efficient equipment supplier/buyer integration[END_REF]. Further, even in situations where the equipment supplier has a major role in the design of the production equipment, it seems necessary to maintain certain competencies also within the manufacturing company. Hobday et al. [START_REF] Hobday | Introduction[END_REF] point out that the trend towards outsourcing has made it even more essential to keep in mind the required in-house competences for system integration. Von Haartman and Bengtsson [START_REF] Von Haartman | Manufacturing competence: a key to successful supplier integration[END_REF] conclude that to be able to benefit from supplier integration, manufacturing companies have to possess corresponding in-house competences. When integrating the supplier in the design work, an appropriate use and understanding of contracts and governance mechanisms is important. Lager and Frishammer [START_REF] Lager | Equipment supplier/user collaboration in the process industries: In search of enhanced operating performance[END_REF] emphasize the need to consider the "nature" dimensions (proprietary/non-proprietary) in order to avoid the unintentional diffusion of important inhouse know-how to competitors. Incentives, authority and trust are important tools to govern complex procurement situations involving several actors [START_REF] Olsen | Governance of complex procurements in the oil and gas industry[END_REF]. 
Research Methodology The research is founded on one real time case study at a Swedish manufacturing company, which gives the possibility of being close to the data, thus enabling a close-up view on patterns, and how they evolve [START_REF] Leonard-Barton | A Dual Methodology for Case Studies: Synergistic Use of a Longitudinal Single Site with Replicated Multiple Sites[END_REF]. In line with previous production system design research, our view is that the case study provided a good possibility to capture a more complete and contextual assessment of the process of integrating equipment suppliers in the design work. Clearly, applying a single case study approach suffers from problems of generalization [START_REF] Yin | Case Study Research: Design and Methods[END_REF]. However, the single case study results created here do not rely on statistical generalization but on analytical generalization, i.e. generalization towards theory, which is a potential provided by the case study methodology. The study was part of a new product development project, which required the design of a new production system since the new product could not be assembled at the existing production line but required the design and building of new production equipment. The unit of analysis was the production equipment acquisition project at the manufacturing company in which the equipment supplier played a key role in the design and building of the production equipment. Data was collected between November 2009 and August 2011, where actions and events were observed in real time for 37 days. Overall, being at the company was important for the data collected. For example, on several occasions, project members discussed critical aspects and possible problem solutions at greater length during the lunch break than in project meetings. Data were collected from multiple sources of evidence including passive and active observation, semi-structured interviews and documents aiming at data triangulation. In practice, the same problem or fact was addressed by more than a single source of evidence during the data collection. The collected data has been analysed according to the guidelines provided by Miles and Huberman [START_REF] Miles | Qualitative Data Analysis: An Expanded Sourcebook[END_REF], i.e. data reduction, data display and conclusion drawing/verification. In order to reduce and display the data in an appropriate way, directly after each time of data collection the findings were summarised and transferred into a worksheet for further analysis. Further, a detailed description of the case was developed. The results from the case study were compared with and related to the existing theory, i.e. enfolding the literature. Empirical findings The background to the project studied was the need to replace the existing assembly system as a consequence of the design of a new product generation. The idea generation and concept iteration for the production system was carried out under severe time pressure as the time plan for product design was not adhered to and the resources allocated were inadequate at the beginning while the date for start of production was unchangeable. At the manufacturing company, most of the internal work regarding the design of the production equipment was carried out by the production engineering department and the project leader responsible for the industrialization, i.e. the process required when transferring the product design into start of production. 
The selected equipment supplier was located in Sweden but in another city about 500 km away. The equipment supplier had wide experience as project supplier to the automotive industry and was thus aware of the particular requirements of the automotive industry. Since there was little room for concept iterations, the case study company commissioned one equipment supplier to design a concept solution between November and December 2009. In parallel, two internal solutions were created based on the earlier ideas of the production engineering manager. The three different concepts were eval-uated and synthesized into one final solution, including solutions for production equipment and material supply aspects. The design of the production equipment followed a formalized process in which the case study company had mapped out different steps or activities that the equipment suppliers had to complete at various points in the process. The activities undertaken in each stage incorporated the transfer of new information from both sides. Since the company normally involves equipment suppliers a considerable amount of standards and rules were used such as the technical requirement specification. However, also new standard documents were created. For example, the manufacturing company put a lot of effort on collecting and documenting more project specific requirements. To coordinate the work between the user, i.e. the case study company, and the equipment supplier, a time plan was created. The time plan included not only key dates to be kept in the project such as when the factory acceptance test or site acceptance test should be carried out but also several verification occasions, which should take place under the project progress. The verification occasions were summarized in a verification plan, which was used to outline when and how the fulfillment of the specified requirements and the progress of the design and development of the production equipment should be followed up and assessed. In addition, the equipment suppliers and the manufacturing company appointed contact persons. The contact person appointed by the manufacturing company was a production engineering manager who had experience from previous development projects and was also the system designer of the assembly line used for the previous product generation. Further, during the production equipment acquisition project a number of meetings comprising different participants from the user and supplier company took place. Depending on the purpose of the meetings, employees from different functions with different knowledge were invited. For example, assembly operators attended the discussion about the screen size at each workstation, while employees from the information technology department contributed to the decision about the operating system. The meetings at the manufacturing company offered an opportunity for the equipment supplier to study the manufacturing plant and the assembly of the actual product, to collect information about on-going manufacturing and the production processes connected to the targeted production system part, i.e. the equipment to be built. Critical factors for successful integration The potential of integrating equipment suppliers in the production system design process are compelling. In the studied project, the integration of the equipment supplier resulted into an innovative and new solution based on the access to and application of new technology. 
Although the potential benefits can be substantial, integrating equipment suppliers in the production system design process is sometimes an uncomfortable way of doing business. In order to achieve successful integration and thus better production systems a total of 10 critical factors were identified, see Table 1. This factors help to identify reduce potential barriers and expand the relationships between the partners. Table 1. Critical factors for improved user-supplier integration in order to accomplish better production system solutions Factor How it contributes to user-supplier integration Human factors Appoint skilled contact person The contact person enables easily access to missing information and is also used to discuss critical issues. Assign suitable resources Resources needs to be available to engage in the design process and thus to make integration possible. Build trust The manufacturing company needs to have confidence in the equipment supplier's capabilities. Core team A cross-functional team ensures a holistic perspective and speeds up the decision making. Clear and explicit goals should be established. Project management factors Contract Regulatory issues are clearly defined and help to minimized concerns that either part will take advantage. Formal approaches Formal methods such as the process applied, documentation and planning facilitates coordination and synchronization of work activities. Frequent face to face meetings Meetings are used to reduce any equivocality surrounding the process of designing the production system and help to align the culture of the two partners. Information flow Open communication channels are required to improve decision making and ensure that all partners are updated. Design factors Joined idea generation Joined idea generation contributes to clear directions and expectations for the project. The creativity of all individuals involved can be utilized in both concept and detailed design. Further, each partner can identify possible benefits. Specified requirements Specifying as well as understanding requirements of the user contributes to a solution in line with the needs. The findings highlighted the importance of the humans involved in the design process from both partners, i.e. the equipment supplier and the manufacturing company. The results of the case study indicate that it is particularly important to have a skilled contact person at the manufacturing company, which is in line with earlier research [START_REF] Hobday | Introduction[END_REF][START_REF] Von Haartman | Manufacturing competence: a key to successful supplier integration[END_REF]. This person can be compared to the role of a gatekeeper in new product development projects, i.e. a person which can overcome barriers based on differences such as terminology, norms, and values [START_REF] Tushman | External Communication and Project Performance: An Investigation into the Role of Gatekeepers[END_REF]. The contact person can be considered as a key communicator between both organizations and provide a link between the manufacturing company and the equipment supplier. To have the right competence at the manufacturing company is important to evaluate and judge the appropriateness of the solutions proposed by the equipment supplier. Further, the empirical findings show that the project management is critical for successful user-supplier integration [START_REF] Bruch | Management of Design Information in the Production System Design Process[END_REF]. 
The coordination between the equipment supplier and the manufacturing company are likely to benefit from a formal approach [START_REF] Bruch | Design information for efficient equipment supplier/buyer integration[END_REF]. Following this recommendation leads to different initiatives may be established. Effort should be placed on planning activities and establishing a standardized process and documents. The reason for not being able to coordinate the work of the two partners and being late is because the underlying means used are not constructed to support the work at physical dispersed settings. On the other hand by applying for example a structured process gives advices on what work activities needed to be completed at different points in time and what decisions that needed to be accomplished at different points in time. In addition, the findings indicate that in line with prior research [e.g. 10] efforts should also be placed on communicating the expected production system solution internally, i.e. within the own (user) organisation including product designers and the end-user such as operators, and support functions like production engineers or maintenance engineers. Thus, face-to-face meetings could be used to invite also other people outside the core team. By involving end-users as early as possible in the process their input and feedback could be considered when designing the production system thus avoiding late design changes. Often equipment suppliers are involved when the manufacturing company has already developed its scope of supply including a conceptual solution and the requirement specification. However, the potential benefits that can be achieved by integrating the equipment supplier in the design process are minimised if the equipment supplier is included at this late stage. Thus, there is a need to include equipment suppliers earlier, while at the same time negative effects such as the risk that key competences are distributed to competitors should be minimised. This means that there is a need to look at intellectual factors focusing on achieving a good balance between competition and co-operation. Hence, technical aspects of the buying process need to be addressed [START_REF] Lager | Equipment supplier/user collaboration in the process industries: In search of enhanced operating performance[END_REF][START_REF] Olsen | Governance of complex procurements in the oil and gas industry[END_REF]. Conclusions The purpose of the paper was to identify and discuss critical factors facilitating an integrated user-supplier approach when designing production systems. From the rich database of the real-time case study, a total of 10 critical factors were identified, which in one way or another have an impact on integration between user and supplier. Underlying these factors were three categories: humans, project management and design factors. The three factors are thus related to existing theory. However, what we add is a description of the specific details of user-supplier integration when designing the production system, an issue which has only been addressed to a limited extent. The research presented in this paper should be seen as a first step to improve the integration between users and suppliers in the design of the production system and clearly more research is needed. For example, our empirical findings revealed somewhat unexpectedly that despite physical distance and organizational boundaries, there were no major coordination problems between the partners. 
Thus, a better understanding of the barriers to supplier-user integration in the design of the production system is needed in order to identify ways to overcome them and thereby create better and more sustainable solutions.
20,986
[ "991407", "1002176" ]
[ "301185", "301185" ]
01472273
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472273/file/978-3-642-40352-1_54_Chapter.pdf
Elzbieta Pawlik email: [email protected] Winifred Ijomah Jonathan Corney Current State and Future Perspective Research on Lean Remanufacturing -Focusing on the Automotive Industry Keywords: Remanufacturing Process, Lean Remanufacturing, Uncertainties des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Increased awareness of environmental degradation has precipitated legislative requirements that have drawn attention to product recovery options such as remanufacturing, reconditioning and repair. Of these options, remanufacturing is the only one with which a used product can be brought back to a condition at least equal to that of a new product in terms of quality, performance and warranty [START_REF] Ijomah | A model-based definition of the generic remanufacturing business process[END_REF]. As such, remanufacturing represents a good opportunity for sustainable development. It retains not only the raw material -as is the case with recycling -but it can also keep a large part of the value added to the raw material during the original manufacturing process. Retaining the shape of raw material avoids the need for further manufacturing processes that are C0 2 emitting [START_REF] Giuntini | Remanufacturing: The Next Great Opportunity for Boosting U.S. Productivity[END_REF]Gaudette 2003, Ijomah 2008) and, at the same time, provides significant energy savings by using remanufactured components that require 50-80% less energy to produce than newly manufactured parts [START_REF] Lund | Remanufacturing: the experience of the USA and implications for the developing countries[END_REF]). In addition, remanufacturing creates a new market for employment. According to [START_REF] Golinska | Remanufacturing in Automotive industry: Challenges and limitations[END_REF] the automotive industry is recognised as one of the most environmentally aware manufacturing sectors. This is illustrated by the fact that around 90% of the total worldwide remanufacturing industry belongs to this sector [START_REF] Kim | State of the Art. And Future Perspective Research on the Automotive Remanufacturing -Focusing on Alternator & Start Motor[END_REF]. This article examines lean remanufacturing practices within the automotive industry by describing the remanufacturing process and its key problems. In addition, the lean manufacturing approach within a remanufacturing context is reviewed. A case study based on an automotive remanufacturing company is presented to illustrate the current state of the research in the application of lean manufacturing within the remanufacturing industry (both positive and negative implications are identified). Finally, the paper ends with conclusions and provides recommendations for future research priorities and directions related to the application lean remanufacturing approach. Remanufacturing Process The remanufacturing process usually begins with disassembling used products, known as 'cores', into components, which are then cleaned, inspected, and tested to verify that they meet the required quality standards to be reused without further work. Those that do not meet the requirements can be reprocessed via remanufacturing. If this is not possible due to technological issues or economic reasons, the substandard components are put towards other product recovery options -i.e. recycling -and are replaced with new parts. 
The remanufactured parts are then reassembled - often together with new parts - into the product [START_REF] Giuntini | Remanufacturing: The Next Great Opportunity for Boosting U.S. Productivity[END_REF]. Depending on the product type and the volume required, the remanufacturing steps presented above can be undertaken in a different sequence, or some may be omitted if circumstances permit. For example, inspection can be done before disassembly and cleaning in order to detect damage and select cores that cannot be remanufactured [START_REF] Sundin | Product and Process design for Successful Remanufacturing[END_REF]. However, some general characteristics remain valid in every remanufacturing process. For example, disassembly always precedes reprocessing, and reassembly always succeeds disassembly (Östlin 2008). Figure 2 presents the possible steps (in no specific order) that can be taken during the remanufacturing process. The route of each item will be unique to its condition, while some operations may not be necessary at all. Additionally, the quantity of new components required will depend on the quality of the cores - the primary source for supplying remanufactured parts [START_REF] Giuntini | Remanufacturing: The Next Great Opportunity for Boosting U.S. Productivity[END_REF]. The significant level of uncertainty in the quality of incoming cores is not the only problem that makes planning and controlling the remanufacturing process more difficult. It is difficult to predict when products will stop fulfilling customer needs, and therefore it is often difficult to ascertain exactly the timescale for acquiring the necessary cores. This makes the remanufacturing process less predictable than conventional manufacturing [START_REF] Lundmark | Industrial challenges within the remanufacturing system[END_REF]. The variability of products is another challenge that occurs in the remanufacturing process in the automotive industry. This is the result of continuous upgrading of the products due to the use of new solutions and technologies or the elimination of design errors [START_REF] Seitz | Automotive Remanufacturing: The challenges European remanufacturers are facing[END_REF]. In addition, since products are not typically designed for disassembly, the remanufacturing process is more complex. Components that were in good condition can be damaged during the disassembly operation, resulting in higher operational costs [START_REF] Giuntini | Remanufacturing: The Next Great Opportunity for Boosting U.S. Productivity[END_REF]. As a result, current remanufacturing processes are more complex and less predictable than conventional manufacturing and require high levels of inspection and testing to achieve high-quality products. This can lead to higher remanufacturing costs and longer remanufacturing lead-times. Therefore it is important to find the right strategy to make the remanufacturing process cost effective and thereby contribute to the overall profitability of the remanufacturing business. Lean Manufacturing One way of overcoming the difficulties involved with the remanufacturing process and increasing both efficiency and productivity is to apply the principles, tools and methods of lean manufacturing [START_REF] Seitz | Automotive Remanufacturing: The challenges European remanufacturers are facing[END_REF][START_REF] Kucner | A socio-technical study of Lean Manufacturing deployment In the re manufacturing context[END_REF]. 
Though lean manufacturing philosophy has its roots in the automotive manufacturing industry, [START_REF] Womack | Lean thinking: Banish waste and create wealth in your corporation[END_REF] state that the principles of this approach can be successfully applied to other sectors. They highlight the principles that define lean thinking as (2003): precisely specify value by specific product; identify the value stream for each product; make value flow without interruptions; let the customer pull value from the producer; and pursue perfection. The entire range of lean manufacturing tools and methods were developed for the practical application of lean thinking. Those tools and methods include: Value Stream Mapping to help generate ideas for process redesign; 5S that allows effective organization of work area; and Kanban, which limits work in process and regulates the flow of goods between the factory, suppliers and customers. Lean Remanufacturing The application of the lean manufacturing approach within a remanufacturing context -Lean Remanufacturing -has only recently gained the attention of researchers and practitioners. Hence, there is little literature on this subject. The combination of remanufacturing and lean manufacturing offers a good opportunity to increase process efficiencies within the remanufacturing industry (Seiz 2007, [START_REF] Kucner | A socio-technical study of Lean Manufacturing deployment In the re manufacturing context[END_REF]). The first reported study is a case study conducted by Amezquita et al (1998) that focuses on an independent automotive remanufacturer and specifically analyzes the process of remanufacturing clutches. Their analysis shows how the lean manufacturing approach can enhance the effectiveness of the remanufacturing process by developing techniques for lean automation and different methods for the reduction of setup times. [START_REF] Fargher | Lean: Dealing with Eight Wastes[END_REF] states that lean manufacturing applied to the remanufacturing operations brings various benefits thanks to the identification and elimination of nonvalue-added activities through continuous improvement. These benefits include [START_REF] Seitz | Automotive Remanufacturing: The challenges European remanufacturers are facing[END_REF]): Reduction of lead-time; Reduced work in process; Improved on-time shipments; Reduced floor space; and Improved quality. Research Methodology The research addresses efforts to improve the remanufacturing process through the application of lean manufacturing practices. As a relatively new topic, there is limited understanding of lean remanufacturing. In order to identify key research challenges and needs within this immature research area we must first understand the current state of the application of lean manufacturing within the automotive remanufacturing industry. A case study is a relevant research strategy that enables a rich understanding of the research context as well as the process being enacted [START_REF] Saunders | Research Methods for Business Students[END_REF]. In this instance, the case study took place in an automotive remanufacturing facility in the United Kingdom. During the visit, observation of the shop-floor and semi-structured interviews with Managers were conducted. Case Study In order to achieve a better understanding of the lean manufacturing approach within an automotive remanufacturing context, a case study was undertaken. 
The study is limited to analysis of operational shop-floor activities (focusing on the application of shop floor tools) in automotive remanufacturer, Caterpillar Remanufacturing Ltd (CatReman). Three types of products are remanufactured at CatReman: engines, turbines, and turbochargers. Lean manufacturing methods were first introduced into the company in 2005 and since then the principles and tools have been gradually implemented on a broader scale. Case Study Findings. CatReman started their lean application by introducing a lean manufacturing tool called Visual Control 1 within the facility's most critical areas. Thereafter, the lean approach was implemented within the whole facility, starting with creating current and future state Value Stream Mapping 2 , before tools and methods -Pull system 3 (only from customer) and Overall Equipment Effectiveness 4 -were applied. The facility also implemented Total Productive Maintenance 5 for critical machines and is working towards this for all major machinery. Visual Control is of particular importance for CatReman during the visual inspection process. According to Errington (2009) the inspection step is crucial for remanufacturing. The incorrect assessment of a core or component can cause unnecessary additional operational costs. As remanufacturing is strongly affected by variation in products, those tasked with assessment require precise knowledge of each variation. At CatReman, visual boards are employed that display sample components and give visual and written descriptions of the critical areas for inspecting as well as the acceptable criteria. They are located near to the inspection, machining and the assembly areas, which also have standard worksheets giving the employees the same information. This means that if an operator is unsure of whether the component he or she receives is good enough to remanufacture, he or she can check it at the visual display board. This also serves to remind operators of the importance of quality. CatReman is also using other forms of visual display boards, such as section (display metrics specific to the section in which they are located) and facility boards (display metrics for the whole facility) to measure, communicate and control the following metrics: people (largely safety and training); quality (warranty to sales, test rejects, etc.); speed (on time delivery, performance to TAKT time etc.) and cost (unplanned overtime, etc.). The top ten most common defects are also presented on the section metrics boards. All of these visual control tools are used to aid the machine operator in the lean process and act as a reminder of the most prevalent quality issues as part of general communications. Moreover, the plant's layout was significantly changed to be more lean. Employees have a meeting with managers every day, in which they discuss the previous day's 1 Visual Control -'is any communication device used in the work environment that tells us at a glance how work should be done and whether it is deviating from the standard' (Liker,2004 ) 2 Value Steam Mapping -'captures processes, material flows, and information flows of a given product family and helps to identify waste in the system.' [START_REF] Liker | The 14 Principles of the Toyota Way: An Executive Summary of the Culture Behind TPS[END_REF] 3 Pull system -'the preceding process must always do what the subsequent process says' [START_REF] Liker | The 14 Principles of the Toyota Way: An Executive Summary of the Culture Behind TPS[END_REF]). 
4 Overall Equipment Effectiveness -'a measure of equipment uptime' [START_REF] Liker | The 14 Principles of the Toyota Way: An Executive Summary of the Culture Behind TPS[END_REF]) 5 Total Productive Maintenance -method for improving availability of machines through better utilization of maintenance and production resources production and the coming days production, and disseminate any local or corporate information, such as visits to the factory. There was also the opportunity for employees to voice comments and give feedback to their manager. Each identified problem is investigated and resolved by using the Ishikawa diagram, 5 why and histograms 6 . These activities have improved the operations within CatReman by: • Reducing work in process; • Increasing production control; and • Providing better service (to increase ability to meet deadlines). The facility metrics show the benefits of lean manufacturing and are reported corporately each month. Despite the advantages gained from implementing lean manufacturing tools and methods identified above, it was observed that not all lean tools and principles were easily and successfully applied within the remanufacturing context of CatReman. The pull system within operations is difficult to apply because of the high variability and low repeatability of products. It is also hard to use takt time 7 due to the uncertain condition of cores. Components have to go through different operations to meet the required specification. Some of them will require more time to pass each step and in some cases some operations will be omitted. Additionally, the uncertain condition of components (particularly unique ones) might cause a delay in the reassembly step, because of the need to wait for new components. Because of the high variability of products it is not cost effective to keep stock of all new components that may be needed. However, it was observed that there was a high inventory level of used products. This is a result of the uncertainty in the quantity and timing of incoming cores, i.e. difficulties in predicting the types of cores and when they will arrive at the facility. During the interviews it was found that implementation of 5S is also difficult, since operations on the various components are carried out at the same workplace. As a result, there is a need to keep many different tools at a workstation, not all of which are required regularly. However, reducing the number of tools can cause waste in motion as a result of continuously picking up tools from the store when required. Returned products are usually dirty and this also makes it difficult to keep workplaces clean. The interviews with Managers identified that Caterpillar has implemented Standard Operating Procedures 8 for all remanufacturing operations -some general (for example for cleaning and inspecting bolts) and some specific to a particular product (for example remanufacturing a cylinder head). They also have Standard Operating Procedures (SOPs) for other processes such as machine maintenance and daily operator checks. This means they can give SOPs to the operator but if additional salvage is required they can not cover it this way. A part might need additional (and not necessarily costeffective) work because it is not possible to buy new parts (the engine is not in current production) or because the lead-time for the new part is so long. In cases such as this, sometimes other similar used parts are adapted to make the part that is requiredbreaching SOPs. 
In this way, SOPs mitigate some of the problems but are not entirely effective. Conclusion The literature review and case study results have confirmed that the lean manufacturing approach has brought some important benefits to the automotive remanufacturing sector. On the other hand, the case study also identified that some of the lean manufacturing tools and methods cannot be implemented successfully within automotive remanufacturing operations. Moreover, it was identified that the uncertainty involved in incoming cores (particularly with respect to quality) might be the key problem in the application of lean manufacturing tools on an automotive remanufacturing shop floor. A similar observation was reported by Östlin and Ekholm (2007) 9 based on their analysis of the toner cartridge remanufacturing company Scandi-Toner AB. It was observed that the variable processing times and uncertainties in recovered materials limited the implementation of a lean manufacturing approach within that remanufacturing context. As variable processing times are a result of uncertainty in the quality of incoming cores [START_REF] Lundmark | Industrial challenges within the remanufacturing system[END_REF], and the same can be said about the uncertainties in recovered materials, the conclusion might be drawn that the uncertainty of incoming cores is one of the main limiting factors for the application of lean manufacturing tools within the remanufacturing context - for every type of product. Despite the fact that the difficulties that occur within the remanufacturing process are product type-dependent [START_REF] Sundin | Product and Process design for Successful Remanufacturing[END_REF], the factors that limit the implementation of lean manufacturing arise from its origins. Lean manufacturing has its roots in the Toyota Production System and was developed in the conventional manufacturing sector, where uncertainty involved with input is not such an important issue. The lean manufacturing approach was not developed to apply to the variable conditions of remanufacturing. The implementation of some of the principles and tools of lean manufacturing within remanufacturing may require adaptation to changeable cores or the elimination of identified constraints. This study provided empirical evidence that identified both positive and negative implications of the role of lean manufacturing within a remanufacturing context. However, to take any step towards improving the application of lean remanufacturing, one must acknowledge that one case study is not sufficient and more research is necessary. In particular, there is a need to confirm whether uncertainties are indeed the main constraint. Identifying other factors that limit implementation and classifying their relative significance would achieve this. Fig. 1. The generic remanufacturing process. (Source: Sundin 2004) Ishikawa chart, 5 Why and histogram are problem-solving tools. Takt time - 'time required to complete one job at the pace of customer demand', [START_REF] Liker | The 14 Principles of the Toyota Way: An Executive Summary of the Culture Behind TPS[END_REF] An analysis of the possibility of implementing lean manufacturing methods at a toner cartridge remanufacturer.
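To make the takt-time difficulty reported in the case study above more concrete, the following minimal sketch simulates how variation in incoming core quality turns into variable processing times. It is an illustration only: the three-step routing, the time ranges and the takt value are assumptions, not data from the case company.

```python
import random

random.seed(42)

TAKT_TIME = 12.0  # minutes per unit, set by customer demand (assumed value)

def process_one_core():
    """Return the total reprocessing time for one core of random condition."""
    condition = random.random()               # 0 = like new, 1 = heavily worn
    disassembly = random.uniform(2.0, 4.0)    # always performed
    cleaning    = random.uniform(1.0, 2.0) * (1.0 + condition)
    if condition < 0.7:                       # core can be reprocessed
        reprocessing = random.uniform(3.0, 6.0) * (1.0 + condition)
    else:                                     # too worn: replace with new part
        reprocessing = random.uniform(0.5, 1.0)
    return disassembly + cleaning + reprocessing

times = [process_one_core() for _ in range(10_000)]
mean_t = sum(times) / len(times)
over_takt = sum(t > TAKT_TIME for t in times) / len(times)

print(f"mean cycle time      : {mean_t:.1f} min")
print(f"share of cores > takt: {over_takt:.1%}")
```

Even when the mean cycle time sits below the takt time, the long tail produced by poor-condition cores is what disturbs a pull system, which is consistent with the constraint identified in the case study.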
20,703
[ "1002177" ]
[ "13192", "13192", "13192" ]
01472274
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472274/file/978-3-642-40352-1_55_Chapter.pdf
Benjamin Knoke Thorsten Wuest Klaus-Dieter Thoben Understanding Product State Relations within Manufacturing Processes Keywords: product state, manufacturing process, (inter-)dependencies, relations ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction When it comes to company requirements, manufacturing companies are experiencing more and more challenges from their customers towards product and information quality [START_REF] Kovacic | Application of the Genetic Programming for Increasing the Soft Annealing Productivity in Steel industry[END_REF]. At the same time, the manufacturing processes themselves are becoming more complex, as they are no longer being carried out at one single location [START_REF] Seifert | Unterstützung der Konsortialbildung in Virtuellen Organisationen durch perspektives Performance Measurement[END_REF]. Business success of manufacturing companies is directly based on the quality of their processes, so there is a need for constant process improvement [START_REF] Linß | Qualitätsmanagement für Ingenieure[END_REF]. One step towards this goal is to increase the transparency of the processes, which in turn increases understanding of them. The product state based view focuses on describing an individual product along an industrial manufacturing process, including the state changes and information involved. It is based around the description of the product through its product state. The product state within a manufacturing process can be described at any time with holistic knowledge about its relevant characteristics [START_REF] Wuest | Der Werkstückzustand im Informationsmanagement[END_REF]. Despite this deterministic approach, holistic knowledge concerning all state characteristics is neither worthwhile nor feasible. The reasons for not considering a state characteristic can be divided into three groups [START_REF] Wuest | State of steel products in industrial production processes[END_REF]. They can either be technical (e.g. not measureable or measureable by destroying the product), financial (e.g. measurement is too costly), or caused by a knowledge gap (e.g. state characteristic is not known). However, some state characteristics can be characterized as relevant regarding their impact on the manufacturing process and the product state. One way to identify relevant state characteristics is whether they include crucial information needed for each manufacturing process step [START_REF] Wuest | State of steel products in industrial production processes[END_REF]. Therefore, a product state characteristic that neither impacts the manufacturing process nor influences other product state characteristics may be disregarded. Knowing what the relevant state characteristics do can improves transparency and increases understanding of the manufacturing process itself. The state characteristics are often not independent, but relate to each other and form a complex (manufacturing) system, as well. This paper focuses on the understanding and structure of these relationships between product state characteristics. State Characteristics within Manufacturing Processes First explaining the problem with an initial modeling approach, this chapter then focuses on the relationships between state characteristics in manufacturing processes and in opportunities of application. Initial Modeling Approach A generalized model of a manufacturing process is shown in Fig. 1. 
Product states (S) frame the process steps (P), through which the product is then described by discrete product state characteristics (SC). The term relation describes the general connections between SCs. These relations can either be one-directional (dependent, [START_REF]TheFreeDictionary.com Großwörterbuch Deutsch als Fremdsprache[END_REF]) or bi-directional (interdependent, [START_REF] Miller | Dictionary entry overview: What does interdependency mean?[END_REF]). Fig. 1. Process model to visualize the relations between state characteristics The parameters of the manufacturing process also shape SCs. SCs depend partially on process parameters (e.g. cutting speed, damping pressure) of previous process steps. As shown in Fig. 1, these production steps are framed by previous and following states. The extent of the process steps has to be defined according to the modeling degree. It is possible to insert the product states into the process flow after each change of the product (or added value), but it is simpler and more reasonable to merge similar activities according to the modeling focus The concept becomes significantly more complex through the integration of process parameters into it and will need to be elaborated in further research. An analytical approach towards the modeling of the relations between state characteristics is described in the following section. Characterization of Possible Relations between State Characteristics Interdependencies can only occur within a definite product state while dependencies cannot go against the process flow, so any potential shapes of the dependencies and interdependencies can be reduced. This is based on two axioms regarding the temporal restrictions of these connections:  Dependencies can never go against the process flow, since a state characteristic always has an existing value that only past or present effects can influence.  Interdependencies can only exist between state characteristics of the same state and time, since a future effect cannot impact the past. If a decision within the manufacturing process is considered because of an upcoming event, it is in fact not influenced by the future event but by expected requirements and other information existing at the present time of the decision. For example: A car within a manufacturing process is painted red not because a customer is expected to react positively to this specific color at the moment of exchange, but because he had ordered a red car in the past, and this information was already useable during the manufacturing process. Analytical Approach and Different Types of State Characteristics As described within the previous section, a SC is also dependent on SCs from previous states. These cross-state relations can add up and become very complex. From an analytical perspective, the relations of SCs can be characterized as mathematical functions. For example, the dependency of a state characteristic SC1 on another state characteristic SC2 is expressed in the term SC1 = f (SC2). If interdependency between these two state characteristics exists, they are described by a common function f (SC1, SC2). These functions can be described either by a mathematical term (e.g. the mass of a cylinder: m = ρ * l * d² * π) or a text (e.g. the overall error ratio is 3% in the dayshift and 5% in the nightshift). Fig. 2. Possible characteristics of state characteristic dependencies If dependencies between three or more state characteristics exist, four different characteristics can be identified. 
These types are visualized in Fig. 2. In complex models, these types may appear in combination:  State characteristics with discrete dependencies have independent influence on another state characteristic. This occurs on the condition of additional process parameters (x,y). Since SC 3 within the functions SC 3 = f 1 (SC 1 ) and SC 3 = f 2 (SC 2 ) could be eliminated, therefore f 1 (SC 1 ) = f 2 (SC 2 ) would imply a direct connection. This causes the need of additional process parameters, which influence each function SC 3 = f 1 (SC 1 , x) and SC 3 = f 2 (SC 2 , y).  Linked dependencies are another form of the connection between state characteristics. In this case, the combination of two or more state characteristics impacts another. If two state characteristics SC 1 and SC 2 influence SC 3 within a linked dependency, they share an interdependency f 1 (SC 1 , SC 2 ), and SC 3 can be described by the common function SC 3 = f 1 (f 2 (SC 1 , SC 2 )).  The sequence of multiple dependencies is defined as lined dependencies. If the dependencies SC 2 = f 1 (SC 1 ) and SC 3 = f 2 (SC 2 ) exist, they can be merged into a function SC 3 = f 2 (f 1 (SC 1 )).  Finally a state characteristic can also influence two or more other state characteristics. These split dependencies share a common origin and impact different state characteristics. E.g. the functions SC 2 = f 1 (SC 1 ) and SC 3 = f 2 (SC 1 ). Fig. 3. Optional visualization possibilities of multiple interdependencies If three or more state characteristics share interdependencies, they can be described by a common function. Following this approach, the visualization of all connections is redundant and can be replaced by a chain of interdependencies, as shown in Fig. 3. This can significantly improve the simplicity of a model. Mapping redundant information can be evaded by a structured approach to gather all relevant information. This crucial information is described in the next chapter. Crucial Information to Map State Characteristic Relations To benefit from the information of the relations between state characteristics and to share knowledge within a manufacturing process, a map of these relations needs to be modeled. To complete this task, certain information needs to be collected. The necessary data is:  Aim and scope: Needs to be defined in order to create a model that considers relevant elements and neglects irrelevant ones.  Modeling degree: Defines the modeled levels of relations. This includes the number of iterations in describing the relations of elements before and after being connected to state characteristics or process parameters.  States and process steps: Along with their sequence, the states and process steps provide the basic structure of the process. Their sequential arrangement follows the rule of a bipartite graph, for a state is always followed by a process step and vice versa.  Process parameters and state characteristics: The state characteristics and process parameters represent the nodes of the network. Each has to be aligned with the states and process steps.  Transfer functions: The transfer functions describe the relations between state characteristic and process parameters. Collecting the transfer functions, that flow into each node to cover all relations, is sufficient. A possible modeling form for a structured collection of all relevant data is shown in Table 1. 
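The four dependency types listed above translate directly into function composition. The sketch below is a minimal illustration of that idea; the numeric transfer functions and the process parameters x and y are invented for the example and do not describe a real process.

```python
# Discrete dependencies: SC3 is influenced independently by SC1 and SC2,
# each in combination with an additional process parameter (x, y).
def f1(sc1, x): return 0.8 * sc1 + x
def f2(sc2, y): return 1.2 * sc2 - y

# Linked dependency: SC1 and SC2 share an interdependency f12, and SC3
# depends on the combined value: SC3 = f3(f12(SC1, SC2)).
def f12(sc1, sc2): return (sc1 + sc2) / 2.0
def f3(combined):  return combined ** 2

# Lined dependencies: SC1 -> SC2 -> SC3 can be merged into SC3 = f5(f4(SC1)).
def f4(sc1): return sc1 + 1.0
def f5(sc2): return 3.0 * sc2

# Split dependency: one characteristic feeds several others, SC2 = f4(SC1)
# and SC3 = f5(SC1) share the common origin SC1.
def split(sc1): return f4(sc1), f5(sc1)

sc1, sc2, x, y = 10.0, 20.0, 0.5, 1.5
print("discrete:", f1(sc1, x), f2(sc2, y))
print("linked  :", f3(f12(sc1, sc2)))
print("lined   :", f5(f4(sc1)))
print("split   :", split(sc1))
```

Keeping every relation in this "incoming function" form is also what allows chains of interdependencies, as in Fig. 3, to replace a fully connected visualization.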
Along with information about the manufacturing process and this data set, the relations of all state characteristics and process parameters are described. A model created on the basis of this data can be applied in multiple applications, which are described within the next section. Opportunities for Application of the Concept The structure of the linked state characteristic concepts provides two different approaches for application. Whenever changes within a manufacturing process occur or have to be implemented, the model of state characteristic relations, when transferred to a PPC, can be applied. If the value of a state characteristic exceeds the acceptable range, the system can be used to create a model with all relevant influences on the state characteristic to identify the problem (Fig. 4, left). Alternatively, if a process parameter has to be changed, the system can be used for the opposite purpose: With a map that provides information about the impact of the change (Fig. 4, right). Fig. 4. Different opportunities of application A map that contains all information about any relation between state characteristics and process parameters within a manufacturing process tends to become very complex and difficult to handle. To solve this, a model with different hierarchical layers and levels of detail could be applied. One possible approach is to split the model into a meta-model and two sub-models:  A meta-model that provides a general overview on all states and process steps with their aligned process parameters and state characteristics, along with the general process structure.  A state-model that focuses on the relations of a single state or process step, and shows the relations of all process parameters or state characteristics of the focal state or process step.  A state characteristic-model that visualizes all relations of a single state characteristic or process parameter, and may include the functions that describe its relations. With the information described within the previous section, and a defined modeling notation, the automatic generation of such models enters the realm of possibility and may be the outcome of future research. Limitation and Outlook This paper presented an approach to analyze and map relations between state characteristics based on the product state based view on collaborative manufacturing processes. After a brief introduction on the importance of transparency and in depth understanding of the own processes, the possible relations, dependencies and interdependencies were presented. The different types of dependencies between state characteristics were then elaborated on, and after a brief presentation on information requirements to apply the approach, the possible opportunities of the concept were discussed. Overall, the topic of describing relations between state characteristics over a collaborative manufacturing process chain is very complex and, if applied in industry, requires an in-depth understanding and a high transparency of product, process and effects to realize its potential. Theoretically, if an application of the approach is possible, it will help to increase the final product quality and process efficiency by reducing waste and rework through early identification of problems and allocation of information to the right addressee. The topic itself still needs further research concerning possible ways to identify and describe occurring relations in a practical and efficient way. 
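As a sketch of the two application modes described above - tracing all influences on an out-of-range state characteristic, and tracing the impact of a changed process parameter - the relations can be stored as a small directed graph and traversed in either direction. The node names below are placeholders, not data from a real process.

```python
from collections import defaultdict

# Directed relation graph: an edge (a -> b) means "a influences b".
# Dependencies never point against the process flow (see the axioms above).
edges = [
    ("P1.cutting_speed",        "SC_B.surface_roughness"),
    ("SC_A.hardness",           "SC_B.surface_roughness"),
    ("SC_B.surface_roughness",  "SC_C.coating_adhesion"),
    ("P2.oven_temperature",     "SC_C.coating_adhesion"),
]

influences = defaultdict(set)     # node -> nodes it influences (downstream)
influenced_by = defaultdict(set)  # node -> nodes influencing it (upstream)
for a, b in edges:
    influences[a].add(b)
    influenced_by[b].add(a)

def trace(start, graph):
    """Collect every node reachable from 'start' in the given direction."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Mode 1: a state characteristic is out of range -> find all candidate causes.
print(trace("SC_C.coating_adhesion", influenced_by))
# Mode 2: a process parameter must be changed -> find everything it impacts.
print(trace("P1.cutting_speed", influences))
```

The meta-, state- and state-characteristic models discussed above could then be generated as views of different depth over the same graph.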
Early practical insights into the manufacturing processes of an SME (1st tier automotive supplier) have indicated that the complexity of illustrating relations of state characteristics along a manufacturing process increases very fast and has to be managed very carefully in order not to thwart the goal of increasing transparency for the stakeholders at hand. Due to these first impressions, in parallel to further investigating the possibilities to describe relations on a cause-effect basis, other promising methods and tools, such as combined cluster analysis and machine learning, will be elaborated on, based on their contribution towards the goals of the product state based view on manufacturing processes.

Table 1. Exemplary structured modeling form to collect all relevant data
Form of: (process-name) | Aim: (aim of the model) | Scope: (scope and regarded relation levels)

State or Process Step | State Characteristics or Process Parameters | Incoming functions of relations
A: (state A) | (state characteristic A.1), (state characteristic A.2) | (function A.1.1), (function A.2.1)
1: (process step 1) | (process parameter 1.1) | (function 1.1.1)
B: (state B) | (state characteristic B.1) | (function B.1.1), (function B.1.2)
2: (process step 2) | (process parameter 2.1) | (function 2.1.1)
C: (state C) | (state characteristic C.1), (state characteristic C.2) | (function C.1.1), (function C.2.1)
3: (process step 3) | (process parameter 3.1) | (function 3.1)
D: (state D) | (state characteristic D.1), (state characteristic D.2) | (function D.1.1), (function D.2.1)
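The modeling form in Table 1 can be mirrored one-to-one as a small data structure, which would also be a convenient starting point for generating the meta-, state- and state-characteristic models automatically. The sketch below only reproduces the table template; the field contents are placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """A state characteristic or process parameter with its incoming relations."""
    name: str
    incoming_functions: List[str] = field(default_factory=list)

@dataclass
class Stage:
    """One state or process step of the bipartite state/step sequence."""
    label: str  # e.g. "A: (state A)" or "1: (process step 1)"
    elements: List[Element] = field(default_factory=list)

@dataclass
class ModelingForm:
    process_name: str
    aim: str
    scope: str
    stages: List[Stage] = field(default_factory=list)

form = ModelingForm(
    process_name="(process-name)",
    aim="(aim of the model)",
    scope="(scope and regarded relation levels)",
    stages=[
        Stage("A: (state A)", [Element("(state characteristic A.1)", ["(function A.1.1)"]),
                               Element("(state characteristic A.2)", ["(function A.2.1)"])]),
        Stage("1: (process step 1)", [Element("(process parameter 1.1)", ["(function 1.1.1)"])]),
    ],
)
print(form.process_name, "with", len(form.stages), "stages recorded")
```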
15,558
[ "1002178", "991770", "989864" ]
[ "217679", "217679", "217679" ]
01472275
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472275/file/978-3-642-40352-1_56_Chapter.pdf
Jiří Holík email: [email protected]@vsb.cz Lenka Landryová Universal simulation model in Witness software for verification and following optimization of the handling equipment Keywords: enterprise, model, monitoring data, optimization The aim of this work is to verify the working load of forklifts (generally rolling-stock) based on actual transports and following optimization using a universal simulation model in the software Witness. This aim can be characterized in three following phases. In the initial phase, the actual transport data are obtained for a monitored interval and for the distance matrix between the network points of the actual transports. In the second phase, obtained data are implemented into the simulation model and then the simulation is performed to find utilization of all forklifts. In the last phase, the workload is validated and the optimization is performed using Witness Optimizer. Introduction In companies in which high demands on supply logistics are placed, compliance with the agreed delivery times or deadlines, and work organization itself is emphasized in supplying more end work stations [START_REF] Daněk | Fundamentals of Transport (in Czech Základy dopravy)[END_REF]. Solving of deterministic transport systems can be found at specialized literature, [START_REF] Daněk | Fundamentals of Transport (in Czech Základy dopravy)[END_REF] or [START_REF] Kluvánek | Operational Analysis (in Slovak Operační analýza)[END_REF] etc. Authors of these publications use mostly mathematical methods for searching results. In this work we apply the already known deterministic access and its characteristic into the simulation tool Witness software [START_REF]WITNESS Getting Started Materials[END_REF], [START_REF]WITNESS OPTIMIZER -Optimizer Module[END_REF]. Practical applications of using Witness software are at [START_REF]WITNESS Getting Started Materials[END_REF] or [START_REF] Holík | Resultes of Selected Systems of Queoing Theory on Simulating program Witness and Theoretic Calculation Comparison (in Czech Komparace výsledků simulace vybraných systémů hromadné obsluhy v simulačním programu WITNESS) VŠB -Technická univerzita Ostrava[END_REF], [START_REF] Dorda | Simulation Using for Modelling of Marshalling Yard Hump Operating[END_REF]. From literature it is also known that the specific requirement occurrence for the deliveries, which can be generally characterized as a demand, usually has a time periodic or random character [START_REF] Daněk | Fundamentals of Transport (in Czech Základy dopravy)[END_REF], [START_REF] Kluvánek | Operational Analysis (in Slovak Operační analýza)[END_REF], [START_REF] Holík | Resultes of Selected Systems of Queoing Theory on Simulating program Witness and Theoretic Calculation Comparison (in Czech Komparace výsledků simulace vybraných systémů hromadné obsluhy v simulačním programu WITNESS) VŠB -Technická univerzita Ostrava[END_REF]. Randomness is mainly caused by frequent changes of the produced assortment, also accompanied by limiting the capacity or the number of handling equipment, which in practice can be traced. This paper analyzes full utilization of the current handling equipment under given conditions for specific requirements for deliveries for a given reference period, and subsequently it validates acquired information about the utilization of handling equipment to allow optimization of their number from the point of view of two optimization criteria. 
Optimization criteria are chosen on the basis of a universal simulation model created in the Witness environment, suitable for tasks of this nature [START_REF]WITNESS Getting Started Materials[END_REF], [START_REF]WITNESS OPTIMIZER -Optimizer Module[END_REF], and their implementation in practice is an alternative engineering solution to managerial methods Just-in-time or Kanban implemented in manufacturing enterprises. Analysis of the Current State of Knowledge Solving the problems mentioned above is closely related regarding the theory to the tasks of planning cycles of vehicles. Planning cycles of vehicles falls within the scientific theory of transport which was once very successfully developed at the University of Transport and Communications in Žilina, Slovakia. To excellent study materials there can be included [START_REF] Kluvánek | Operational Analysis (in Slovak Operační analýza)[END_REF], in which the problem is well formulated and is followed by a theoretical analysis. There are also approaches to the solution and graphical methods of solution, which can be found in [START_REF] Černý | Fundamentals of Mathematical Theory of Transport (in Slovak Základy matematickej teórie dopravy)[END_REF], and which are complex for the large number of vehicles and trips. The subject of an analysis of the current status is the finding of several characteristics that are important input data for subsequent verification of the load of handling equipment. The list of necessary input information is given below in indents. Each area is detailed in the subchapter.  Information on carried out transports -the deliveries during the reporting period in the past,  The matrix of distance between transmission points of the network,  The parameters of handling equipment. Carried out transports The information about carried out transports is demanded mainly in terms of the format of records that must be followed for the proper functionality of the simulation model. A preview of the data structure for simulation of transport is given below as Table 1. The length of simulation period is not critical, but for the relevance of the results it is useful to have data only for periods with similar volumes of deliveries (of production). From -Matrix point - To -Matrix point - Number of handling Units Pallets Material ID - Group of handling device - Speed (Loaded) m.s -1 Speed (Unloaded) m.s -1 Loading Time S Unloading Time S A record containing all information required by the above Table is ideal input data for one or more carried out transports related to FROM WHERE -TO WHERE simulation. Date and time of the transport  Date and time of the transport (if this information is not traceable, then any time of the earliest traceable track can be used). From -matrix point  Point of matrix representing the starting point of transport network, from where the transport was carried out. To -matrix point  Point of matrix representing the destination of transportation networks, from where the transport was carried out. Number of handling units  The number of handling units transported within one record made about carried out transport,  The handling unit can be a pallet, crate, it is necessary to follow for the same handling equipment the same types of handling units -possibly convert to volume units, unless it if needed. Material ID  Identification of transported material, it's optional data item if it is not necessary to statistically follow a transported amount of material types. 
Group of handling device  Identification of specific handling equipment (a forklift), or a group of handling equipment, performing the same activity. Within the group of handling equipment it is necessary to choose the type of handling equipment which will be promptly available for the carried out transport. Speed loaded -full, empty  Indicates the speed in carrying out the handling of the session. There are handlings, which are limited by speed limits for safety reasons, but the speed of handling equipment is usually limited across the transport network (the company). Loading time, Unloading time  Specifies the time required for loading (unloading) of a handling unit on (from) handling equipment. Time may vary according to space constraints of locations on the network, depending according to, for example, if it is the floor or a position in a certain height. Distance matrix of points in transport network Since the simulation model works with the real speed of handling equipment, it is also necessary to have real scale of distances between points of transport networks, i.e. points, among which transport is carried out. For the purpose of the simulation model it is sufficient if the carried out transports are completed with information about the distances of all the uniquely determined transport sessions, therefore in any clearly designated routes from -to. The simulation model reads the requirements from MS Excel workbook, which includes a macro to generate a complete matrix of distances between all points of the network. The generated matrix is read as input to the simulation model. A preview of the distance matrix (network points are indexed) is shown below as Table 2. Handling equipment specifications Among the main parameters of the handling equipment is speed in a loaded and empty status, and duration of loading and unloading. These parameters must be completed either as a global value, which means that it applies to all transports, or is valid for each transport carried out according to Table 1 in particular. The speed of movement is filled in as the basic unit [ms -1 ] and the duration of loading or unloading in seconds [s]. Load of Handling Equipment -Current State After implementing into the MS Excel control workbook simulation experiment can be carried out, reflecting the load of handling equipment in the monitored period, which correspond to the input data. The output of the simulation model in this phase of work is load charts of all handling equipment, subject to the simulation. The simulation model takes into account parameters such as shifts of handling equipment (their service), or technological operations such as refueling, replacement of battery, etc. Graphical workload is always relative to the useful time shift for the handling equipment. A preview of obtained graphs is shown below as Figure 1. At the moment, when we are working with more handling equipment within each group, it makes sense to deal with the question of the number of those actually re-quired handling equipment. This question can be approached in a dynamic simulation using the Witness Optimizer tool. This tool allows us to change the input parameters of simulation and subsequently compare observed characteristics, which directly reflect the impacts of these changes. 
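The distance matrix described above has to be complete and symmetric before the simulation model can read it. A minimal sketch of how such a matrix can be assembled from the recorded FROM-TO distances is given below; the point indices and distances are illustrative only, and in the reported work this step is performed by a macro in the MS Excel control workbook.

```python
# Recorded distances (in a consistent unit) between indexed network points,
# taken from the completed transport records.
recorded = {
    (1, 2): 88.2,
    (2, 3): 1.7,
    (2, 4): 1.0,
    (3, 4): 1.3,
}

points = sorted({p for pair in recorded for p in pair})

# Build a full symmetric matrix; unknown pairs stay None and signal that a
# distance is still missing in the input data.
matrix = {a: {b: (0.0 if a == b else None) for b in points} for a in points}
for (a, b), dist in recorded.items():
    matrix[a][b] = dist
    matrix[b][a] = dist

missing = [(a, b) for a in points for b in points if a < b and matrix[a][b] is None]
print("missing FROM-TO pairs:", missing)
```

Any pairs reported as missing would have to be measured or estimated before the simulation run, since the model derives travel times from distance and the loaded/unloaded speeds.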
The optimization module used in this work allows us to change the number of handling equipment within the group of handling devices (device within a group performs the same group of operations) [START_REF]WITNESS OPTIMIZER -Optimizer Module[END_REF]. Optimization can be done with several different algorithms that seek the best solutions according to the defined optimization criteria. These algorithms use heuristic approaches with the possibility to restrict the set of all possible combinations, thus with a finite number of iterations performed. The found solution is therefore not possible to declare with certainty to be the optimal (it would be possible in the simulation case for ALL Combination) [START_REF]WITNESS OPTIMIZER -Optimizer Module[END_REF], but due to the real time given for simulation it is considered the best possible. The optimization module used in this work is programmed so that you can choose from two contradicting optimization criteria. The first is the cumulative load handling equipment in the group, which can be further understood as a weighted average of all loads of handling equipment belonging to the same group. The second one is the average time required to meet specified requirements for handling. This is the time interval that elapses between the entry requirement for transport until the object is unloaded at the destination point of transport network. Contradicting criteria are chosen to reflect the optimization of the negative impacts. It can be characterized in such way that if we attempt to minimize the average time to meet the need for transport, the negative effect is increasing the number of handling equipment and therefore reducing the value of their cumulative workload. On the other hand, in an effort to maximize the value of the cumulative load, the average time to meet the demand for transport negatively increases. Results of the simulation After each performed iteration, the initial configuration of the number of handling equipment in each group are recorded in the resulting table as well as values of both optimization criteria. If necessary, it is possible to add other statistics that are needed (amounts transported, the number of trips undertaken, the size of stocks in various parts of the logistics chain, etc.). A preview of the table obtained after optimization is shown below as Table 3. Conclusion This work dealt with an analysis of handling equipment load at specified conditions, using tools from the Witness simulation environment to allow optimization of the number of them from the point of view of two optimization criteria. The chosen optimization criteria are based on a heuristic search for optimal solutions in the final set of defined options. The methodology used in this work is applicable to tasks with a similar focus to find the optimum parameters for the operation of such systems in practice. Figure 1 . 1 Figure 1. Graphic preview of the handling equipment load for the simulation of the current state Table 1 . 1 Records of implemented transport Item Units Date a time of Transport DD.MM.YYYY HH.MM Table 2 . Fragment of distance matrix for the simulation model 2 FROM / TO 1 2 3 4 5 6 7 8 1 0 88.2 87.3 87.7 87.7 87.4 87.5 87.7 2 88.2 0 1.7 1 0.9 1.6 1.6 1.2 3 87.3 1.7 0 1.3 1.1 0.5 0.6 0.8 4 87.7 1 1.3 0 0.6 1.2 1.3 0.9 5 87.7 0.9 1.1 0.6 0 0.7 0.7 0.3 6 87.4 1.6 0.5 1.2 0.7 0 0.2 0.4 7 87.5 1.6 0.6 1.3 0.7 0.2 0 0.4 8 87.7 1.2 0.8 0.9 0.3 0.4 0.4 0 Table 3 . 
Fragment of the resulting comparison of all simulated variants in the optimization
Watching parameter | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6
AVG waiting for execution of demand | 144 | 148 | 148 | 152 | 154 | 155

Acknowledgment. This research is supported by the CP-IP 214657-2 FutureSME (Future Industrial Model for SMEs), an EU project of the 7FP in the NMP area.

A compromise between the two contradicting criteria, see Figure 2, can be called the point of balance. The balance point in this case is understood as a generic term. A preview of the intersection of the two criteria is also shown in Figure 2.
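The trade-off behind the balance point can be illustrated with a small numerical sketch. The utilisation and waiting-time figures below are invented for the example (they are not the values from Table 3), and the compromise rule is one simple assumed heuristic; the point is only to show how the two contradicting criteria move in opposite directions as forklifts are added.

```python
# Candidate numbers of forklifts in one group and the two (assumed) criteria
# obtained from simulation runs: cumulative load and average time to fulfil
# a transport demand.
runs = [
    # (forklifts, cumulative load [%], avg. fulfilment time [s])
    (2, 92.0, 240.0),
    (3, 78.0, 170.0),
    (4, 61.0, 150.0),
    (5, 49.0, 145.0),
]

loads = [r[1] for r in runs]
times = [r[2] for r in runs]

def norm(value, series):
    """Scale a value to 0..1 within its series."""
    lo, hi = min(series), max(series)
    return (value - lo) / (hi - lo) if hi > lo else 0.0

# Assumed compromise rule: pick the run where normalised utilisation and
# normalised waiting-time "goodness" are closest to each other.
best = min(runs, key=lambda r: abs(norm(r[1], loads) - (1.0 - norm(r[2], times))))
print("balance point at", best[0], "forklifts:", best)
```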
14,483
[ "1002179", "991810" ]
[ "153920", "153920" ]
01472278
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472278/file/978-3-642-40352-1_58_Chapter.pdf
Torgeir Welo Intelligent Manufacturing Systems: Controlling Elastic Springback in Bending Keywords: Forming, machine system, adaptive control, dimensional accuracy A rotary compression bending system with automated closed-loop feedback control has been developed. The overall goal is to improve the dimensional accuracy of formed shapes by transferring in-process data into a steering model for instant springback compensation. An analytical method based on the deformation theory of plasticity was employed to develop a physically-based steering model. Unlike alternative control strategies, the present control strategy is attractive for volume production since the approach does not impact cycle time. More than 150 tests of AA6060 extrusions were conducted to determine the capability of the technology. Prior to forming, the material was exposed to different heat treatments to provoke different stress-strain characteristics. The results show that the springback-angle standard deviations were improved from ±0.41° to ±0.13° by activating the closed-loop feedback system. Since the dimensional process capability is improved by a factor of three, it is concluded that the technology has high industrial potential. Background European manufacturing companies are currently facing increased competition from low cost countries. One strategy to meet this challenge is developing more automated production technology, providing reduced labor cost while improving product quality. Hence, the future competitiveness of European manufacturing companies is strongly related to their capability in developing and integrating new technology, followed by commercialization of new products that provide superior value to customers. Despite its long history [START_REF] Hardt | Sheet Metal Die Forming Using Closed-Loop Shape Control[END_REF][START_REF] Hardt | Closed-loop shape control of a tool-bending process[END_REF][START_REF] Jenne | Closed loop control of roll bending/twisting: A shape control system for beams[END_REF][START_REF] Kwok | The development of a machine vision system for adaptive bending of sheet metals[END_REF][START_REF] Lou | Three-Dimensional tune geometry control for rotary draw tube bending, Part 2: Statistical tube tolerance analysis and adaptive bend correction[END_REF][START_REF] Kuzman | Closed-loop control of the 3D bending process[END_REF][START_REF] Ferreira | Close loop control of a hydraulic press for springback analysis[END_REF], adaptive processing is still a technology that may create competitive advantages in the market place. In bending operations, for example, adaptive control strategies can be used for elastic springback compensation and dimensional control of the final product. One control strategy (A), [START_REF] Chu | Modeling and Closed-Loop Control of Stretch Bending of Aluminum Rect. Tubes[END_REF], is to unload the part at an intermediate forming stage and use recorded data to estimate stop position using a predetermined algorithm. A second strategy (B) is to repeat procedure A multiple times until the part geometry meets the desired specifications. Both these strategies are suitable for low-volume production only since the approach increases cycle time. A third strategy (C) is to use a closed-loop feedback scheme and measure parameters such as bending moment, stretch, bend angle and section dimensions, using a steering model for instant prediction of springback. 
Its main advantage is producing high-quality parts in one single step without intermediate unloading, making it suitable for volume production. Successful application, however, is strongly dependent on the capabilities of the steering model and the measurement technology used to record instant input data. Automotive product examples where adaptive forming has high value creation potential are shown in Fig. 1; in these products, bending accuracy is key to downstream processing, fit-up, function and safety. The reader is advised to [START_REF] Kleiner | Manufacturing of lightweight components by metal forming[END_REF] for more applications and to [START_REF] Welo | Sheet Metal Forming[END_REF] for research outlook. Technology Brief The overall goal is to improve the dimensional accuracy of formed shapes using a physically-based steering model for elastic springback compensation. A new rotary draw bending machine system with automated closed-loop feedback control is being developed. In-process measurement data are transferred into an algorithm for instant prediction of springback and bend angle prior to the unloading sequence. The bending system, Fig. 2, consists of an electric power unit that is connected to a gearbox. A torque transducer is placed between the gearbox and the entry shaft of the bending arm. The rotation of the bending arm is measured using a rotational transducer connected to the gear. A drawback (sleigh) is mounted underneath the bending arm to eliminate friction as the profile slides against the tool during bending. The drawback is hinged locally at the bending arm to ensure free rotation of the front end of the profile. A pneumatic clamp is used at the rear end of the profile, constraining rotation and translation in the length direction. The lower tool has constant radius and is fixed. The tool's contact surface is made with a protruding ridge to form a local imprint along the inner flange, hence preventing uncontrolled local buckling. During forming, torque and rotation are continuously recorded and fed into a PC-operated control system, which automatically calculates the stop position using a steering model. The process is entirely managed by the control system, without any human interference other than specifying the desired bend angle. Due to the control strategy (C) adopted, the cycle time of the bending machine is the same as for conventional draw bending. Steering Model Establishing a continuum-based steering model for springback compensation is a tedious matter, see [START_REF] Welo | A New Adaptive Bending Method Using Closed-Loop Feedback Control[END_REF], whose details will be omitted herein. In general, it is essential that the model is capable of capturing the main sources of variability in the bending process, including material parameters such as yield stress and strain hardening as well as dimensional characteristics of the profile. The analytical method used was based on the theory of plasticity, using beam theory in combination with a nonlinear, closed-form moment-curvature relationship as the basis for the predictions. The kinematics and structural scheme of the process are interpreted in Fig. 3. Since elastic springback is essentially a result of the instant moment-to-stiffness ratios, i.e. 
the elastic curvature, in the entire region A-D prior to unloading, it is important to establish a model that reflects the actual bending moment distribution. Here it is assumed that the bending moment varies linearly from zero at point A to M_B at point B, attains a stationary, maximum value M_C over a transition angle Δθ_B from the profile's first contact point with the die, and vanishes over a distance 2·D (D = section depth) inside the clamped region C-D. The bending moment is a function of instant material properties and the shape of the cross section, while the stiffness is only a function of the latter, since Young's modulus (E) is assumed unaffected by (pre-)deformation. Fig. 4 shows an overview of the sequential steps included in the calculation strategy to obtain the steering model. In its general form, the solution includes instantaneous input from multiple parameters, including geometrical ones, which would surpass the measurement capabilities of the current system. If the measurements are limited to die rotation and bending moment only, however, the above equation may be converted into a simplified algorithm of the form

Θ̃ = θ_0 + ĉ_0 + ĉ_1 · M̃(φ̃) · [1 + ĉ_2 · M̃(φ̃)]

Here φ̃ is the die rotation at the end of forming, M̃(φ̃) is the measured moment, θ_0 is the desired bend angle, and the parameters ĉ_0, ĉ_1 and ĉ_2 are calibration constants. ĉ_0 reflects an offset factor due to elastic springback within region C-D, while ĉ_1 reflects the elastic springback contribution from regions A-C, which is the reciprocal of the cross-section's bending stiffness (EI). The constant ĉ_2 may be interpreted as a correction factor, making the springback response non-linear due to inelastic material behavior, shifting contact conditions and cross-sectional distortions. Calibration Procedure Due to the low inelastic bending resistance of the profile and the use of its variation as input to the steering model, the accuracy of in-process moment readings is key to the success of this technology. Since the torque M̃(φ) and rotation φ are measured directly on the shaft between the gear and the bending arm, the effects of gravity forces, M_g(φ), and bearing friction, M_i(φ, Δ), have to be eliminated. Hence M̃(φ) = M_p(φ) + M_i(φ, Δ) + M_g(φ), where M_p(φ) is the bending resistance of the profile and Δ is a set of variables that may affect friction (temperature, lubrication, speed, position, bending force, etc.). A calibration procedure was conducted to ensure that the transducer would measure the net contribution from the profile in the tests. After zeroing out the signals from gravitation and friction, additional tests were run without a profile to determine the noise. The results showed that the torque standard deviation was 1.2-2.0 Nm within one cycle. The mean value drifted slightly from the first to the last test (1.4-5.0 Nm), reflecting mainly variations from friction.
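To make the role of the calibration constants concrete, the short sketch below fits ĉ_0, ĉ_1 and ĉ_2 to a handful of logged trial bends by ordinary least squares and then evaluates the stop angle for a new moment reading. It is only an illustration of the idea: the model form follows the simplified algorithm above, but the data points, the target angle and all variable names are assumptions made for this sketch, not the authors' implementation.

```python
# Sketch: fit the springback calibration constants from logged trial bends.
# Model assumed: springback = c0 + c1*M*(1 + c2*M), so the commanded die
# rotation is theta_target + springback. All numbers are made up.
import numpy as np

theta_target = 80.0                                        # desired bend angle [deg]
M     = np.array([690.0, 702.0, 715.0, 730.0, 745.0])      # measured moment [Nm]
theta = np.array([81.90, 82.00, 82.10, 82.25, 82.40])      # die rotation that hit the target

springback = theta - theta_target
A = np.column_stack([np.ones_like(M), M, M**2])            # basis for c0 + a1*M + a2*M^2
(c0, a1, a2), *_ = np.linalg.lstsq(A, springback, rcond=None)
c1, c2 = a1, (a2 / a1 if a1 else 0.0)                      # map back to c1*M*(1 + c2*M)

def stop_angle(M_now):
    """Die rotation to command for the current in-process moment reading."""
    return theta_target + c0 + c1 * M_now * (1.0 + c2 * M_now)

print(stop_angle(720.0))                                   # stop position for M = 720 Nm
```

In practice such a fit would be refreshed whenever the tooling or alloy batch changes; the point of the closed-loop scheme is that the moment reading then compensates the remaining batch-to-batch variation automatically.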
Capabilities of Manual and Adaptive Bending More than 150 bending tests of hollow AA6060 extrusions were conducted to determine the dimensional capability of the technology. In order to provoke different material behaviors, the profiles were aged to different tempers: 'as is' (T1), 60 minutes and 120 minutes at 175 ºC, providing an initial yield stress range of ±17%. After bending, the profile was clamped loosely to a fixture and the relative angle between the fixed and the free ends was measured. The repeatability was checked by performing several consecutive measurements of the same profile. The dimensional capabilities can be evaluated by considering the process capability index, C_p = B_t / (6·SD(Θ)), in which B_t is the tolerance band and SD(Θ) is the realized angle standard deviation. Inspecting the results in Fig. 5 shows that the manual process (steering model deactivated) creates three bend angle clusters, one for each heat treatment, with T1-profiles providing the least springback. With the steering model activated, the results merge together, indicating that the main parameters are taken into account in the steering model. Table 1 shows that the adaptive process is able to reduce the maximum variation from 1.27° to 0.45°. Assuming B_t = 1.0°, the adaptive process shows a process capability that is three times better than the manual one. If the bend angle is a standard dimensional feature requiring, say, C_p > 1.33, the manual process would require a tolerance band of 3.26°, whereas the adaptive process would need a tolerance band of only 1.06° (±0.53°) to provide good parts. This result demonstrates that the technology has high industrial potential in terms of improved quality and reduced cost. Conclusions and Lessons Learned Based on this work, the following conclusions can be drawn: • A new, adaptive bending technology with closed-loop feedback has been developed and validated using full-scale experiments; • The adaptive bending method has proven to dramatically improve the dimensional process capability; • The technology has great industrial potential in terms of improved dimensional quality and reduced manufacturing costs. Overall, the main challenge of this work was measuring the bending moment with sufficient accuracy. It turned out that the key was to reduce friction (variations) by replacing sliding bearings with roller bearings for the pivoting bend tool. Doing the machine design all over again, it would be beneficial to use lighter tool and die components to reduce sensitivity to variations, since the bending forces are of the same magnitude as the gravity forces for the current concept. It was also a challenge to establish an in-process measurement strategy relating the datum points of the tool to those of the profile, since fit-up is dependent on the dimensional accuracy of the incoming part. On the way to commercialization, the robustness and durability of the technology in a plant environment must be tested. Finally, since the steering model may utilize additional instant geometry data, future work includes extending the measurement capabilities to improve the accuracy of the bending methodology even further. Fig. 1. Application examples where springback control in bending is key to product quality. Fig. 2. System for automated closed-loop feedback control (left). Mechanical details (right). Fig. 3. Kinematics and moment distribution used as basis for development of the steering model. Fig. 4. Calculation strategy for development of the steering model for springback compensation. Fig. 5. Distributions of bend angle (adjusted to the same mean value) for the two methods. Table 1. Result summary.
                        Manual    Adaptive    Improvement
Average bend angle:     80.75°    79.82°      NA
Maximum variation:      1.27°     0.45°       282%
Standard deviation:     0.41°     0.13°       315%
C_p (B_t = 1.0°):       0.41      1.25        305%
B_t (C_p = 1.33):       3.26°     1.06°       307%
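As a plain illustration of the capability index used above, the few lines below recompute the Table 1 figures from the reported standard deviations using C_p = B_t / (6·SD(Θ)); small differences against the table stem from rounding of the reported values. This is an illustration of the formula only, not part of the original tooling.

```python
# Quick check of the capability figures in Table 1, using Cp = Bt / (6 * SD).
def cp(tolerance_band_deg, sd_deg):
    return tolerance_band_deg / (6.0 * sd_deg)

def required_band(cp_target, sd_deg):
    return cp_target * 6.0 * sd_deg

for name, sd in [("manual", 0.41), ("adaptive", 0.13)]:
    print(f"{name:8s} Cp(Bt=1.0 deg) = {cp(1.0, sd):.2f}   "
          f"Bt needed for Cp=1.33 = {required_band(1.33, sd):.2f} deg")
```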
14,100
[ "1002181" ]
[ "50794" ]
01472279
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472279/file/978-3-642-40352-1_59_Chapter.pdf
Dag E Gotteberg Haartveit Marco Semini email: [email protected] Erlend Alfnes email: [email protected] Splitting or sharing resources at the process level: An automotive industry case study Keywords: Focused factories, Operations strategy, Original equipment suppliers, automotive industry Original equipment suppliers (OES) supplying the automotive industry are in a business characterized by fierce competition and long contracts. Fulfilling these contracts often implies producing serial parts while an automotive is in serial production and an obligation to provide spare parts after the serial phase. The first period is characterized by large volumes and production based on stable forecasts. The second period implies production for the spare parts marked and this period is characterized by sporadic orders and small volumes. Focused factories theory suggests that production of products with different market and product characteristics should be carried out in separate focus factories. This paper discusses the feasibility of focused factory theory using an OES as an illustrative case, and presents relevant questions to address in order to achieve focus at the process level. Introduction Original equipment suppliers (OES) supplying the automotive industry are in a business characterized by fierce competition and long contracts. Contracts for supplying original equipment manufacturers (OEM) with serial parts are usually 7 years and when they run out OES's are under obligation to provide spare parts for periods up to 15 years. The first period is characterized by large volumes and production based on stable forecasts. The second period implies production for the spare parts marked and this period is characterized by sporadic orders and small volumes. Automotive parts are normally mass produced for efficient production in large volumes. The same production system is also utilized for low volume production of spare parts. The spare parts market has the potential of being lucrative for the OES's but serving the market requires flexible production with short lead times [START_REF]Performance in reserve; protecting and extending automotive spare parts profitability by managing complexity[END_REF]. This creates challenges for the OES's. Being able to satisfy the marked requirements of ordinary automotive parts and spare parts demands production systems for, respectively mass production and flexible production. This influences choice of machinery, operators and production control, and is thereby difficult to combine in one production system. On the other hand, separating the production systems entail duplicating resources and might not be cost efficient. The purpose of the paper is to present challenges related to focused factories theory and to propose criteria and questions to address in order to achieve focus on the process level based on a comprehensive case study The case company is a Norwegian subsidiary of a German corporation and is one of the world's largest manufacturers of car bumpers made of aluminum. It supplies bumpers to almost all mayor car manufacturers producing cars in Europe. Since Skinner [START_REF] Skinner | The focused factory[END_REF] introduced the concept of focused factories in 1974, creating this focus by assigning operations resources to satisfy competitive factors has been discussed by scholars. Hill [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF] recently described six alternative approaches suggested within theory to find this focus. 
One of them were the volume approach which Semini et al. [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF] applied to propose an organization of the serial and spare part production at the case company in to two different production systems and thereby suggesting two focused factories. This paper will build upon the work of Semini et al. but acknowledge Hill's finding that focus is not necessarily achieved by splitting. In some cases different resources and processes should be shared in order to reach strategic goals [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF]. Literature review The literature review chapter will shed light on the pros and cons of applying focused factory theory. The term focused factories is limited to imply factories within factories for the remainder of this paper. This limits the scope of the term to i.e. not include decisions related to facility location. Splitting the factory into two focused factories as was suggested by Semini et al. was thoroughly rooted in literature. Slack and Lewis [START_REF] Slack | Operations Strategy, 2nd. 2008[END_REF] argues that production systems for products with clearly differing characteristics will not be effective. Porter [START_REF] Porter | What is strategy? Published[END_REF] concurs by arguing that these kinds of production systems makes the company "stuck in the middle" meaning that the production system will have to cater to different, often contradictory goals with the same equipment, organization and processes. Being stuck in the middle leads to issues such as: • Challenges in regards to choosing right levels of automation, flexibility and integration • Challenges related to achieving flow oriented layout (product type or process type layout • Planning and control principles not adapted to production environment or demand patterns • Challenges related to developing knowledge and know-how for many different product types • Challenges for the sales and marketing department handling two different markets While the focused factory theory has been a success for many companies and industries, production of aluminum bumpers where each bumper is a serial part and then a spare part creates extraordinary issues. Production of the same product in its two phases requires the same product knowledge, production equipment and technological know-how etc. Production of a product with these characteristics in two focused factories seems not to be cost efficient and will according to Hyer and Wemmerlov [START_REF] Hyer | Reorganizing the factory: Competing through cellular manufacturing[END_REF] lead to these issues: • Reduced scale effects and unnecessary duplication of tools and machinery • Risk of sub-optimizing each factory • Long lead times and poor utilization of capacity in marked fluctuating situations • Loss of knowledge and know-how related to each product • Reduced opportunity for optimized planning due to factory and resource boundaries. The two previous sections have indicated that there is no clear cut guidance on how to split resources to ensure focus. Hill [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF] suggests that the overall focus for an organization should be chosen based on the company's products' order winners and qualifiers. The next chapters will introduce the case company and propose criteria that enable organizations to find its focus by splitting or sharing at the process level. 
Bumper production This chapter presents how bumpers are produced at the case company and goes on to explain the alternative operations strategy proposed by Semini et al. [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF] Bumper production at the case company today Bumper production essentially consists of three processes. In the casting house, aluminum billets are produced from ingot, scrap metal, and alloying metals. The second process uses these billets to produce profiles of adequate shape and length by means of extrusion and cutting. More than 100 different types of profiles are produced due to unequal shapes and forms of different car models' bumpers. Finally, the third process forms the bumpers. Forming of the bumper is carried out in the bumper plant. Extruded profiles are processed in one of several automated forming lines, carrying out sawing, cutting, tempering, stretch forming, stamping, cutting and washing. Thereafter, all products need to be hardened in furnaces. While some bumpers are finished after hardening, many of them -especially spare parts -need some further processing, such as CNC (computer numerical control) machining, welding, assembly, etc. Serial parts are either sent directly to OEMs or to assembly plants where they are assembled into integrated crash management systems. Spare parts are often sent to OEM-owned central spare parts warehouses. As far as production planning and control is concerned, the forming lines operate with relatively large batches, with batch sizes varying between 2000 and 10000 bumpers. When a serial part becomes a spare part, it is treated as before. It is often run at the same forming line as before and processed at the same CNC/welding machines. It is also run with the same batch sizes as before, but much more infrequently given their much lower volume. This is again due to relatively complex changeovers, which are particularly challenging for spare parts, since spare parts are produced so infrequently. Often, the tools needed to produce spare parts need considerable maintenance before they are ready for production again. Given that customers often order low quantities of spare parts, has led to considerable stocks of both WIP and finished spare parts, with its associated cost in the form of invested capital, space, maintenance, quality deterioration, administration and handling, and risk of obsolescence. Two separated, dedicated factories The corner stone of the new operations strategy proposed by Semini et al. was to separate serial parts production from spare parts production, thereby creating two focused factories [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF]. In focused factories, only products with certain characteristics are produced, which allows an increased level of focus. That is, by having separated processes, both physical and planning processes, the two factories within the factory can be run with two different focuses, each adapted to the specific needs of each product group. Products can be grouped according to volume, process, product/market, variety geography, or order-winners and qualifiers [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF]; the grouping proposed by Semini et. al. [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF] was done according to volume (serial parts = high volume; spare parts = low volume). 
The serial parts factory would produce approximately 15% of the product spectrum, which stand for approx. 80% of the volume. The spare parts factory would produce the remaining 85% of variants, standing for 20% of the volume. From overall focus to focusing at the process level The question of achieving focus is not merely an overall question answered by separating production based on product volume as was introduced by Skinner [2] and proposed by Semini et. al. [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF] in the previous chapter. The overall focus needs to be brought down on a process level where focus can imply both splitting and sharing individual resources, and various degrees of splitting. The following chapter will present and structure relevant questions to address in order to achieve focus on a process level, and introduce three dimensions of splitting. Focus at the process level While attempting to organize the production at the case company we realized that achieving focus is a stepwise but also iterative process. One need to decide on an overall focus based on the alternative approaches recapped by Hill [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF]. At the same time the processes involved has to be understood and their feasibility to be split or shared for different products examined. If the process is to be split, splitting can be done to different degrees in several dimensions. The overall focus is chosen based on analysis of the company's product and marked characteristics. Hill argues for achieving focus by focusing production to suit products with the same order winners and qualifiers. If this implies more than one focus the foci needs to be broken down on a process level. Table 1 shows the processes involved in producing aluminum bumpers at the case company. The table was developed based on the mapping guidelines presented in The extended enterprise model [START_REF] Bolseth | The Extent Enterprise Operations Model Toolset[END_REF]. Each process can be split or shared to achieve the overall strategic goal each foci aims for. In order to make these decisions some questions based on six key criteria needs to be addressed. These criteria and questions are gathered and adapted from theory [START_REF] Hill | How to organise operations: Focusing or splitting[END_REF][START_REF] Hyer | Reorganizing the factory: Competing through cellular manufacturing[END_REF][START_REF] Hallgren | Differentiating manufacturing focus[END_REF] and structured in Table 2. Table 2 Criterion and guidelines for focusing production systems with differing strategic tasks If the answer to the questions implies to split a process, this splitting needs to be decided for three dimensions. Should the two or more focused processes be carried out in different areas with different equipment, but at the same time be organized as one entity with employees servicing both processes? Table 3 illustrates the dimensions and span in degrees of splitting that could be chosen for each process. Examples from the case company The overall foci for the case company were found by analyzing the company's products order winners and qualifiers as Hill proposed. The result was a serial part focus and a spare part focus. This differed from Semini et al.'s proposition by not being based on volumes. Some serial products are made in small volumes, but have the same order winners and qualifiers as other serial parts and should be produced with a serial part focus. 
These two foci should then be evaluated for each process in order to find if the individual process should be shared or split to which degree. The two following examples illustrate briefly these kinds of decisions. They concern two of the 22 processes identified in Table 1. The first example is a relatively straight forward decision which regards the casting process. The cast house have large scale effects, requires large investments to duplicate and utilization of the equipment is important. The performance objectives for the casting process do not vary between the two foci and the complexity is not influenced. The process at the cast house should therefore be shared and fully integrated along the three dimensions. The second example is the order management process. Order management at the case company is tightly connected and integrated with customer relations. Each OEM (customer) has its dedicated Key Customer Manager (KCM) which handles contracts and orders from the specific customer. The situation today is that KCM's has the responsibility for both serial and spare parts. KCM's significantly affects operations by influencing order sizes, lead times and end of life production negotiations. The performance objectives in regards to these vary significantly for serial and spare parts. The fundamentals of achieving good spare part production performance are different from serial part production. This knowledge is limited at the case company today and the knowledge that exists is not communicated to KCM's. In regards to utilization of the KCM resources, having more that one KCM per customer is excessive. These KCM's are senior employees with unique relations and knowledge of the customers that is hard and costly to duplicate. Thus, the order management process should be shared, co-located, integrated, but bolstered with employees that can support the KCM's with reaching performance objectives for spare parts. In this regard job specialization should increase in the extended order management process. Conclusion This paper builds on the work by Semini et al. [START_REF] Semini | Effective Service Parts Production: A Case Study[END_REF], but acknowledge that the authors did not address the question of focused versus shared resources sufficiently. The characteristics of aluminum bumper production imply that some resources and processes should be shared. At the same time it is necessary to split other resources and processes in order to achieve focus. Thus, taking focusing decisions on an overall level is not sufficient. The main contribution of this paper is a presentation of relevant questions to address in order to achieve focus on a process level. The paper introduces three dimensions for which splitting decisions has to be made and briefly explains two focusing suggestions for the case company. A more thorough and complete mapping and evaluation of all processes and resources associated with the production of a bumper at the case company is currently being carried out in order to decide the organization of operations. Based on the notion that some resources and processes should be shared, opportunities for further research emerge: how should the shared resources be planned and controlled? What kind of principles should be utilized to ensure that the focused parts of the factory get the level of service it should? 
Table 1. Processes
Administrative processes: Order management, Forecasting, Production and inventory control, Procurement, Quality management, Tooling, Performance measurement, Sales and operations planning.
Physical processes: Inbound handling, Internal transport, Production (casting, extrusion, forming, machining, welding/CNC), Assembly, Storage, Packaging and labeling, Outbound handling, External transport.

Table 3. Degrees of splitting
Spatial/Physical dimension: from co-located with the same equipment to geographically separated with different equipment.
Organizational dimension: from integrated to disintegrated.
Job specialization dimension: from low to high.

Acknowledgements This research was made possible by AUTOPART (AUTOPART -World class, focused spare part production) and SFI NORMAN (SFI -Norwegian Manufacturing Future) supported by the Research Council of Norway.

Table 2 (excerpt). Criterion and guidelines - questions to address for each process
Competitive priority / strategic tasks: How do the product differences affect the particular process? Is the same competence required for the different products? Will the different foci demand specific product/process knowledge?
Flow: What impact does sharing or splitting the process have on the flow between the processes (information, material, etc.)?
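As a toy illustration of the volume approach referred to above (high-volume serial parts versus low-volume spare parts), the sketch below ranks product variants by yearly volume and assigns everything up to a cumulative 80% of volume to the serial-part focus and the rest to the spare-part focus. The threshold, variant names and volumes are invented for illustration; they are not the case company's data, and the grouping actually adopted in the study was refined by order winners and qualifiers rather than by volume alone.

```python
# Toy illustration of a volume-based (Pareto) split into two foci.
volumes = {
    "bumper_A": 120_000, "bumper_B": 95_000, "bumper_C": 60_000,
    "spare_X": 900, "spare_Y": 400, "spare_Z": 150,
}

total = sum(volumes.values())
threshold = 0.80 * total          # assumed cumulative-volume cut-off
cum = 0.0
focus = {}
for name, vol in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
    focus[name] = "serial-part focus" if cum < threshold else "spare-part focus"
    cum += vol

for name in volumes:
    print(f"{name:10s} {volumes[name]:>8d}  ->  {focus[name]}")
```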
18,770
[ "1002182", "1001970", "991453" ]
[ "556764", "556764", "50794" ]
01472285
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472285/file/978-3-642-40352-1_64_Chapter.pdf
Hajime Mizuyama email: [email protected] Design and Simulation-Based Testing of a Prediction Market System Using SIPS for Demand Forecasting Keywords: Agent simulation, collective intelligence, Delphi method, demand forecasting, prediction markets Self-adjustable interval prediction securities (SIPS) are newly proposed prediction securities that are suitable for market-based demand forecasting. The whole feasible region of the demand quantity to be estimated is divided into a fixed number of mutually exclusive and collectively exhaustive prediction intervals. Subsequently, a set of winner-take-all-type securities are issued that correspond to these intervals. Each portion of the securities wins a unitary payoff only if the actual sales volume falls in the corresponding interval. The contracts are called SIPS because the borders between the intervals are dynamically and adaptively self-adjusted to maintain the informativeness of the output forecast distribution. This paper first designs a prediction market system using SIPS equipped with a central market maker and then confirms how the system operates through agent-based simulation. Introduction In addition to the historical data that are formally owned by a company, fragmentary, dispersed and informal knowledge owned by the company's employees or customers has begun to be treated as a valuable information source for forecasting demand in today's rapidly changing market [START_REF] Mizuyama | A Prediction Market System Using SIPS and Generalized LMSR for Collective-Knowledge-Based Demand Forecasting[END_REF]. Recently, it has been demonstrated that a prediction market can aggregate this dispersed knowledge in a similar, but more efficient manner than the Delphi method [START_REF] Plott | Markets as Information Gathering Tools[END_REF][3] [START_REF] Wolfers | Prediction Markets[END_REF]. When a company uses a prediction market to conduct demand forecasting, it usually recruits employees or customers as participants and allows them to trade the fixed-interval prediction securities (FIPS) concerning the demand quantity to be estimated. FIPS are a set of winner-takes-alltype contracts, each of which is tied to a future event that the actual sales volume falls in a specified one among the predetermined set of prediction intervals. Because the intervals are mutually exclusive and collectively comprise the feasible region of the quantity to be estimated, the market prices of the contracts provide a subjective probability distribution of the demand quantity. Some researchers have argued that the scope between the most pessimistic and optimistic prior estimates should be divided into around eight equal-width intervals, and they should be used as the prediction intervals together with the regions lower than and higher than the scope [START_REF] Ho | New Product Blockbusters: The Magic and Science of Prediction Markets[END_REF]. The approach described above has been shown to be effective when tested in an existing company [START_REF] Chen | Information Aggregation Mechanisms: Concept, Design and Implementation for a Sales Forecasting Problem[END_REF]. Despite its utility, however, this approach has several limitations. Most notably, it does not clearly specify the possible scope of the demand quantity a priori. Ironically, as the potential for capturing information from the prediction market grows, so too does the difficulty associated with properly setting the scope. 
The output forecast distribution depends on a predefined scope, and if it is not appropriately set, the entire market session can be rendered meaningless. To resolve this limitation, Mizuyama and Maeda [1] introduced a new type of prediction securities called the self-adjustable interval prediction securities (SIPS). Like FIPS, SIPS are also a set of winner-take-all-type contracts assigned to prediction intervals. Unlike FIPS, however, in SIPS the borders between the intervals dynamically and adaptively self-adjust over the entire feasible region to maintain the informative quality of the output forecast distribution. This paper first designs a prediction market system using SIPS suitable for demand forecasting, then verifies how the system operates through agent-based simulation. The remainder of the paper will be organized as follows. First, SIPS and the prediction market system using them will be described. Next, the agent-based simulation model for testing the system will be presented. Following this, simulation experiments and their results will be given. Finally, conclusions drawn from these activities will be offered. 2 Prediction Market System Using SIPS Self-Adjustable Interval Prediction Security Suppose that ten prediction intervals, I_1 = (-∞, x_1], I_2 = (x_1, x_2], …, I_10 = (x_9, +∞), of the demand quantity x to be estimated are defined and their corresponding prediction securities are issued. Each unit of these securities is a contract that will pay off a unit amount of money if and only if the realized value of x is actually contained in the corresponding interval. In this situation, the most pessimistic and optimistic prior estimates are x_1 and x_9 respectively, and the initial possible scope is (x_1, x_9]. At the beginning, intervals I_2, I_3, ..., I_9 have equal width (w_0). If the contracts were FIPS, the borders between intervals x_1, x_2, ..., x_9 would not be modified until the entire market session is finished. In the case of SIPS, however, the entire market session is divided into several rounds and the divide-merge operation described below is applied at the end of each round so that the borders between the intervals can be appropriately updated. Step 1: Find the interval I_n having the highest value of D_n, which is the evaluation measure to be defined later. If the value of D_n is greater than a predetermined threshold, go to Step 2. Otherwise, repeat this step after the next round. Step 2: Divide the chosen interval I_n:

x_{k+1} ← x_k   (k = 9, 8, …, n)    (1)

x_n ← x_{n+1} - w_0 (n = 1);   x_n ← (x_{n-1} + x_{n+1}) / 2 (n = 2, …, 9);   x_n ← x_{n-1} + w_0 (n = 10)    (2)

Step 3: Merge the pair of consecutive intervals I_m and I_{m+1} having the least counter-effect on the evaluation measure:

x_k ← x_{k+1}   (k = m, m + 1, …, 10)    (3)

Each unit of the prediction security owned by a participant assigned to an interval divided at Step 2 is automatically exchanged for the pair of securities corresponding to the sub-intervals that resulted from the split. Similarly, a pair of securities tied to the intervals merged at Step 3 is altered into a unit of the new security corresponding to the merged interval. When a participant holds an uneven number of securities to be merged, some redundant units of either one of them will remain.
When this occurs, the redundant units are also exchanged for the new merged security so that the prices of the other securities and the proportion of the market value of the participant's assets will not change. The evaluation measure, D_n, is defined as follows. The objective of the divide-merge operation is to readjust the definition of the prediction intervals so that the hidden collective forecast f(x) can be accurately captured by the price density function g(x), which is defined as:

g(x) = g_n ≡ p_n / w_n,   x ∈ I_n    (4)

where p_n is the unitary price of the nth security and w_n is the width of the nth interval (the preset finite value w_0 is used for w_1 and w_10, instead of ∞, for convenience). Thus, the effect of the operation can be measured by the extent to which the distance between f(x) and g(x) is reduced by the operation. When measuring this effect for each interval, since f(x) is an unknown function, a piecewise quadratic approximation f̃(x) constructed from g(x) is used instead, as shown in Fig. 1, where:

∫ from x_{n-1} to x_n of f̃(x) dx = ∫ from x_{n-1} to x_n of g(x) dx = w_n · g_n = p_n    (5)

and where L_n and R_n (Equations (6) and (7)) denote the values of the approximation at the left and right ends of the nth interval, obtained from the price densities g_{n-1}, g_n, g_{n+1} and the widths of the neighbouring intervals, with the outermost intervals (n = 1 and n = 10) treated as boundary cases. Fig. 1. Piecewise quadratic approximation of f(x) before and after division. According to this approximation, it is possible to estimate how the division of the nth interval will change the shape of g(x), as shown in Fig. 1. Thus, the effect of the division can be evaluated by comparing the distance from g(x) to f̃(x) between the two graphs in Fig. 1. The reduction in distance resulting from the division can be calculated with a formal distance measure between the distributions, such as a Kullback-Leibler distance, L1-norm, L2-norm, and so on. For the sake of simplicity, this paper uses the L1-distance between g(x) before and after the division as a surrogate measure. Thus, the evaluation measure D_n of dividing the nth interval is defined by:

D_n = | ∫ from x_{n-1} to x_{n-1} + w_n/2 of f̃(x) dx  -  ∫ from x_{n-1} + w_n/2 to x_n of f̃(x) dx | = (1/4) |R_n - L_n| w_n    (8)

The effect of merging a pair of consecutive intervals can be similarly evaluated.
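As a rough illustration of the divide step, the sketch below computes the price densities g_n = p_n / w_n for the bounded intervals, approximates the interval-end values L_n and R_n by simple interpolation of neighbouring densities (an assumption of this sketch, since the exact forms of Equations (6)-(7) are not reproduced above), evaluates D_n = |R_n - L_n|·w_n/4, and selects the interval to divide if the largest D_n exceeds the threshold. Variable names and the example prices are illustrative only.

```python
# Sketch of the divide-candidate selection for SIPS (illustrative assumptions).
import numpy as np

def divide_candidate(borders, prices, w0=50.0, threshold=0.03):
    """borders: x_1..x_9; prices: p_1..p_10. Returns (index within I_2..I_9, D) or None."""
    widths = np.diff(borders)              # widths of the bounded intervals I_2..I_9
    g = prices[1:-1] / widths              # price densities of I_2..I_9
    D = np.zeros_like(g)
    for j in range(len(g)):
        left  = g[j - 1] if j > 0 else prices[0] / w0        # outer intervals use w0
        right = g[j + 1] if j < len(g) - 1 else prices[-1] / w0
        L, R = 0.5 * (left + g[j]), 0.5 * (g[j] + right)     # interpolated end values
        D[j] = abs(R - L) * widths[j] / 4.0
    n = int(np.argmax(D))
    return (n, float(D[n])) if D[n] > threshold else None

borders = np.array([100., 150., 200., 250., 300., 350., 400., 450., 500.])
prices  = np.array([0.02, 0.03, 0.05, 0.30, 0.35, 0.15, 0.05, 0.03, 0.01, 0.01])
print(divide_candidate(borders, prices))   # which bounded interval to split, if any
```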
Central Market Maker for SIPS Due to the occasional activation of the divide-merge operation, trading SIPS through a simple continuous double auction among the participants would be confusing. Thus, this paper provides a computerized market system equipped with a central market maker for trading SIPS. A central market maker in a prediction market accepts any bid/ask request from a participant as long as she/he agrees with the price offered by the market maker. Accordingly, it resolves the liquidity problem even in a thin market setting. One of the most well-known and widely-used market-making algorithms for a prediction market is the Logarithmic Market Scoring Rule (LMSR) proposed by Hanson [7][8]. This market maker can handle FIPS. To illustrate, suppose that there are K participants, and participant k has q_kn units of the security assigned to the nth interval. Given this, the total quantity of the nth security that has been sold thus far is:

Q_n = Σ_{k=1}^{K} q_kn    (9)

The LMSR defines a cost function based on this variable:

C(Q) = b · log Σ_{n=1}^{10} exp(Q_n / b)    (10)

where Q = (Q_1, Q_2, …, Q_10), and determines how much to charge when a participant buys Δq units of the securities using the cost function:

Cost = C(Q + Δq) - C(Q)    (11)

Thus, the unitary price of the nth security is determined by:

p_n = ∂C(Q)/∂Q_n = exp(Q_n / b) / Σ_{i=1}^{10} exp(Q_i / b)    (12)

Unfortunately, simply applying LMSR to SIPS cannot ensure continuity of the securities' unitary prices, in addition to the market value of each participant's assets, before and after the divide-merge operation. Thus, to resolve this problem, an explicit prior forecast distribution f_0(x) can be introduced and the cost function can be extended accordingly:

C(Q) = b · log Σ_{n=1}^{10} r_n exp(Q_n / b)    (13)

where:

r_n = ∫ from x_{n-1} to x_n of f_0(x) dx    (14)

Then, the unitary price of the nth security is given by:

p_n = r_n exp(Q_n / b) / Σ_{i=1}^{10} r_i exp(Q_i / b)    (15)

Because of this extension, the division of a certain interval will not affect the respective prices of the securities corresponding to the other intervals, and the sum of the prices of the divided securities is equivalent to the price of the original security prior to the division. Gao et al. [9] also extended the LMSR to the real line for interval betting. The fundamental concept of their approach was similar to ours, but our approach is specifically tailored for handling SIPS by introducing an explicit prior forecast distribution. 3 Agent-Based Simulation Simulation Model To test how the proposed prediction market system using SIPS operates, an agent simulation model was developed. In this model, some computerized trading agents traded SIPS with the central market maker introduced above. Each agent had its own subjective forecast distribution and evaluated its risky assets according to the logarithmic utility function and this subjective forecast distribution. For each agent's trading turn, it chose from three options: (1) buying a unit of the security corresponding to a certain prediction interval, (2) selling a unit of the security corresponding to a certain prediction interval, and (3) buying and selling nothing. Among these, the option that was chosen maximized the posterior subjective expected utility. Simulation Experiments Ten trading agents were modeled in the simulation experiments. They were endowed with 100 P$ at the beginning of a thirty-round market session (P$ is the unit of the play money used in the simulation, and the unitary payoff of the security is 1 P$). There were 100 trading turns in each round, and the turns were randomly distributed among the agents. The prior forecast distribution was set as f_0(x) = N(150, 100); the value of the LMSR parameter (b) was set at 100; and the threshold value of the divide operation was set as 0.03. The objective of the simulation experiments was to confirm whether the proposed prediction market system using SIPS can appropriately adjust the scope of forecast according to transaction history.
Thus, all the agents were assigned the same Gaussian distribution f(x) = N(200, 20) as the subjective forecast distribution, and initial scope of forecast (x 1 , x 9 ] which was set as (a) being away from the forecast (300, 400], (b) being too wide compared to the forecast (100, 600], or (c) being too narrow compared to the forecast (190,210]. The output distributions obtained by FIPS and SIPS are illustrated in Fig. 2, 3 and 4. Simulation results confirmed that in all cases, the scope of the forecast is actually modified step-by-step when SIPS is used. After several rounds, it successfully captured the location of the given subjective forecast distribution. How many rounds it took until the scope has been properly readjusted was contingent upon on the difference between the initial scope and the given distribution. Conclusions This paper details the design of a prediction market system using SIPS equipped with a central market maker and confirms how the system worked through agent-based simulation. As a result, this chapter has demonstrated that the proposed system can properly adjust the scope of a forecast according to transaction history. This paper assumes that the divide-merge operation splits an interval into two equal-length subintervals, but the point at which the interval should be divided can also be treated as a variable that can affect the performance of SIPS. In addition, the number of intervals need not be ten, and changing the number is relatively straightforward. The system proposed above is now ready to be tested in a real-world setting. Promising areas in which this system could be applied include the estimation of a new product's demand quantity for a given time period after its launch, the estimation of an existing product's demand quantity during a planned sales promotion, and so on. Further, because the performance of the system also depends on the quality of its users' knowledge, it is not suitable for a product on which knowledgeable users are difficult to find. Finally, the proposed system can currently only handle the demand quantity for a single product in a single time period. To utilize the system in a more traditional setting (i.e., one that includes multiple products and periods), it should be extended to handle multiple related demand quantities in parallel. This presents an interesting challenge that should be undertaken in future research in this area. Fig. 2 . 2 Fig. 2. Output distributions in case (a). Fig. 3 . 3 Fig. 3. Output distributions in case (b). Fig. 4 . 4 Fig. 4. Output distributions in case (c). Acknowledgments. This research was partially supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Scientific Research (B) 20310087.
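To make the market-making rule of Equations (13)-(15) concrete, the following is a minimal sketch of an extended-LMSR market maker: it keeps the vector of outstanding quantities Q, quotes prices proportional to r_n·exp(Q_n/b), and charges C(Q + Δq) - C(Q) for a purchase. The class and variable names, the uniform prior weights and the example trade are assumptions for illustration, not the system described in the paper.

```python
# Minimal sketch of an LMSR market maker extended with prior weights r_n.
import numpy as np

class ExtendedLMSR:
    def __init__(self, prior_weights, b=100.0):
        self.r = np.asarray(prior_weights, dtype=float)   # r_n from the prior f0
        self.b = b
        self.Q = np.zeros_like(self.r)                    # outstanding units per interval

    def cost(self, Q):
        # C(Q) = b * log(sum_n r_n * exp(Q_n / b))   (Equation 13)
        return self.b * np.log(np.sum(self.r * np.exp(Q / self.b)))

    def prices(self):
        # p_n proportional to r_n * exp(Q_n / b)     (Equation 15)
        w = self.r * np.exp(self.Q / self.b)
        return w / w.sum()

    def buy(self, n, units=1.0):
        """Charge (in P$) for buying `units` of the security on interval n."""
        new_Q = self.Q.copy()
        new_Q[n] += units
        charge = self.cost(new_Q) - self.cost(self.Q)     # Equation 11
        self.Q = new_Q
        return charge

mm = ExtendedLMSR(prior_weights=np.full(10, 0.1), b=100.0)
print(mm.prices())            # uniform prior -> all prices 0.10
print(mm.buy(n=4, units=20))  # cost of 20 units on the 5th interval
print(mm.prices()[4])         # its price has risen above 0.10
```

The extension matters for SIPS precisely because the prior weights r_n are re-integrated over the new interval borders after each divide-merge step, which keeps prices and asset values continuous across the operation.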
16,189
[ "996342" ]
[ "481270" ]
01472286
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472286/file/978-3-642-40352-1_65_Chapter.pdf
Quang-Vinh Dang Izabela Nielsen email: [email protected] Kenn Steger-Jensen Multi-Objective Genetic Algorithm for Real-World Mobile Robot Scheduling Problem Keywords: Multi-objective, Scheduling, Mobile Robot, Genetic Algorithm This paper deals with the problem of scheduling feeding tasks of a single mobile robot which has capability of supplying parts to feeders on production lines. The performance criterion is to minimize the total traveling time of the robot and the total tardiness of the feeding tasks being scheduled, simultaneously. In operation, the feeders have to be replenished a number of times so as to maintain the manufacture of products during a planning horizon. A method based on predefined characteristics of the feeders is presented to generate dynamic time windows of the feeding tasks which are dependent on starting times of previous replenishment. A heuristic based on genetic algorithm which could be used to produce schedules in online production mode is proposed to quickly obtain efficient solutions. Several numerical examples are conducted to demonstrate results of the proposed approach. Introduction The automation technology in combination with advances in production management has dramatically changed the equipment used by manufacturing companies as well as the issues in planning and control. With these changes, highly automated and unmanned production systems have become more popular in several industrial areas, e.g., automotive, robot, and pump manufacturing [START_REF] Crama | Cyclic Scheduling in Robotic Flowshops[END_REF]. An automatic production system consists of intelligent and flexible machines and mobile robots grouped into cells in such a way that entire production of each product can be performed within one of the cells. With embedded batteries and manipulation arms, mobile robots are capable of performing various tasks such as transporting and feeding materials, tending machines, pre-assembling, or inspecting quality at different workstations. They have been thus employed in not only small companies which focus on exact applications and a small range of products, but also large companies which can diversify applications in a longer term and larger range. Within the scope of this study, a given problem is particularly considered for a single mobile robot which will automate partfeeding tasks by not only transporting but also collecting containers of parts and emptying them into the feeders needed. However, to utilize mobile robots in an efficient manner requires the ability to properly schedule feeding tasks. Hence, it is important to plan in which sequence mobile robots process feeding operations so that they could effectively work while satisfying a number of practical constraints. The problem of scheduling part-feeding tasks of the mobile robot has been modeled in some respects comparable to the Asymmetric Traveling Salesman Problem (ATSP) which belongs to the class of NP-hard combinatorial optimization problems [START_REF] Germs | Lower Tolerance-Based Branch and Bound Algorithms for the ATSP[END_REF]. Among heuristic approaches, Genetic Algorithm (GA) has been widely used in the research areas of TSP, ATSP, or robot task-sequencing problems. Liu and Zheng [START_REF] Liu | Study of Genetic Algorithm with Reinforcement Learning to Solve the TSP[END_REF], Moon et al. 
[START_REF] Moon | An Efficient Genetic Algorithm for the Traveling Salesman Problem with Precedence Constraints[END_REF], and Snyder and Daskin [START_REF] Snyder | A Random-Key Genetic Algorithm for the Generalized Traveling Salesman Problem[END_REF] discussed about using GAs to solve TSP, while Choi et al. [START_REF] Choi | A Genetic Algorithm with a Mixed Region Search for the Asymmetric Traveling Salesman Problem[END_REF] and Xing et al. [START_REF] Xing | A Hybrid Approach Combining an Improved Genetic Algorithm and Optimization Strategies for the Asymmetric Traveling Salesman Problem[END_REF] proposed GAs to deal with ATSP. Zacharia and Aspragathos [START_REF] Zacharia | Optimal Robot Task Scheduling Based on Genetic Algorithms[END_REF] introduced a method based on GA and an innovative encoding to determine the optimal sequence of manipulator's task points which is considered an extension to the TSP. Beside genetic algorithms, Bocewiz [START_REF] Bocewicz | Production Flow Prototyping subject to Imprecise Activity Specification[END_REF] presented the knowledge-based and constraint programming-driven methodology in planning and scheduling of multi-robot in a multi-product job shop taking into account imprecise activity specifications and resource sharing. Hurink and Knust [START_REF] Hurink | A Tabu Search Algorithm for Scheduling a Single Robot in a Jobshop Environment[END_REF] proposed a tabu search algorithm for scheduling a single robot in a job-shop environment considering time windows and additionally generalized precedence constraints. Maimon et al. [START_REF] Maimon | A Neural Network Approach for a Robot Task Sequencing Problem[END_REF] also presented a neural network approach with successful implementation for the robot task-sequencing problem. Although there are many related research, the problem of scheduling a single mobile robot with dynamic time windows and restricted capacity where multiple routes have to be carried out has surprisingly received little attention in the literature despite its important applications in practice, e.g. part-feeding task. Such a task must be executed a number of times within time windows which are dependent on starting times of the previous executions of that task, hence, the term, dynamic time windows. The objectives of minimizing the total traveling time of the robot and the total tardiness of the tasks are taken into account to support the global objective of maximizing system throughput. The existing approaches are not well suited and cannot be directly used to solve the problem. Thus, in this paper, a heuristic based on GA, a possibly promising approach to the class of multi-objective optimization, is developed to find efficient solutions for the problem. The advantageous feature of GA is the multiple directional and global search by maintaining a population of potential solutions from generation to generation. Such population-to-population approach is useful to explore all nondominated solutions of the problem [START_REF] Gen | Network Models and Optimization[END_REF]. The remainder of this paper is organized as follows: in the next section, problem statement is described while a genetic algorithm-based heuristic is presented in Section 3. Numerical examples are conducted to demonstrate results of the proposed approach in Section 4. Finally, conclusions are drawn in Section 5. Problem Statement The work is developed for a cell which produces parts or components for the pump manufacturing industry at a factory in Denmark. 
The essential elements considered in the manufacturing cell consist of an autonomous mobile robot with limitation on carrying capacity, a central warehouse designed to store small load carriers (SLCs), and multiple feeders designed to automatically feed parts to machines of production lines. Besides, every feeder has three main characteristics including maximum level, minimum level, and part-feeding rate to machine. In operation, the robot will retrieve and carry one or several SLCs containing parts from the warehouse, move to feeder locations, empty all parts inside SLCs, then return to the warehouse to unload empty SLCs and load filled ones. To maintain the manufacture of a quantity of products during a given planning horizon, the feeders (tasks) have to be replenished a number of times, the robot consequently has a set of subtasks of tasks to be carried out within time windows. Such a time window of a subtask of a task could be only determined after starting time of the previous subtask of that task. Fig. 1 below shows a layout of the described manufacturing cell. Central warehouse Small Load Carrier Mobile Robot Feeder Fig. 1. Layout of the manufacturing cell To enable the construction of a feeding schedule for the mobile robot, assumptions are considered as follows:  The robot can carry one or several SLC(s) at a time.  All tasks are periodic, independent, and assigned to the same robot.  Working time, traveling time between any pairs of locations of the robot, and partfeeding rate to machine of a feeder are known.  All feeders of machines must be fed up to maximum levels and the robot starts from the ware house at the initial stage. In order to accomplish all the movements with a smallest consumed mount of battery energy, the total traveling time of the robot is an important objective to be considered. Apart from that, another performance measure is the amount of time a feeder has been waiting to be replenished by the robot. Alternatively, due time of a time window of a feeding task could be considered soft constraint, i.e. schedules that do not meet this constraint are taken into account. In addition, making decisions on which way the robot should provide parts to feeders is a part of real-time operations of production planners. Moreover, concerning the problem belong to NP-hard class, computation time exponentially grows with the size of the problem (e.g. larger number of feeders). It is therefore necessary to develop a computationally effective algorithm, namely GA-based heuristic, which determines in which sequence the feeders should be supplied so as to minimize the total traveling time of the robot and the total tardiness of feeding tasks while satisfying a number of practical constraints. Genetic Algorithm-based Heuristic In this section, genetic algorithm, a random search method taking over the principle of biological evolution [START_REF] Goldberg | Genetic Algorithms in Search, Optimization, and Machine Learning[END_REF], is applied to develop a heuristic which is allowed to convert the aforementioned problem to the way that efficient solutions could be found. The GA-based heuristic shown in Fig. 2 Genetic Representation and Initialization For the problem under consideration, a solution can be represented by a chromosome of non-negative integers ( ) which is an ordering of part-feeding tasks of the robot where : feeder index; ; : number of feeders. 
The original length of a chromosome is equal to the total number of subtasks over all tasks, Σ_i n_i (where n_i denotes the number of subtasks of task i), plus the first subtask of the task at the central warehouse. For the initial generation, the genes on a chromosome are randomly filled with tasks at feeders. The frequency of such a task is the number of subtasks of that task, in other words, the number of times that task has to be executed. Constraint Handling and Fitness Assignment After initialization or crossover and mutation operations, chromosomes are handled to be valid and then assigned fitness values. A valid chromosome should satisfy two constraints: the limitation on the carrying capacity Q_m of the robot, and the time windows of the subtasks of the part-feeding tasks. For the first type of constraint, to guarantee that the robot does not serve more feeders than the number of SLCs it carries in one route, the subtasks of the task at the warehouse, represented by zeroes, are inserted into a chromosome after every Q_m genes starting from the first gene. For instance, if the limitation on the carrying capacity of the robot is two SLCs of parts, a zero is inserted after every two feeder genes of the chromosome. The second type of constraint requires a subtask of a task to be started after the release time and completed by the due time of that subtask, if possible. As mentioned, due time constraints are considered soft constraints. They could thus be modeled as an objective of the total tardiness of part-feeding tasks. The release time and due time of subtask k of task i are determined from the starting time of subtask k - 1 of that task together with the characteristics of feeder i: its maximum level Q_i^max, its minimum level Q_i^min, and its part-feeding rate v_i to the machine. In particular, feeder i, refilled at the start of subtask k - 1, must be replenished again before its level drains from Q_i^max down to Q_i^min at rate v_i. After the constraint handling procedure, the objectives of the total traveling time of the robot and the total tardiness of the part-feeding tasks are calculated one after another for every chromosome in the population. A weighted-sum fitness function F is then used to assign a fitness value to each chromosome, as shown in Equation (3), where t is the traveling time of the robot from one location to another, w_i is the working time of the robot per SLC at feeder i, and α is the weighted coefficient:

F = α · (total traveling and working time of the robot) + (1 - α) · (total tardiness of the subtasks, i.e. Σ max{0, completion time - due time})    (3)

Genetic Operators Selection, crossover, and mutation are the three main genetic operators. For selection, various evolutionary methods could be applied to this problem. (μ + λ) selection is used to choose chromosomes for reproduction. Such a selection mechanism guarantees that the best solutions found so far are always in the parent generation [13][14]. The crossover operator generates offspring by combining the information contained in the parent chromosomes so that the offspring inherit good features from their parents. Roulette-wheel selection is used to select the parent chromosomes based on their weighted-sum fitness values. Order crossover (OX) [12], operated with probability P_c, is employed to generate an offspring as follows. Genes having zero values are removed before two cut points are randomly chosen on the parent chromosomes.
A string between these cut points in one of the parent chromosomes is first copied to the offspring, the remaining positions are then filled according to the sequence of genes in the other parent starting after the second cut point. When an offspring is produced, it undergoes insertion mutation [START_REF] Gen | Network Models and Optimization[END_REF] with probability P m which selects a gene at random and inserts it in a random position. Termination Criteria Termination criteria are employed to determine when the GA-based heuristic should be stopped. Note that making decisions on which sequences the robot should serve feeders is a part of real-time operations of production planners. Therefore, on the one hand if the best solutions over generations do not converge to a value, the maximum generation G m would be used to stop the run. On the other hand, if the best solution does not improve over G c consecutive generations, it would not be valuable to continue searching. Numerical Examples The performance of the GA-based heuristic will be tested on several problem instances in this section. Three problems, which are as similar to the real-world case as they can be, are generated with difference number of feeders (namely 3, 5, and 10 feeders), and other system parameters such as limitation on carrying capacity, working time, traveling time of the mobile robot, planning horizon, and characteristics of feeders. The robot is designed to carry up to 3 SLCs at a time to perform part-feeding tasks during a given planning horizon of one hour (corresponding to an eighth of a full production shift). The maximum and minimum levels of parts of feeders are respectively distributed within the ranges of [300, 2000] and [100, 1000] while part-feeding rates in seconds are in-between the interval [1.5, For GA parameters, the population size, P c , P m , G m , and G c are set to be 100, 0.6, 0.2, 500, and 100, respectively. The weighted-sum fitness function F (Equation 3) will be calculated using one of three different values of the weight coefficient , namely, 0.2, 0.5, and 0.8. The proposed heuristic has been coded in VB.NET, and all the problem instances run on a PC having an Intel® Core i5 2.67 GHz processor and 4 GB RAM. The results for three randomly generated problems in combination with three values of the weighted coefficient are presented in Table 1 below. The total traveling time of the robot, total tardiness of tasks, and computation time shown in Table 1 are the average of 10 runs. It can be observed that as the weighted coefficient increases, two objectives of each problem instance have opposite trends where the total traveling time of the robot decreases and the total tardiness of tasks increases. In other words, as saving battery energy allowing the robot to be utilized in a longer duration is more important, the robot has a tendency to travel less and vice versa. Similar explanation is also applicable to the total tardiness of part-feeding tasks. Such kinds of solutions in Table 1 are non-dominated solutions for which no improvement in any objective function is possible without sacrificing the other objective function. It also shows that when the size of the problems grows, the computation time of the GA-based heuristic becomes longer, but it is still acceptable (i.e., the largest problem size with 10 feeders and the coefficient of 0.5 requires 2.7 seconds in average to find the efficient solution). 
These results provide evidence that the GA-based heuristic can be used to produce efficient schedules within reasonable time in online production mode. The above solutions are initial schedules for the robot. These schedules serve as input to a Mission Planner and Control (MPC) program which uses XML-based TCP/IP communication to interact with the robot, the Manufacturing Execution System (MES), and the module of the GA-based heuristic. In practice, there might be errors in manufacturing such as machine breakdowns, or changes in manufacturing conditions such as the characteristics of feeders (e.g. minimum levels of parts) or the carrying capacity of the robot. These events will be reported by the MES so that the MPC program can update the current states of the shop floor and then call the heuristic module to reschedule the part-feeding tasks of the robot. By relaxing the last assumption mentioned in Section 2, the proposed heuristic in turn will use the current states as new input, re-optimize to obtain alternative schedules, and send these schedules back to the MPC program. Conclusions In this paper, a problem of scheduling a single mobile robot to carry out part-feeding tasks of production lines is studied. To maintain the manufacture of products, it is important for planners to determine feeding sequences which minimize the total traveling time of the robot and the total tardiness of the feeding tasks while taking into account a number of practical constraints. The main novelty of this research lies in the consideration of dynamic time windows and the limitation on carrying capacity where multiple routes have to be performed by the single mobile robot. A genetic algorithm-based heuristic was proposed to find efficient solutions for the problem. The results of the numerical examples showed that the proposed heuristic is fast enough to be used to generate efficient schedules compromising between the objectives in online production mode. The heuristic may also be used to produce alternative schedules in rescheduling scenarios when there are errors or changes in manufacturing conditions. Moreover, the heuristic could be extended to deal with more performance criteria according to the requirements of planners, and by investigating different scenarios with various weighted coefficients of those criteria, it can indicate which schemes are more beneficial for manufacturing. For further research, a general model for scheduling multiple mobile robots should be considered together with rescheduling mechanisms to deal with real-time disturbances. Fig. 2. Flow chart of the GA-based heuristic. The working times of the robot in seconds at the feeders and the warehouse per SLC are respectively distributed within the ranges [40, 60] and [25, 40], while the traveling times of the robot in seconds lie in the interval [20, 60]. Note that the cost matrix of the generated traveling times should satisfy the triangle inequality. Table 1. The best solutions of the three generated problems, given as weighted coefficient: total traveling time of the robot (seconds), total tardiness of tasks (seconds), computation time (seconds). Problem 1 (3 feeders, 11 subtasks of tasks): 0.2: 432, 0, 0.49; 0.5: 432, 0, 0.63; 0.8: 428, 6, 0.41. Problem 2 (5 feeders, 24 subtasks of tasks): 0.2: 706, 0, 1.00; 0.5: 690, 4, 1.12; 0.8: 682, 15, 1.10. Acknowledgments This work has partly been supported by the European Commission under grant agreement number FP7-260026-TAPAS.
20,670
[ "1002194", "991718" ]
[ "300821", "300821", "300821" ]
01472288
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472288/file/978-3-642-40352-1_67_Chapter.pdf
Grzegorz Bocewicz email: [email protected] Zbigniew A Banaszak email: [email protected] Peter Nielsen Quang-Vinh Dang Multimodal Processes Rescheduling INTRODUCTION A cyclic schedule [START_REF] Alpan | Dynamic analysis of timed Petri nets: a case of two processes and a shared resource[END_REF], [START_REF] Fournier | Cyclic scheduling following the social behavior of ant colonies[END_REF] is one in which the same sequence of states is repeated over and over again. In everyday practice, cyclic scheduling arises in different application domains such as manufacturing, time-sharing of processors in embedded systems, and compilers scheduling loop operations for parallel or pipelined architectures, as well as service domains covering such areas as workforce scheduling (e.g., shift scheduling, crew scheduling), timetabling (e.g., train timetabling, aircraft routing and scheduling), and reservations (e.g., reservations with or without slack) [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF], [START_REF] Liebchen | A case study in periodic timetabling[END_REF], [START_REF] Polak | The performance evaluation tool for automated prototyping of concurrent cyclic processes[END_REF]. The cyclic scheduling problems considered in this paper arise from Flexible Manufacturing Systems (FMS) [START_REF] Bocewicz | Declarative approach to cyclic steady states space refinement: periodic processes scheduling[END_REF] and the Automated Guided Vehicle (AGV) systems they employ for material handling. AGVs provide asynchronous movement of pallets of products through a network of guide paths between the workstations. Such flows following production routes are treated as multimodal processes, i.e. sequences of alternating transportation and machining operations. The resulting material transportation routing and scheduling problems are NP-hard. Since the steady state of production flows treated as multimodal processes has a periodic character, the AGV transportation processes servicing them (usually executed along loop-like routes) also exhibit cyclic behavior. Many models and methods have been proposed to solve the cyclic scheduling problem [START_REF] Levner | Complexity of cyclic scheduling problems: A state-of-the-art survey[END_REF]. Among them, the mathematical programming approach (usually IP and MIP), max-plus algebra [START_REF] Polak | The performance evaluation tool for automated prototyping of concurrent cyclic processes[END_REF], constraint logic programming [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF], [START_REF] Wójcik | Constraint programming approach to designing conflict-free schedules for repetitive manufacturing processes. Digital enterprise technology[END_REF], evolutionary algorithms and Petri net [START_REF] Alpan | Dynamic analysis of timed Petri nets: a case of two processes and a shared resource[END_REF] frameworks are among the most frequently used. A majority of them are oriented towards finding a minimal cycle or maximal throughput while assuming a deadlock-free flow of processes.
In that context, our main contribution is to propose a new declarative modeling based framework enabled to evaluate the cyclic steady state of a given system of concurrent cyclic processes as well as supported by them multimodal cyclic processes. The following questions are of main interest: Does the assumed system behavior can be achieved under the given system's structure constraints? Does the assumed multimodal processes cyclic steady state is reachable from another one? MULTIMODAL PROCESSES Let us consider the above mentioned questions on in the context of Automated Guided Vehicles (AGVs) periodically circulating along cyclic routes (see Fig. 1b) that can be seen as a network of loosely coupled material transportation/handling subsystem modeled in terms of Systems of Cyclic Concurrent Processes (SCCPs) shown in Fig. 1a. Four local cyclic processes [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF] are considered: , , , and two multimodal processes [START_REF] Bocewicz | Declarative approach to cyclic steady states space refinement: periodic processes scheduling[END_REF] (executed along the parts of local cyclic processes) , , respectively. , contain two sub-processes , representing AGVs moving along the same route. The AGVs are used to transport workpieces along transportation routes followed by , processes, respectively (Fig 1 . b). Declarative modeling The following notations are used [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF]: -the pick-up/delivery points -the Automated Guided Vehicle (AGV) b) a) -the transportation sector (resource) -the resource occupied by the stream and controlled by the priority dispatching rule -the routes of the multimodal processes, respectively: ,  specifies the route of the local process's stream ( -th stream of the -th local process ), and its components define the resources used in course of process operations execution, where: (the set of resources: )denotes the resource used by the -th stream of -th local process in the -th operation; in the rest of the paper the -th operation executed on resource in the stream will be denoted by ; -denotes a length of cyclic process route. specifies the process operation times, where denotes the time of execution of operation .  specifies the route of the multimodal process , where: , , for , , , The transportation route is a sequence of sub-sequences of local cyclic process routes. For example, a route of the process (Fig. 1a) is following: .  is the set of the priority dispatching rules, where is the sequence components of which determine an order in which the processes can be executed on the resource , . In that context a SCCP can be defined as a pair [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF]: , ( 1 ) where: characterizes the SCCP structure, i.e. the set of resources (e.g. the transportation sectors), the set of local processes (e.g. AGVs ), the set of local process routes (e.g. routes of AGVs ),, the set of local process operations times, the set of dispatching priority rules. characterizes the SCCP behavior, i.e. the set of multimodal processes, (workpieces), the set of multimodal process routes (transportation routes of materials). 
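As an illustration of the SCCP pair defined in (1), the following is a minimal Python sketch of how the structural and behavioral parts of such a system could be represented in code. The class and field names, and the toy instance of two AGV loops sharing one sector, are assumptions chosen for readability; they are not taken from the paper or from any existing library.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SccpStructure:
    """Structural part: resources, local (AGV) processes and their data."""
    resources: List[str]                                                    # e.g. transportation sectors
    local_routes: Dict[str, List[str]] = field(default_factory=dict)       # process -> resource sequence
    operation_times: Dict[Tuple[str, str], int] = field(default_factory=dict)  # (process, resource) -> time
    dispatching_rules: Dict[str, List[str]] = field(default_factory=dict)  # resource -> priority order

@dataclass
class SccpBehavior:
    """Behavioral part: multimodal processes (workpieces) and their routes,
    each route being a sequence of fragments of local process routes."""
    multimodal_routes: Dict[str, List[List[str]]] = field(default_factory=dict)

@dataclass
class Sccp:
    structure: SccpStructure
    behavior: SccpBehavior

# Toy instance: two AGV loops sharing sector "R3", one workpiece routed across both.
example = Sccp(
    structure=SccpStructure(
        resources=["R1", "R2", "R3", "R4"],
        local_routes={"P1": ["R1", "R3", "R2"], "P2": ["R3", "R4"]},
        operation_times={("P1", "R1"): 1, ("P1", "R3"): 2, ("P1", "R2"): 1,
                         ("P2", "R3"): 1, ("P2", "R4"): 1},
        dispatching_rules={"R3": ["P1", "P2"]},
    ),
    behavior=SccpBehavior(multimodal_routes={"mP1": [["R1", "R3"], ["R3", "R4"]]}),
)
```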
The main question concerns SCCP cyclic behavior and a way this behavior depends on direction of local transportation routes , the priority rules , and a set of initial states, i.e., an initial allocation of processes to the system resources. Cyclic steady states space Consider the following SCCPs state definition describing both the local and multimodal processes allocation: , where:  is the state of local processes, corresponding to , , where: the processes allocation in the -th state, , the -th resource is occupied by the local stream , and the -th resource is unoccupied. the sequence of semaphores corresponding to the -th state, means the name of the stream (specified in the -th dispatching rule , allocated to the -th resource) allowed to occupy the -th resource; for instance means that at the moment stream is allowed to occupy the -th resource. the sequence of semaphore indices, corresponding to the -th state, determines the position of the semaphore in the priority dispatching rule , , . the sequence of multimodal processes allocation: , allocation of the process , i.e.: , where: is a number of resources , , means, the -th resource is occupied by the -th multimodal process , and the -th resource is released by the -th multimodal process . The introduced concept of the -th state is enables to create a space of feasible states [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF]. In this kind of space two kinds of behaviors can be considered: a cyclic steady state and a deadlock state [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF]. The set , is called a reachability state space of multimodal processes generated by an initial state , if the following condition holds: [START_REF] Fournier | Cyclic scheduling following the social behavior of ant colonies[END_REF] where: the next state transition defined in [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF], and means transitions linking two feasible states , . The set , is called a cyclic steady state of multimodal processes (i.e., the cyclic steady state of a ) with the period , . In other words, a cyclic steady state contains such a set of states in which starting from any distinguished state it is possible to reach the rest of states and finally reach this distinguished state again . The cyclic steady state specified by the period of local processes execution is defined in the similar way. Graphically, the cyclic steady states and are described by cyclic and spiral digraphs, respectively. Two cyclic steady states of SCCP from Fig. 1 a) are presented in Fig. 2. Moreover, since an initial state leads either to or to a deadlock state , i.e. , multimodal processes also may reach a deadlock state. Problem statement Consider a SCCP specified (due to (1)) by the given set of resources, dispatching rules , local , and multimodal processes routes as well. Usually the main question concerns SCCP periodicity, i.e. Does the cyclic execution of local processes exist? Response to the above question requires answers to more detailed questions, for instance: What is the admissible initial allocation of processes (i.e. the possible AGVs dockings)? What are dispatching rules guaranteeing a given SCCP periodicity (in local/multimodal sense)? 
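The reachability questions raised by this state definition can be examined generically by searching the space induced by the next-state transition. The Python fragment below is an illustrative sketch only: the `next_states` function standing in for the transition relation, the hashable state encoding, and the deterministic-transition assumption in `cyclic_steady_state` are choices made for the example, not the formulation used in the paper.

```python
from collections import deque

def is_reachable(start, goal, next_states):
    """Breadth-first search over the state space: True if `goal` can be reached
    from `start` by repeatedly applying next_states(state) -> iterable of successors.
    States must be hashable (e.g. tuples of allocations, semaphores and indices)."""
    frontier = deque([start])
    visited = {start}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return True
        for nxt in next_states(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return False

def cyclic_steady_state(start, next_state):
    """For a deterministic transition, follow successors from `start` until a state
    repeats; the repeated suffix is the cyclic steady state (its length is the period),
    while the prefix consists of transient states. A deadlock would have to be handled
    separately, e.g. by next_state signalling that no successor exists."""
    seen = {}
    trajectory = []
    state = start
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = next_state(state)
    return trajectory[seen[state]:]
```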
The problems stated above have been studied in [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF][START_REF] Bocewicz | Declarative approach to cyclic steady states space refinement: periodic processes scheduling[END_REF][START_REF] Bocewicz | Cyclic Steady State Refinement[END_REF], [START_REF] Wójcik | Constraint programming approach to designing conflict-free schedules for repetitive manufacturing processes. Digital enterprise technology[END_REF]. In that context, a new problem regarding possible switching among cyclic steady states can be seen as their obvious consequence. Therefore, the newly arising questions are: Is it possible to reschedule cyclic schedules as to "jump" from one cyclic steady state to another? Is it possible to "jump" directly or indirectly? What are the control rules allowing one to do it? These kind of questions are of crucial importance for manufacturing and transportation systems aimed at short-run production shifts and/or the itinerary planning of passengers (e.g. in a sub-way network). In terms of introduced notations ( ) the above questions boil down to the following ones: Does there exist a nonempty space of local ( ) and multimodal ( ) processes cyclic steady states? Let Is the state reachable from ? Repetitive processes scheduling Searching for response to the first of above stated questions, let us note that a possible cyclic steady state of multimodal processes formulated in terms of CSP can be stated as the following constraints satisfaction problem [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF], [START_REF] Bocewicz | Declarative approach to cyclic steady states space refinement: periodic processes scheduling[END_REF]: [START_REF] Levner | Complexity of cyclic scheduling problems: A state-of-the-art survey[END_REF] where: the decision variables, where and are local and multimodal cycles (periods); , -sequences of operations beginning in local, and multimodal processes, respectively, domains of variables ,constraints determining the relationship among local and multimodal processes, i.e. constraints linking . The detailed specification of constraints considered is available in [START_REF] Bocewicz | Declarative approach to cyclic scheduling of multimodal processes[END_REF], [START_REF] Bocewicz | Cyclic Steady State Refinement[END_REF]. Cyclic processes reachability Cyclic steady states of multimodal and local processes are solutions to the problem [START_REF] Levner | Complexity of cyclic scheduling problems: A state-of-the-art survey[END_REF] Illustrative example Given the SCCP see Fig. 1. The periods of the cyclic processes steady states and (local processes behaviors) obtained from the (5) the solution while implemented in OzMozart platform (in less than 1 s due to Intel Core Duo 3.00 GHz, 4.00 GB RAM computer) are equal to and 11 t.u., respectively. In turn the periods of the cyclic multimodal processes steady states and are equal to 55 and 33 respectively. The obtained cyclic steady states space is shown in Fig 2 . The local states are distinguished by " ", while multimodal by " ". In case of multimodal processes the completion time for each route is the same and equals to 30 u.t. (see Fig 3). However, in case of the completion time for the route equals to 22 u.t, and 27u.t. for the (see Fig. 3). That means the longer cycle of the implies the longer multimodal processes completion time. 
So, for a given SCCP the following question can be stated: Does the steady state is reachable from ? Due to the property provided the response is positive, however only if there exist states and possessing a common shared allocation The is reachable from in the local state sharing the common allocation with . The states and follow the multimodal states , that can be mutually reachable from each other. Switching from state to can be seen as result of change either of semaphores (from to ) and/or indices (from to ). The switching assumes the same processes allocationthat means (in terms of AGVS) the same AGVs allocations. The considered rescheduling from to employing of switching to is distinguished by the green line in Fig. 2. Another illustration of such rescheduling provides the Gantt's chart from Fig. 3. Note that in the considered case the rescheduling does not disturb execution of multimodal processes. Concluding remarks Structural constraints limiting AGVS behavior imply two fundamental problems: Does there exist a set of dispatching rules subject to AGVS's structure constraints guaranteeing solution to a CSP representation of the cyclic scheduling problem? What set of dispatching rules subject to assumed cyclic behavior of AGVS guarantees solution to a CSP representation of the cyclic scheduling problem? In terms of the second question the paper's contribution is the property providing sufficient condition guaranteeing local and multimodal cyclic processes rescheduling. Therefore, the developed conditions can be treated as the new rules enlarging the above mentioned set of dispatching rules. Moreover, the provided conditions complete the rules enabling direct switching between cyclic behaviors [START_REF] Bocewicz | Declarative approach to cyclic steady states space refinement: periodic processes scheduling[END_REF], [START_REF] Bocewicz | Cyclic Steady State Refinement[END_REF] for the new rules allowing one to reschedule cyclic behaviors indirectly, i.e. through a set of so called transient states (not belonging to rescheduled cyclic steady states). Fig. 1 . 1 Fig. 1. Illustration of the SCCP composed of four processes a) while modeling AGVs b). Fig. 2 . 2 Fig. 2. The cyclic steady states spaces of SCCP from Fig. 1 Fig. 3 . 3 Fig. 3. Gantt's chart illustrating the way the cyclic steady state can be reachable from .Legend:-the routes of the multimodal processes, respectively: ,
16,060
[ "1002198", "1002199", "991697", "1002194" ]
[ "486175", "235985", "300821", "300821" ]
01472296
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472296/file/978-3-642-40352-1_73_Chapter.pdf
Heiko Duin Gregor Cerinsek email: [email protected] Manuel Oliveira email: [email protected] Michael Bedek email: [email protected] Slavko Dolinsek email: [email protected] Using Behavioral Indicators to Assess Competences in a Sustainable Manufacturing Learning Scenario Keywords: Serious Game, Sustainable Manufacturing, Lifecycle Assessment (LCA), Content Development, Competence-based Learning, Behavioural Indicators This paper introduces a learning scenario created for a serious game to develop competences in the domain of sustainable manufacturing, by applying a Lifecycle Assessment (LCA). A set of behavioral indicators is introduced to assess how particular competences do change while the player is engaged in playing the game scenario. It furthermore presents early evaluation results of the game scenario on a sample of master grade students at the University of Bremen. Introduction Manufacturing industries account for a significant part of the world's consumption of resources and generation of waste. Worldwide, the energy consumption of manufacturing industries grew by 61% from 1971 to 2004 and accounts for nearly a third of today's global energy use. Likewise, they are responsible for 36% of global carbon dioxide emissions [START_REF]IEA (International Energy Association): Tracking Industrial Energy Efficiency and CO2 Emissions[END_REF]. Manufacturing industries nevertheless have the potential to become a driving force for the creation of a sustainable society. This requires a shift in the perception and understanding of industrial production and the adoption of a more holistic approach to conducting business [START_REF] Maxwell | Functional and systems aspects of the sustainable product and service development approach for industry[END_REF]. Sustainable manufacturing considers all life-cycle stages, from pre-manufacturing, manufacturing and post-use (holistic view). These stages are spread across the entire supply chain with different partners managing activities at each of these stages. Thus, many players in the manufacturing process must adopt sustainable principles to ensure that higher production standards are met [START_REF]E-brochure[END_REF]. Therefore, it is quite important to know the environmental impacts of produced products which can be determined by performing a Lifecycle Assessment (LCA). Nevertheless, there are many difficulties associated with the LCA process and traps in its application (conduction and usage), e.g. wrong scoping of the analysis, collecting wrong data, no data available, improper understanding of the production processes, etc. Finally, the success of a LCA analysis depends on the social engineering skills of the analyser to make all responsible managers to support the task. The training of current and future manufacturing managers needs to achieve two criteria: first, the targeted learning outcomes need to be achieved rapidly, and second, the learners need to be able to apply the learning outcomes into complex, life-like situations and. Competence-based and technology-enhanced learning (TEL) in general and serious games and simulations in particular have recently attracted a great deal of attention as they have the potential to deliver on both accounts [START_REF] Cerinsek | Contextually enriched competence model in the field of sustainable manufacturing for simulation style technology enhanced learning environments[END_REF]. 
A general introduction to Serious Gaming to support competence development in Sustainable Manufacturing including a requirements analysis has been presented by [START_REF] Duin | Serious Gaming Supporting Competence Development in Sustainable Manufacturing[END_REF][START_REF] Duin | A Methodology for Developing Serious Gaming Stories for Sustainable Manufacturing[END_REF]. Serious Gaming has proven to support learners in acquiring new and complex knowledge and is ideally suited to support problem based learning by creating engaging experiences around a contextual problem where users must apply competences to solve these presented challenges [START_REF] Duin | A Methodology for Developing Serious Gaming Stories for Sustainable Manufacturing[END_REF]. For a comprehensive assessment of the progress of player's competences, behavioral indicators for such a game scenario need to be elaborated. This paper presents the indicators defined for a Sustainable Manufacturing game scenario focusing on carrying out a LCA which includes the competences of 1) information gathering, 2) ability to perform LCA, and 3) decision making. Scope of the Sustainable Manufacturing Game scenario The presented challenges in the education and training for sustainable manufacturing are also addressed by the TARGET project which identified sustainable manufacturing as an emerging field where new competences are required to facilitate the new manufacturing paradigms and technologies [START_REF] Duin | A Methodology for Developing Serious Gaming Stories for Sustainable Manufacturing[END_REF]. The TARGET (Transformative, Adaptive, Responsive and enGaging EnvironmenT) project aims to develop a novel TEL platform that provides learners with a responsive environment that addresses personalized rapid competence development and sharing of experiences in the domain of project management, innovation and sustainable manufacturing. The TARGET environment consists of a learning process supported by a set of components that constitute the TARGET platform. The core component of the TARGET platform is a serious game combined with virtual world technology, which confronts individuals with complex situations in the form of game scenarios. The serious game facilitates situated learning that results in experiences leading to the development of competences, whilst the interaction within a virtual world enables individuals to externalize their tacit knowledge [START_REF] Andersen | The Coming Revolution in Competence Development: Using Serious Games to Improve Cross-Cultural Skills[END_REF][START_REF] Pirolli | Information Foraging´[END_REF]. The sustainable manufacturing game scenario reflects the phases an enterprise has to run through when dealing with sustainability issues. Within the game scenario, the player takes over the role of a Sustainability Manager who was recently hired by the Chief Executive Manager (CEO) of a production company. When starting the game scenario, the player finds himself in a meeting with the CEO and the other managers (i.e. Production Manager, Logistics Manager, Human Resources Manager etc.). The CEO introduces the plan that the LCA) should be conducted concerning the production of a specific product and advises the player to do so. He also urges the other managers to support him. The CEO and the other managers are non-player characters (NPCs) who are driven by the game engine. After the meeting finishes, the player starts to execute the relevant steps of the LCA, i.e. 
1) setting the objectives, 2) setting the boundaries, 3) selecting the flow chart, 4) selecting inputs and outputs, 5) deciding on the data for inputs and outputs, 6) setting the impact categories. In the first phase (scoping of the LCA) the player has to define the objectives and boundaries for the LCA. In order to effectively complete his/her tasks the player needs to gather right and relevant information from different NPCs (i.e. CEO, Production Manager, Shift Manager) and furthermore through accessing the Enterprise Resource Planning (ERP) System or directly visiting the shop floor. For instance, when setting the boundaries, the player can discuss the issue with the CEO, who would advise him/her to focus the LCA on the whole life cycle of the product. On the other hand, player can also discuss the issue with the Production Manager, who would advise him/her to focus the LCA solely on the production of the product. The decision made by the player will have impact on costs, time and the final quality of the LCA. All additional LCA steps required follow the similar logic and the player is virtually free to choose from the data that is available to him/her. The final calculation is done by the virtual LCA tool which also reports whether all necessary data has been entered or not. Finally, the final phase is checking the completeness and consistency of all collected data and evaluating the results in terms of impacts per category. The final result is a report to be created with the virtual LCA tool and delivered to the CEO. Measuring Competence Performance in Serious Games Competence assessment in the field of TEL is usually carried out by on-line questionnaires or test items provided after the learner consumed a set of learning objects. Serious Games offer the opportunity to assess if the player is able to apply a particular competence while he or she is playing the game. In other words, a game scenario may encompass both, learning and test objects at the same time. However, given this potential of serious games, the challenge is to avoid that the player´s flow experience [START_REF] Csikszentmihalyi | Flow: The psychology of optimal experience[END_REF] or feeling of presence [START_REF] Slater | A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments[END_REF] is impaired. Thus, a non-invasive or implicit assessment procedure is required. Our implicit and non-invasive assessment procedure is based on the interpretation of the player´s actions and interactions within the virtual environment [START_REF] Bedek | From Behavioral Indicators to Contextualized Competence Assessment[END_REF]. These actions and interaction, called Behavioral Indicators (BIs) should be valid clues to distinguish between well and poor performing players. The elaboration of BIs starts with the identification and definition of competences to be assessed (see Table 1). The operationalization of BIs leads to formalized functions consisting of parameters that can be measured while the learner is playing the game (game logs). The observation and integration of different BIs constitutes the foundation of an online assessment of the level of competence a player has. 
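As a concrete illustration of turning game logs into such a formalized function, the short Python sketch below computes one possible behavioral indicator for Information Gathering: the share of the scenario's information objects a player has discovered up to a given point in time. The log format, event names and function signature are assumptions made for the example, not part of the TARGET platform.

```python
from typing import Iterable, Tuple

def information_gathering_bi(log: Iterable[Tuple[float, str, str]],
                             total_objects: int,
                             t: float) -> float:
    """Behavioral indicator for Information Gathering.

    `log` is an event stream of (timestamp, event_type, object_id) tuples recorded
    while the player interacts with the scenario; `total_objects` is the number of
    information objects hidden in the game scenario. The indicator is the fraction
    of distinct information objects found between the start of the game and time t."""
    found = {obj for ts, event, obj in log
             if event == "info_object_found" and ts <= t}
    return len(found) / total_objects if total_objects else 0.0

# Example: 2 of the 5 information objects (boundary, flowchart, inputs/outputs,
# data, impact categories) found within the first ten minutes of play.
game_log = [
    (120.0, "info_object_found", "boundary"),
    (300.0, "dialogue", "production_manager"),
    (450.0, "info_object_found", "flowchart"),
    (900.0, "info_object_found", "data"),
]
print(information_gathering_bi(game_log, total_objects=5, t=600.0))  # -> 0.4
```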
C3 Decision Making A very important competence for conducting projects such as the LCA as it requires effective and on-time decisions when 1) setting the objectives; 2) setting the boundaries; 3) flow chart definition; 4) defining inputs / outputs; 5) utilizing gathered data; 6) choosing the impact categories when conducting the LCA. The operationalization of the competences and the elaboration of BIs listed in Table 1 was built upon existing theories, frameworks and empirical evidence. Before describing the competences and their BIs in more detail, we will briefly outline the theory of information foraging [START_REF] Pirolli | Information Foraging´[END_REF] which served as main reference for elaborating BIs for competence C2 (Information Gathering). Theory of Information Foraging The theory of information foraging as proposed by [START_REF] Pirolli | Information Foraging´[END_REF] aims at describing the strategies that are applied in order to seek for and consume valuable pieces of information (for example, when searching for relevant papers in literature data bases). An ideal information forager gains information from external sources effectively and efficiently. Such external sources encompass a wide range of entities, for example online documents or communication partners. In [START_REF] Pirolli | Information Foraging´[END_REF] they are called patches and we consider information sources as specific subset of them. Information sources in the context of TARGET are e.g. an Enterprise-Resource-Planning (ERP) system or NPCs which are also part of the scenario. An efficient and effective information forager maximizes the rate of gaining pieces of valuable information (in our context called information objects) by applying a balanced ratio of explorative and exploitative search activities. These two kinds of activities are mutually exclusive, i.e. the information forager can spend his or her available time on either explorative search behavior (called betweenpatch processing) or exploitative, information consuming behavior (called within-patch processing). The information foraging theory provides a profound set of "success indicators". For example profitability (the ratio of gain per patch to the cost of within-patch processing) or rate of information gain in units of time. Measuring Performance in Information Gathering The behavioral indicator used to measure Information Gathering is the number of information objects found during the beginning of the game (t 0 ) and a specific point in time t. Relating this to the total number of information objects contained in the game scenario provides as a performance indicator the percentage of detected information objects. One could also relate that to the time needed to get a more precise performance indicator for that competence, but for our purpose we just consider the ration of found to the total number of information objects. Table 2. Information Objects hidden in the Game Scenario Name Description and Coding Boundary The boundary is necessary to focus the scope of the LCA. It is coded as a sentence of the CEO: "I know that the production manager will say "focus on production" but I suggest to focus on the whole lifecycle". Flowchart A flow chart describes the production and usage processes defining all the (material) inputs and outputs of each step. When the player has selected the boundary he can get the hint from the CEO and the Production Manager which of the provided flow charts in the LCA tool is the right one. 
Inputs/Outputs Inputs and outputs describe the flow of energy materials and parts into and out of a production or usage process. This information object is distributed in sentences of the CEO and the Production Manager. The CEO knows about the sub-parts to be assembled while the Production Manager knows exactly the material and energy inputs and outputs. Data Data is a collection of correct values for each of the inputs and outputs. Again, this information object is distributed in the game scenario. Many data can be observed when using the ERP system on of the PCs, other, more precise data is told to the player through the Shift Manager. Impact Categories The impact categories are used as indicators to describe the whole life cycle of the product as green. The information object is distributed in sentences of dialogs of the CEO and the Production Manager. Within the game scenario information objects are coded in either being hidden in a game object or a sentence of a NPC. The two game objects which provide information to the player is a big wall screen showing production processes and a couple of PCs which are accessible by the player showing ERP related data. The NPCs of the scenario are the CEO, the Production Manger, and a shift manager from the production site. All of them are able to answer questions of the player. Table 2 shows which information objects are hidden in the game scenario. Evaluation Evaluation of the Sustainable Manufacturing Scenario has been done 11-13 July 2012 at a laboratory of the University of Bremen. Participants were 24 master students of Management and Industrial Engineering. Evaluation was divided into three steps: 1. All participants filled the first part of a questionnaire with general and scenario related questions to collect demographic data and to assess present understanding of LCA related issues. 2. An instructor introduces the TARGET software and demonstrated how to play it. The participants played the scenario for 20 minutes. After that all participants reflected on their performance related to the three competences mentioned above. The participants have been asked to do a self assessment for the three competences on a scale of 1-9 (where 1 = very poor and 9 = very good) for the phases of beginning, during, and the end of the gaming session. After that, participants were asked how they could improve their performance. All participants played the game for a second time for 20 minutes trying to improve their performance. At the end of the second playing session participants were asked for another self-assessment. 3. Finally, all participants filled the second part of a questionnaire to gather in-game experience, updates on the scenario understanding and general post game evaluation. The following results are focusing on the question, whether the participants were able to improve their performance in competence Information Gathering In the beginning of the first gaming round more than the half of all participants assessed themselves to have only marginal performance in Information Gathering (see Fig. 1). 13 of 24 participants (54%) assessed their own competence on a scale of 1-9 as poor (values between 1 and 3), 8 participants (33%) as medium (values between 4 and 6), and only 3 participants (13%) as good (values between 7 and 9). The highest values participants gave was 7 (3 participants). This situation changed by the end of the second gaming round (see Fig. 2). 
A total of 75% assessed themselves as having medium to good performance in Information Gathering (37.5% with values between 4 and 6, 37.5% with values greater than or equal to 7). Only 25% still assessed themselves as having limited performance. The highest value participants gave was 8 (6 participants). The average value of the competence Information Gathering grew from 3.38 at the beginning of the first gaming round to 5.50 at the end of the second gaming round (see Fig. 3). The results are based on a self-assessment of the players and not on measures taken during game play. Even when the meaning of the performance indicators has been explained, there is still the risk that the participants provided incorrect answers. Conclusions This paper introduced a serious game scenario designed to teach the Ability to Perform a Lifecycle Assessment (LCA) and related competences, i.e. Information Gathering and Decision Making. On the example of Information Gathering it has been shown how a performance indicator can be designed based on behavioral measures. An evaluation with master students at the University of Bremen has been performed showing that during the execution of the game scenario players learned and performed better the longer they played. Fig. 1. Distribution of participants' self-assessment of competence Information Gathering in the beginning of the first round of gaming (n=24). Fig. 2. Distribution of participants' self-assessment of competence Information Gathering at the end of the second round of gaming (n=24). Fig. 3. Development of the average of self-assessment of competence Information Gathering (left graph represents first round, right graph represents second round of gaming, n=24). Table 1. Required Competences for LCA (ID, Name, Description). C1, Ability to Perform Life Cycle Assessment (LCA): it is related to conducting and executing the seven key phases of the LCA concerning a specific product, i.e. 1) setting the objectives, 2) setting the boundaries, 3) flow chart definition, 4) inputs and outputs definition, 5) data gathering, 6) choosing impact categories, 7) interpretation of results with recommendations. C2, Information Gathering: concerns getting the "right" information in adequate quality (completeness and correctness) in adequate time. Acknowledgement The research reported in this paper has been undertaken within the European Community funded project TARGET under the 7th Framework Programme (IST 231717). The authors of the paper wish to acknowledge the Commission and all participants of the TARGET project consortium for their valuable work and contributions.
19,783
[ "1002207", "1002208", "991702", "1002209", "1002210" ]
[ "217679", "486186", "556764", "65509", "486186" ]
01472297
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472297/file/978-3-642-40352-1_74_Chapter.pdf
Endris Kerga email: [email protected] Armin Akaberi email: [email protected] Marco Tasich email: [email protected] Monica Rossi email: [email protected] Sergio Terzi email: [email protected] Lean Product Development: Serious Game and Evaluation of the Learning Outcomes Keywords: Lean Product Development, Set-Based Concurrent Engineering, Serious Game This paper presents a Serious Game (SG) about SBCE (Set-Based Concurrent Engineering), which is one element of Lean thinking in Product Development (PD). The game is structured in two stages that simulate the traditional approach to product concept development called PBCE (Pont-Based Concurrent Engineering) and SBCE approaches. Moreover, this paper presents the learning outcomes gained through running the game in a company. Finally, some practical and theoretical insights gained throughout the game play are introduced. Set Based Concurrent Engineering (SBCE) is an element of lean thinking in product development (PD). It is effective at early stages of design when concepts are generated and selected [START_REF] Sobek | Toyota's Principles of Set-Based Concurrent Engineering[END_REF], [START_REF] Ward | The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster[END_REF]. In a traditional approach which is called Point Based Concurrent Engineering (PBCE), a single concept is selected as early as possible assuming that it will be feasible. However, PD is characterized by uncertainties due to changes in customer requirements, manufacturability issues, sub-system configurations and so on. Thus, often PD project suffers from design reworks due to the so called 'false positive feasibility', where project teams assume a concept is feasible, but will learn later in the development process that it is not [START_REF] Oosterwal | The Lean Machine: How Harley-Davidson Drove Top-Line Growth and Profitability with Revolutionary Lean Product Development[END_REF]. Toyota uses SBCE approach to tackle such a problem by effectively utilizing product knowledge (lesson learned) to generate alternative design concepts. Unless a concept is proven to be infeasible, designers won't eliminate it from a solution set. The unique feature in SBCE process is that design decision is based on proven data. Communication and negotiation within teams are facilitated by a pull event where teams can visualize risk and opportunities using tradeoff and limit-curves. Finally, PD teams converge into an optimal design taking rough objective criteria (such as cost, quality and time), so as the process will continue to detail design stages [START_REF] Sobek | Toyota's Principles of Set-Based Concurrent Engineering[END_REF], [START_REF] Ward | The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster[END_REF]. In practice, however, the awareness and the adoption levels of SBCE is limited across the industries surveyed [START_REF] Rossi | Lean Product Development: Fact Finding Research in Italy[END_REF]. Therefore, the purpose of this paper is to design and validate a learning tool using a serious game approach that enable practitioners to have a hand on experience about SBCE principles and its associated enablers. In section 2, introduction to the game's features will be introduced. In section 3, a model to evaluate the learning outcomes of the game will be discussed. The results found in the one game play will be presented in section 4, and followed by conclusions in section 5. 
Introduction of SBCE Game To design the SBCE learning tool a serious game approach is used. In general, the application of games with the aim of education and learning is defined as "Serious games" [START_REF] Wouters | Measuring learning in serious games: a case study with structural assessment[END_REF]. In SG, players assume different roles and are involved in simple and complicated decision making processes, which makes it attractive for SBCE process where alternative design exploration and convergence involve multiple-views. Moreover, SG creates a safe and entertaining environment, so that players from the industry freely experiments SBCE process without interfering in an actual PD process. In the game, players have to design a simplified Airplane structure as shown in Figure1, using different type of LEGO bricks. The Airplane has four sub-systems to be designed (body, wing, cockpit and tail). The game is divided into two stages: Stage one, where players design an Airplane for a given list of customer requirements following a PBCE process; Stage two, where players are provided with the necessary enablers to execute SBCE process. The enablers will help players to explore alternative design concepts, communicate about alternative solutions within a team, and converge into a preferred (a high value) Airplane structure. After each stages, players performances' breakdown in terms of cost and time of development will be provided to facilitate discussion. The game is played in a team of four players and each player represents sub-system departments (body, wing, cockpit and tail). The main inputs to the Stage one of the game are: Customer requirements and supplier components catalogue. The list of customer requirements to build an airplane structure were made intentionally to be vague, for example, the number of passengers might be from 90 to120 and the wing span could be 7 to 15. Such range of customer requirements (vague) reflect the reality, in which customers often suggest imprecise information, and force designers to explore their concept solutions wide open. In the game, these requirements can be handled in different ways in the Stage one (PBCE process) and in Stage two (SBCE process). Thus, players will understand the advantage of following SBCE than PBCE process to better achieve the customer requirements. In the game, there are five customer requirements: number of passenger (Np), Airplane weight (W) (Airplane structure (Wa) and passengers weights (Wp)), Length of Airplane (L), Wing span (ws) and Tail Span (ts). F o r P e e r R e v i e w O n l y A P M S 2 0 1 2 The supplier components are LEGO bricks in different sizes and shapes that are used to build body, wing, tail and cockpit. Each brick has circular points on the top, and the number of points on the top of a brick define the characteristics of the component. A single point on a brick has the following character: Cost [START_REF] Bernstein | Design Methods in the Aerospace Industry: Looking for Evidence of Set-Based Practices[END_REF], Lead time or component ordering time (0.5), Capacity (3), Weight (100), Length (1), and Width (1). Fig. 1. A simplified airplane to be designed in the game (using LEGO bricks). "L": Length of Airplane, "lw": Length of Wing, "lb": Length of Body, "wb": Width of Body, "lc": Length of Cockpit, "Ws": Wing Span, "lt": Length of Tail, "ts": Tail Span Stage one and testing Taking the customer requirements and the supplier catalogues, players will be asked to build an airplane in this stage. 
This stage simulates PBCE, where players first design an airplane structure, build it and then test it against the constraints. The Design-Build-Test approach is what many non-lean organizations follow at the early stage of design [START_REF] Kennedy | Product Development for the Lean Enterprise: Why Toyota system is four times more productive and how you can implement it[END_REF]. Once players finish designing and building a prototype in the first stage, they should submit it to the "testing department" to check for stability, flying conditions and dimensional configurations, as seen in Table 1. Table 1. Testing constraints: Length; Ratio of weight (RW), with Wp = Np * 60 (average weight of each passenger); Airplane stability; Alignment between body and cockpit. The facilitator of the game acts as the testing department. Players will not be given these testing constraints at the start of the game. If the design fails, the prototype has to be redesigned. Redesigning incurs penalty costs and additional time. After the first trial the testing constraints will be given to the players. If the prototype passes the testing constraints, players will be given the breakdown of their performance in terms of cost and time. The determination of cost and time is executed as follows: ─ Total development cost (C): C = cc + ci + cp, where cc = total number of points * single point cost, ci = 30% of the cost of components (an additional cost if players fail to pass the testing constraints), and cp is an additional cost if players fail to meet the customer requirements. cp is determined based on the unsatisfied customer requirements (e.g. the number of passengers Np), following penalty rules defined for each requirement. Stage two and supporting enablers In Stage two, players will follow a structured sequence of SBCE process phases. This stage simulates a different approach than the first. Here, players follow a Test-Design-Build approach, and design decisions are made as late as possible until feasibilities are proven. In summary, players will explore and communicate sets of design solutions, evaluate them and finally converge into a preferred one. The different phases of Stage two are: A. Explore alternative set of designs: at this phase, players will be supported by a QFD (Quality Function Deployment) tool to explore alternative sub-system solutions and to integrate customer requirements into Airplane parameters. QFD is a powerful tool in applying the SBCE process; it helps designers to translate rough customer requirements into alternative sub-system solutions [START_REF] Liker | Involving Suppliers in Product Development in the United States and Japan: Evidence for Set-Based Concurrent Engineering[END_REF]. Therefore, each player in a team will explore alternative body, wing, cockpit and tail solutions using their own QFD. This phase is the beginning of the SBCE process in the game. B. Communicating set of design solutions: players at this phase can eliminate Airplane sub-system solutions that are not compatible. For example, the body department might explore alternative feasible body lengths of (11, 12, 14). Meanwhile, the cockpit department might generate feasible body lengths of (12, 13, 14). Therefore, the departments should eliminate the incompatible body lengths (11 and 13). C. Provision of knowledge from the testing department: from step B, players have complete alternative Airplane solutions which are compatible, but they need to filter them using physical constraints. In the game, physical constraints come only from the testing department.
Limit-curves are used to generalize knowledge and visually depict solutions which are feasible from testing point of view, see [START_REF] Sobek | Toyota's Principles of Set-Based Concurrent Engineering[END_REF][START_REF] Ward | The Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster[END_REF][START_REF] Oosterwal | The Lean Machine: How Harley-Davidson Drove Top-Line Growth and Profitability with Revolutionary Lean Product Development[END_REF] for more details about limit-curves. Therefore, at this phase, players eliminate those Airplane solutions which cannot pass the testing constrains listed in Table 1. D. Convergence to a preferred solution: once alternative feasible Airplanes are identified, estimating the cost and development time of each Airplane solutions help to select the preferred solution. Refer section 2.1 to see the cost and time calculations used. In summary, the second stage is to lead players through the step-wise phases of SBCE process. The objective is to educate players how to delay decisions early in design phase, and facilitate test-design-build approach to avoid unnecessary design reworks and missing customer goals. Evaluation Framework for Learning Outcomes The comparison of performances between the two stages can be taken as a validation mechanism to roughly estimate the advantages of SBCE process (Stage one) over the traditional process (Stage two). However, the main purpose is not to measure the performance leverages of SBCE process using the game. Because, the game is a simplified version of the reality and cannot capture the real complexities of a PD that make a SBCE approach more advantageous (such as product complexity, innovativeness of the product, team size and so on). Therefore, in this paper , it is aimed at measuring the effectiveness of the game to translate the SBCE principles and its associated supporting elements. Given that, it is also aimed at measuring how practitioners have perceived the potential of SBCE process and its elements in improving PD performances. F o r P e e r R e v i e w O n l y A P M S 2 0 1 2 Garris et.al. identified three level of knowledge aspects in order to measure the effectiveness of a SG [START_REF] Garris | Games, Motivation, and Learning: A Research and Practice Model[END_REF]:  Declarative knowledge: is the learning of facts or increasing one's knowledge about a subject. In this paper, the understanding of the SBCE principles and its supporting enablers by players are parts of the declarative learning outcomes.  Procedural knowledge: this aspect refers to the learning of procedures, and also to the understanding of patterns of processes and behavior. In the SBCE game, procedural knowledge is related to players ability to associate the specific elements of SBCE process and the benefits of using them to support a better decision making.  Strategic knowledge: Within gaming this aspect has been explained as implementing knowledge from the game in a new (a real-world) situation. Gaming can also contribute to develop reflective competences. Within complex systems, as in PD, it is not only refers to implementing what is taught in the theory but also observing behavior and adapting to new situations. In SBCE game, several complexities have been simplified but adequate challenges are added to enable players to reflect beyond the gaming sessions into real world practices. 
Based on the above framework, a structured questionnaire based on the Likert 5 scale has been prepared to measure the learning outcomes of the game. After playing the game with 36 designers (Mechanical, Electrical and Software) and project leaders of the Carel company (www.carel.com), players were asked to evaluate the declarative, procedural and strategic learning aspects of the game. The players have working experience ranging from 4-15 years and age from 25-50 years. Results and Analysis In general, the game has increased the level of awareness of players. Players understand the usage of tradeoff and limit curves to generalize knowledge, and their application in order to explore alternative designs. Communication among teams in SBCE process takes different form than a traditional point based approach, where designers have only once conceptual solution to communicate about. In traditional design approach, it is a norm that a functional team through 'over the wall' of a subsequent function and vice-versa. In SBCE process, different functions pull together their conceptual solutions and check sub-system compatibilities. In the game, players were provided with simple check-list to support communication and negotiation among teams. Though players understand how to use this communication mechanism, some doubts are exhibited on the importance of using such a mechanism. This is due to the simplicity of the Airplane to be designed, but in a real PD problem the complexity grows as more functions have to communicate about the alternative set of solutions. Figure 2 shows the perceived advantages of following a SBCE process from practitioners perspective. The theoretical advantages of SBCE seems to be confirmed by the practitioners. Most of the designers played the game agreed that the most significant perceived advantages of SBCE are 'facilitate learning about design solutions' and 'avoid design risks'. Using knowledge from past designs and exploration of alterna- tive designs guarantee the PD teams to brainstorm about set of solutions rather than one alternative. Moreover, frontloading the PD process minimize the probability of 'false positive feasibility' to occur. The players perceived also that SBCE reduce the development time and cost. However, such claims cannot be guaranteed if teams are not able to identify when to stop exploring and start converging [START_REF] Ford | Adapting Real Options to New Product Development by Modeling the Second Toyota Paradox[END_REF]. Among the main difficulties that have been mentioned to implement SBCE process is the generation of 'limit-curves'. Limit curves are fundamental to apply SBCE process. They are curves that generalize knowledge of sub-system designs, and designers can see the 'risky' and 'safer' design regions. Fig. 2. 'Perceived performance' improvements of SBCE process using the game (N=36) However, companies in the current practice don't use such curves to document, represent and share lesson learned or knowledge. Therefore, the main challenge will be to build the necessary competences to capture, represent and share past (static) and current (dynamic) knowledge gained through experimentation. 
Conclusions Most practical applications of the SBCE process are reported from the Automotive and Aerospace industries [START_REF] Sobek | Toyota's Principles of Set-Based Concurrent Engineering[END_REF][START_REF] Oosterwal | The Lean Machine: How Harley-Davidson Drove Top-Line Growth and Profitability with Revolutionary Lean Product Development[END_REF][START_REF] Bernstein | Design Methods in the Aerospace Industry: Looking for Evidence of Set-Based Practices[END_REF][START_REF] Frye | Applying Set Based Methodology in Submarine Design[END_REF]. In other industrial sectors its adoption level is limited. There might be some elements of the SBCE process in practice, but its implementation as a structured methodological approach in PD is not prevalent. In this paper, a Serious Game that provides hands-on experience has been designed and its learning outcomes measured in a single company case. The company is in the HVAC/R market (www.carel.com), which is different from the Automotive and Aerospace industries. However, players from different backgrounds and experience levels acknowledged that the SBCE process is a much better approach than PBCE. Through the assessment, the players identified key advantages and hurdles of applying the SBCE process in the company. In summary, the SBCE process is an attractive and sensible approach in the early phases of PD compared to the PBCE approach. However, companies need to have structured practices, tools and technologies (enablers) in place to realize the process. Such enablers support the exploration of alternative solutions, enable set-based communication and facilitate convergence to a high-value solution.
18,978
[ "999951", "1002211", "931901", "990007", "831167" ]
[ "125443", "125443", "125443", "125443", "308253" ]
01472300
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472300/file/978-3-642-40352-1_78_Chapter.pdf
Johann H Riedel email: [email protected] Jannicke Baalsrud Hauge Borzoo Pourabdollahian email: [email protected] Johann C K H Riedel The Use of Serious games in the education of Engineers Introduction Today manufacturing is often a complex process, involving several partners around the world. The products are more customized and have shorter life-cycle times, which increases the marginal cost per product. As the employee is the person in an organisation that performs and lives collaboration, the organisational success will mainly depend on his/her capabilities to learn and act in a dynamic environment [START_REF] Windhoff | Planspiele für die verteilte Produktion. Entwicklung und Einsatz von Trainingsmodulen für das aktive Erleben charakteristischer Arbeitssituationen in arbeitsteiligen, verteilten Produktionssystemen auf Basis der Planspielmethodik[END_REF]. Decision makers, like people in general, are prone to misperceptions of feedback. This means that their performance in complex and dynamic systems is hindered by non-linearity, time delays and feedback structures [START_REF] Sterman | Modeling Managerial Behavior: Misperceptions of Feedback in a Dynamic Decision Making Experiment[END_REF]. Decision making in dynamic systems is hard because it calls for dynamic decision making, which is a stream of decisions closely depending on one another. Thus, the question is: which skills does an employee need in order to perform well in collaborations, how is it possible to mediate skills in such a way that he/she can act as needed when a new situation arises, and how can engineering students be prepared for this during their studies? Manufacturing and engineering education needs to focus on developing the skills required by new generations of employees: adapting the educational content and its delivery mechanisms to the new requirements of knowledge-based manufacturing, providing integrated engineering competencies, including a variety of soft skills, and promoting innovation and entrepreneurship (Taisch, 2011, p.11). In order to achieve this, it is necessary to focus more on multi-disciplinarity and integrated engineering competencies (Taisch, 2011). 2 Why use serious games The term Serious Games mainly refers to games that are primarily designed for non-entertainment purposes. According to [START_REF] Corti | Games-based Learning; a serious business application[END_REF] a Serious Game "is all about leveraging the power of computer games to captivate and engage end-users for a specific purpose, such as to develop new knowledge and skills". This unique feature significantly supports new requirements in engineering education, especially those that cannot be taught by traditional means. For example, students can interact in virtual environments, which will confront them with complicated situations in which they need to gather and analyse information to take critical decisions. To reach this goal they are pushed to improve their soft skills, such as communication and negotiation, as well as technical skills. Experience so far with the use of serious games in the education of engineers has shown a positive effect on the students' abilities both to apply the theoretically gained knowledge and to enhance the business skills required of a qualified engineer. 
Learning by serious games can be clarified by Kolb's experiential learning cycle, which views learning as a process, which includes four essential phases: Active experimentation and specific experience, Direct experience, Reflexion, and Assesment. Active experimentation and testing lead to direct experience (Straka, 1986). Direct experience allows for reflection on different aspects of the experienced situation both at an individual as well as at a group level. Based upon this reflection, an assessment as well as a definition of the consequences and potential generalization possibilities leads to the awareness of new actions. This experiential learning approach requires a free, self directed and self organized learning process. Effective engineering education needs a learning-by-doing approach characterised by moving from passive perception to active experience. However, there are not enough real life situations that can be used for education or training, since in many real life situations the occurrence of errors or mistakes -which are natural in learning situations -are not acceptable. Simulation games using advanced information and communication technology can be used as a substitute in order to meet this need for active experience [START_REF] Riis | Simulation Games and Learning in Production Management[END_REF][START_REF] Radcliffe | Contextual Experiences in Concurrent Engineering Learning, The National Teaching Workshop[END_REF]. Creating knowledge by gaming has proved to be particularly effective whenever soft skills are essential and traditional learning methods fail [START_REF] Windhoff | Planspiele für die verteilte Produktion. Entwicklung und Einsatz von Trainingsmodulen für das aktive Erleben charakteristischer Arbeitssituationen in arbeitsteiligen, verteilten Produktionssystemen auf Basis der Planspielmethodik[END_REF]. [START_REF] Warren | The effective communication of system dynamics to improve insight and learning in management education[END_REF] underscored that decision makers should have access to gaming simulation tools in order for them to cope with the business systems in which they evolve, and to reap strategic management skills. Scholz-Reiter et al. (2002) strongly emphasized the need for the insertion of management games to practitioners and engineering students in organizations and universities, respectively, in order for them to learn specific tasks and aptitudes like communication and co-operation in complex distributed production systems. Up to now, there has not been so much research carried out to understand why specific games work or do not work. This paper presents three case studies of three games to start to understand how they work. Case Studies of Serious Games This section describes three case studies of serious games showing how their pedagogical aims and evaluation results compare. In all three cases we have used a blended learning concept based on Kolb's experiential learning cycle [START_REF] Kolb | Experiential learning: Experience as the source of learning and development[END_REF]. The experience so far has shown that a well-designed game will not only help the learner to transfer theoretical knowledge to practical skills, but also to transform the gained experience into knowledge so that they can assess previously acquired knowledge and generate new understanding. The games are used by students at masters level and by engineers in industry. 
The authors have been using serious games for the mediation of skills to engineering students for several years and have collected good feedback both from the students as well as from the analysis of the learning outcomes [START_REF] Riedel | Serious games and the evaluation of the learning outcomes -challenges and problems[END_REF]). However, with some groups the gaming approach went wrong resulting in a low learning outcome and high stress factors for students. In this paper we analyse why the learning outcome is so dependent on the students' background, and look for mechanisms for improving the learning outcome for the user group with a low learning outcome. A brief description of the games used follows. COSIGA Cosiga is a New Product Development (NPD) simulation game. It was designed to tackle the problem of teaching today's engineering and management students the know-how of to design and manufacture new products, to equip them with the experience of design, and to teach them how to deal with the complexities of the new product development process [START_REF] Riedel | Academic & Industrial User Needs of a Concurrent Engineering Computer Simulation Game[END_REF]. It is a team player game, played by five people playing in the same room, or in a distributed condition using the internet and telecommunications. Each person plays a role in the product development process (project manager, designer, marketing manager, purchasing manager and production manager) and works collaboratively together, to specify, design, and manufacture the final product -a type of truck. The product's manufacturability will be put to the test in the simulated factory to produce the final products. COSIGA enables students to experience the process of new product development from the perspectives of the different disciplines involved in the design process and build their own understanding of the issues of design, manufacture, marketing, project management and purchasing; and the interactions between the disciplines. The game enables students to interact through continuous communication, to share and exchange information, initiate argumentation on problems and concepts, form relation-ships between pieces of discipline specific information and finally articulate knowledge and make decisions. During their experience with COSIGA students are not really learning about the technical aspects of designing and manufacturing a truck but learning how to increase their awareness of the many complex, often interdependent issues of the design process, through constant information sharing, rationale forming and building their capacity to act, make decisions and create new knowledge. Beware game Beware is a multi-player online game implemented in a workshop setting. The application is used as a training medium for companies involved in supply networks covering the issue of risk management. Currently, Beware is designed with two distinct and independent levels. In the first level, the participant experiences risks within the organization. In this first level, the players have to specify, design and produce a simple product within their company. During the game, the players have to identify upcoming risks and think how to reduce or treat them by developing suitable communication and co-operation strategies as well to define the responsibility of each role. The players can communicate using the inbuilt chat, phones or Skype or also schedule physical meetings to discuss relevant issues. 
In the second level, the players are faced with the design, development and manufacturing of an extended product -a cell phone with a range of services. The players use their acquired knowledge and skills in the inter-organisational contract negotiations as well as to carry out the collaborative production in a distributed environment. While the simulated service company takes the consortia's leadership and develops services, the two simulated manufacturing companies develop and produce generic cell phone parts. As the necessary information will be distributed unequally, the students have to cooperate to enable the constant flow of information that will then lead to a constant flow of material. Also there different events and risks included, and the player needs to carry out some risk management tasks. The game enables students to identify how different types of risks impact differently on the success of the collaboration and also how the impact of risks increases and affects the partners' success over time, if no actions are taken to reduce and control the risks. The students have the possibility to apply risk assessment and risk management methods and thus increase their awareness of risks in production networks as well as the complexity of decision making. Set-Based Concurrent Engineering game (SBCE) Set Based Concurrent Engineering is a concept in new product development based on the lean thinking perspective. It is going to be more diffused in future production systems, due of its advantage in decreasing the time and cost of production. The aim of this serious game is highlighting the benefits of applying the SBCE concept in producing a simplified airplane. It is a teamwork game that includes four members who take the role of each department (body, wing, cockpit and tail). In the first stage players will be asked to design an airplane regarding both customer requirements and supplier components based on a point-based approach and then they will be introduced to the SBCE enablers that they need to execute to design the airplane, with the same data given in the first stage. Finally, after playing the game, players will observe that applying SBCE decreases the time and cost of the design [START_REF] Kerga | Lean Product and Process Development: a Learning Kit[END_REF]). 4 Comparison of the Games' Learning Goals It is useful to compare the learning goals, or objectives, of the three serious games. The table below shows their learning goals. Table 1: learning objectives of the three games From the above table we can see that there are a number of similarities between the learning objectives of the games (notwithstanding the fact that two of them are focused on NPD and concurrent engineering). The learning goals address: subject specific domain knowledge, individual skills and group skills: communication, problem solving, decision making, negotiation, etc. All three games were designed to help engineers and students to develop a practical understanding of a specific engineering technique -new product development, risk management and concurrent engineering. Learning goals Cosiga Beware SBCE To aid the players to understand the enabling factors which lead to an effectual product development by applying Set-Based Concurrent Engineering. x To help players to understand the new product development process and apply Concurrent Engineering principles and practice. x To impart and improve knowledge of the most common Concurrent Engineering rules and tools. 
x x To acquire best practice in the Concurrent Engineering domain x To identify, analyse and solve potential problems during Concurrent Engineering x To develop the ability to make decisions in a complex context x x To support the understanding on how to apply methods supporting decision making in a cooperative and competitive environment x To support the understanding of risk assessment and risk management in the supply chain x To learn how to apply risk assessment and risk management methods both in the supply chain as well as within a department. x To identify, analyse and solve potential risks in the supply chain x To demonstrate the challenge to meet both design and customer requirements. To acquire and improve group decision making skills. x x x To acquire and improve group negotiation skills x x To acquire and develop the ability to develop a common understanding with others in a CE Group x To acquire and develop the ability to appreciate, understand and make good use of the contribution of others x To improve risk management skills x To raise awareness, understanding and coping with the typical day-today problems in working collaboratively with people from different cultures and languages. x To acquire and develop the ability to collaborate in a European industrial context. x However, engineering is not just about the use of specific techniques or methods to solve problems; it is about groups of engineers working cooperatively together. Therefore, all three games place an emphasis upon developing the group skills of the participants. 5 Comparison of the Games' Learning Outcomes The above table summarises the results of several evaluations carried out on the three games. A primary way to tell if a game is simulating the intended process correctly is to examine the communication flow within the game -who is asking who for information and who is supplying information. Various post-game questionnaires were used to determine if the participants had learnt the appropriate concepts. This showed that some concepts were learnt very well, but others less so -eg. the importance of product cost in Cosiga declined after the game, this was due to their being very little emphasis placed on product cost in the game itself [START_REF] Riedel | A Report On The Experiences Gained From Evaluating The Cosiga NPD Simulation Game[END_REF]. Another influence on participants' learning was their prior knowledge and their liking of the gaming method. For the learning from the serious game to be successful the participants need to have the same level of knowledge -if some of the players have inadequate background knowledge, gaming is less successful. Conclusion The three games discussed in this paper are all used in the education of engineers. The games are used by students at master level and engineers in industry. The authors have been using serious games for the mediation of skills to engineering students for several years and have collected good feedback both from the students as well as from the analysis of learning outcomes. The evaluation of the games showed in general that the players were able to apply the gained theoretical knowledge and also to strengthen their collaboration skills. However, the analysis also showed that the effectiveness of the games was dependent on the group -their level of background knowledge, if it was an inhomogeneous group or a homogenous group, as well as being dependent on their openness for playing games. 
x To demonstrate how implementing Set-Based Concurrent Engineering can affect the product development process x To acquire and develop group communication skills Table 2: learning outcomes of the case study games.
Cosiga. Social/soft skills: different types of communication were observed (e.g. ask for information, offer information, request action, etc.) and the result demonstrated that the game represented the required communication pattern; improving multidisciplinary team working and decision making skills. Declarative knowledge: understanding the New Product Development concept; understanding the distribution of knowledge during product development. Procedural knowledge: understanding the product development process; understanding how to collaborate with downstream and upstream actors.
Beware. Social/soft skills: different types of communication during the decision making process were observed; identifying the long term impact of decisions made both on one's own and partners' organization; supply chain risk management. Declarative knowledge: understanding cooperative production in a distributed environment; understanding how cost, quality, time and customer service indicators are affected by the production process and the identification and treatment of risks. Procedural knowledge: understanding how to redesign the supply chain for reducing risks; understanding risk management; understanding the long term impact of decisions and long term risks. Strategic knowledge: applying several methods supporting risk management.
Acknowledgements. The research reported in this paper has been partially supported by the European Union, particularly through the projects: GaLA: The European Network of Excellence on Serious Games (FP7-ICT-2009.4.2-258169) www.galanoe.eu and ELU: Enhanced Learning Unlimited (FP6-IST-027866).
19,349
[ "991671", "1002215", "1002216" ]
[ "217679", "125443", "485173" ]
01472304
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472304/file/978-3-642-40352-1_81_Chapter.pdf
Eirin Lodgaard email: [email protected] Inger Gamme email: [email protected] Knut Einar Aasland email: [email protected] Success factors for PDCA as continuous improvement method in product development Keywords: Automotive supplier industry, product development, continuous improvement, PDCA In order to maintain sustainability in an ever changing environment, where customer requirements contains a yearly price reduction over the life cycle of a product, decreased time for development of new products and increased product quality, there is an increased need for focus on continuous improvements. A well-known improvement method is the PDCA (Plan-Do-Check-Act), which many companies have succeeded in implementing in the manufacturing department. Not so common, is the use of this method for the development process. The aim of this article is to present success factors which must be in place to succeed in using the specified method, and thereby the desired improvement during continuous improvement initiatives within product development. Management commitment is ranked as most important followed by knowledge about how to use the method, when to apply PDCA, efficient performance and use of internal marketing activities to focusing on the topic. Introduction The long term business sustainability in the automotive supplier industry depends on the ability to face demanding customer requirements. Requirements of a yearly price reduction over the life cycle of a model, decreased time for development of new products and increased requirements and expectations of continuous development of better products and processes for the future are typical for the automotive industry. This requires that companies in the automotive supplier industry continuously improve in all functions of the company, including product development (PD). At least 80 % of the life cycle cost of a product is determined in the early phase of PD. This indicates the importance of how the PD is performed with regards to continuous improvement [START_REF] Ragatz | Success Factors for Integrating Suppliers into New Product Development[END_REF]. Today there are several types of continuous improvement methods in use in the automotive supplier industry, such as LAMDA (Look-Ask-Design-Model) [START_REF] Ward | Lean Product and Process Development[END_REF], Six Sig-ma [START_REF] Bicheno | Six Sigma and the Quality Toolbox[END_REF] and PDCA (Plan-Do-Check-Act) [START_REF] Shook | Managing to Learn. Using the A3 Management Process to Solve Problems, Gain Agreement, Mentor, and Lead[END_REF][START_REF] Sobek | Understanding A3 Thinking, A Critical Component of Toyota`s PDCA Management System[END_REF]. This research project has chosen to study the PDCA method because it is highly recommended as continuous improvement method in the automotive industry, outlined in the quality standard, ISO/TS 16949 [START_REF]Quality Management System: Particular Requirements for the Application of ISO 9001:2008 for Automotive Production and Relevant Service Part Organization[END_REF]. Many companies have successfully implemented the PDCA method in the manufacturing department, but for the future it is important to include the product development as well to be competitive in this demanding industry. The aim of this paper is to identify success factors when using the established continuous improvement method PDCA, in product development. 
What success factors must be in place to succeed in using the specific method to achieve the desired improvements during continuous improvement initiative? 2 Continuous improvement in product development Continuous improvement has been in focus in the automotive industry for several decades. It is important to have a never-ending process of performance improvements to gain effective processes in product development [START_REF] Morgan | The Toyota Product Development System, Integrating People, Processes and Technology[END_REF]. Several companies have succeeded implementing systematic continuous improvement in manufacturing, but few have succeeded in PD. Although many "kaizen" or continuous improvement initiatives are started, the failure rate is high [START_REF] Bessant | An Evolutionary Model of Continuous Improvement Behavior[END_REF]. Lillrank et al reported that two out of three continuous improvement initiatives fail to deliver the desired improvement [START_REF] Lillrank | Continuous Improvement: Exploring Alternative Organizational Designs[END_REF]. This shows the importance of finding out why the failure rate is high and what factors must be in place for successful implementation of continuous improvement in a complex and turbulent environment, such as product development. PDCA as a continuous improvement method In 1950 W. Edwards Deming was invited to Japan by the Union of Japanese Scientist Engineers to teach statistical quality control. He arranged seminars together with manager, focusing on the connection between quality and productivity and the use of statistical process control. He also introduced the Deming wheel which emphasized the importance of constant interaction among research, design, production, and sales to assure better product quality and satisfied customers. The Japanese developed it further to be applied in all problem solving situation and called it PDCA [START_REF] Imai | Kaizen. The Key to Japan's Competitive Success[END_REF]. The PDCA method has been used ever since by the Japanese industry as a systematic continuous problem solving approach. Western industry started to focus on development and implementation of continuous improvement processes in the beginning of the 1980s [START_REF] Nilsson-Witell | Continuous Improvement in Product Development[END_REF]. Today, this tool is widely outspread in the manufacturing industry and the PDCA is highly recommended in the quality assurance standard ISO/TS 16949 used by the automotive supplier industry. The PDCA method includes four phases: Plan, Do, Check and Act. The phases are defined in different versions in the literature, but with the same purpose, to continuous improve. We have divided the four phases into seven steps, as shown in figure 1. It starts with identifying the current situation and target for the improvement. Thereafter, to dig deep into the details to discover root causes to avoid jumping to solutions. After implementation of the actions it is important to study and evaluate the results and if necessary go back to prior phases and modify solutions. The final step is to ensure that the improved level of performance is maintained and to capture what is learned during each of the phases in the PDCA cycle. Today, the PDCA method is primarily applied in the manufacturing department and less applied in the product development. In manufacturing of a physical product, it is easier to implement the method. 
In addition, the manufacturing staff is more easily manageable than in the innovative and creative environment of product development. In product development, it is necessary to find a balance between formal processes and creative freedom in order to succeed with continuous improvement. Case company and research method The following section outlines the case company and the research approach used to conduct this study. The case company is a Norwegian automotive supplier with customers on a worldwide basis. The automotive supplier industry was chosen as a case since it already has a formal requirement, based on ISO/TS 16949, to define a continuous improvement method and to apply the method to assure continuous improvement in the entire company. The chosen case company has a quality assurance system in which the PDCA method has been chosen to be implemented in the entire organization. They have long experience with continuous improvement methods, primarily in the manufacturing department; therefore it is interesting to investigate the product development area and the main critical success factors for successful implementation of the PDCA method. (Fig. 1 steps: definition of improvement initiative, background & target; collecting of facts; root cause(s) analysis; choose action & make an action plan; implement according to action plan; study the results; standardization & transfer of knowledge.) The research method is based on an extension of the action research approach. The action research is on the application of the PDCA method in continuous improvement projects in product development. After finishing several continuous improvement projects in the action research, with the purpose of gaining experience with PDCA, a brain writing workshop was performed with the aim of defining the success criteria for the application of PDCA. The same team participated in both the improvement projects and the brain writing workshop. The participants were professionals from the fields of management, simulation, calculations, process and product development, quality assurance and research. Brain writing is a silent, sharing, written creativity method and does not involve group discussion of written ideas during the idea-generation session [START_REF] Heslin | Better than Brainstorming? Potential Contextual Boundary Conditions to Brainwriting for Idea Generation in Organizations[END_REF]. The brain writing method applied was the 6x3x5 method, which uses the principle: 6 participants writing down 3 ideas in 5 minutes. Each participant starts by writing down three ideas on a sheet of paper before passing the paper on to the person seated on their right-hand side. The next step was to complement the ideas on the received sheet. This was continued until the sheets had been passed through all participants. Success factors for use of PDCA This chapter outlines the results from the brain writing workshop based on the extended action research project with a focus on the application of PDCA. Table 1 summarizes the results from the brain writing workshop with the main success criteria when using PDCA in product development. These main factors have been grouped into five categories after a coding and analyzing process [START_REF] Miles | Quality Data Analysis: A sourcebook[END_REF]. The first result in the table was ranked as the most important factor and the last one as the least important in the brain writing workshop. The listed factors are: management commitment, knowledge on how to use PDCA, when to apply the method, efficient performance, and internal marketing activities. 
Each participant picked out the three most important ideas which are shown in the table as subgroups. Each success factor and its subgroup factors will be presented more in-depth. Management commitment Management commitment is ranked as the most important factor in succeeding with your continuous improvement initiatives based on the PDCA method. One possible explanation for this result is that people tend to prioritize the work that the management team wants them to perform. It is not easy to stay motivated working on topics nobody asks for. Imai stated that continuous improvement is the most dominant concept behind good management [START_REF] Imai | Kaizen. The Key to Japan's Competitive Success[END_REF]. Bessant et al. stated that the management is a key variable to maintain the continuous improvement behavior patterns but that is often poorly understood by themselves [START_REF] Bessant | An Evolutionary Model of Continuous Improvement Behaviour[END_REF]. The use of PDCA must be included in the strategy to show that the management really wants to apply the PDCA method. This will help the company to sustain the use of the PDCA method, especially if the management also uses the method. Knowledge on how to use PDCA Not surprisingly, the results from the brain writing workshop shows that the entire organization must have performed education in the PDCA method as well as further training in practical use. Educational actions in order to get knowledge about the method are essential to be able to perform continuous improvement initiatives based on PDCA [START_REF] Langley | The improvement Guide. A practical Approach to Enhancing Organizational Performance[END_REF]. A firm competitive is not so much production equipment but rather what it knows and how it behaves [START_REF] Bessant | An Evolutionary Model of Continuous Improvement Behavior[END_REF]. Ishikawa stated that continuous improvement starts with education and ends with education [START_REF] Ishikawa | What is Total Quality Control? The Japanese Way[END_REF]. This is supported by Caffyn who found that lack of training is an inhibiting factor for extending continuous improvement to the new product development process [START_REF] Caffyn | Extending Continuous Improvement to the New Product Development Process[END_REF]. Finding from the study done by Yan and Makinde shows that management must prioritize training opportunities to all employees to assure that they really understand what continuous improvement is about [START_REF] Yan | Impact of Continuous Improvement on New Product Development within SMEs in the Western Cape, South Africa[END_REF]. This is substantial to be able to apply the method in a time efficient way as the intention of PDCA is. When to Apply PDCA When to apply PDCA is not clearly identified [START_REF] Bessant | An Evolutionary Model of Continuous Improvement Behavior[END_REF]. Not all continuous improvement initiatives require use of PDCA, however purposeful improvement in large or complex systems will be appropriate to use PDCA [START_REF] Langley | The improvement Guide. A practical Approach to Enhancing Organizational Performance[END_REF]. Common understanding about when to apply the method could be an advantage in order to avoid frustrated employees that do not know what type of continuous improvement initiatives they shall apply the PDCA method on. This will probably be more clarified when the organization has used the PDCA for a while. 
With more experience, one knows when PDCA can be applied efficiently and when it cannot. The dedicated project team in this research study had performed some continuous improvement projects by applying PDCA. In spite of this, it was not sufficient to have an overview of when to apply PDCA. Efficient performance Today there are several definitions in the literature of the four phases of PDCA, all with the purpose of assuring continuous improvement and problem solving. It is important for organizations to clearly define the concept together with well-prepared and user-friendly templates. Western companies are known for being too quick to conclude on solutions; therefore it is important to define the first phase clearly and to secure enough time to collect data to achieve fact-based solutions [START_REF] Sobek | Understanding A3 Thinking, A Critical Component of Toyota`s PDCA Management System[END_REF]. Internal marketing activities The use of the intranet, or similar types of media, to communicate the results from previous improvement projects to the affected parts of the organization is, among the participants, regarded as an important topic. The use of internal marketing activities to present a clear and credible plan for the organization, together with demonstrated results from continuous improvement initiatives, is an essential element in motivating people and in growing a culture of continuous improvement [START_REF] Imai | Kaizen. The Key to Japan's Competitive Success[END_REF]. Communication about where the company is heading and how it will get there generally makes people more enthusiastic about the improvement process. This may indicate that visualization of results from continuous improvements, which must be common for all affected employees, is important to drive successful improvement projects. In a study done by Caffyn and Grantham the results show that all companies studied lacked a strategic approach to the development of continuous improvement and to capturing, sharing and deploying learning from the improvements [START_REF] Caffyn | Enabling Continuous Improvement of New Product Development Process[END_REF]. Many people have difficulties sharing knowledge across teams and sometimes even with people in their own team. Internal marketing activities can make this knowledge more accessible to others and enable reuse of the knowledge they have. Capturing and sharing the knowledge created is important in delivering high quality products [START_REF] Radeka | Lean Product Development at Playworld System[END_REF]. Concluding remarks Based on action research and a brain writing workshop involving one of the development teams, we have identified the main success factors and their subgroups needed to succeed in using PDCA in product development. From observations in our study we can propose five aspects which are essential to be aware of if you want to implement the specified improvement method, PDCA, in product development. The first, and the most important one, is the commitment of the management. This aspect is important in order to establish the method as a standardized improvement method. As a strategy for continuous improvement initiatives, PDCA must be adopted by the management as a method which they use in their daily work for the suggested purpose. When to use the defined improvement method, combined with sufficient competence in how to use the method, are the next two success factors. 
The fourth aspect is to ensure efficient ways to perform the method, such as using well-prepared, user-friendly templates with a thought-through, common way of handling the improvement issues. The last aspect is to ensure internal marketing of the application of the PDCA method, with specific results from implemented improvement initiatives to show the efficiency of using PDCA. This will contribute to the sustainability of the use of the method if people can see the benefit of using PDCA.
Fig. 1. PDCA.
Table 1. Success factors when applying PDCA (main success factor: subgroups; number of feasible and relevant ideas):
Management commitment: walk like you talk; engagement; sustainability; included in the strategy (20 ideas).
Knowledge on how to use PDCA: education; training of practical use; knowledge on how to use PDCA in the entire organization; training included in the budget (22 ideas).
When to apply PDCA: predefined which type of continuous improvement project (6 ideas).
Efficient performance: a time-efficient method; PDCA method applied as simply as possible; use of PDCA must be a choice in their existing "to do list" (11 ideas).
Internal marketing activities: use of intranet in general; publishing of implemented PDCA; a specialist continuously marketing the method; hang up a notice (A3) in the office area (13 ideas).
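Read as a workflow, the seven steps of Fig. 1 can also be expressed in a few lines of code. The sketch below is only an illustration of that reading, not a tool from the study; the class, the example initiative and the notes are invented.

```python
# Minimal sketch of tracking a continuous improvement initiative through
# the seven PDCA steps of Fig. 1. The example initiative is invented.

PDCA_STEPS = [
    "Definition of improvement initiative - background & target",
    "Collecting of facts",
    "Root cause(s) analysis",
    "Choose action & make an action plan",
    "Implement according to action plan",
    "Study the results",
    "Standardization & transfer of knowledge",
]

class ImprovementInitiative:
    def __init__(self, title: str):
        self.title = title
        self.current_step = 0   # index into PDCA_STEPS
        self.log = []           # (step, note) pairs, e.g. for an A3 report

    def complete_step(self, note: str) -> None:
        """Record the outcome of the current step and move to the next one."""
        self.log.append((PDCA_STEPS[self.current_step], note))
        if self.current_step < len(PDCA_STEPS) - 1:
            self.current_step += 1

    def is_closed(self) -> bool:
        return len(self.log) == len(PDCA_STEPS)

# Usage with a fictitious initiative from a product development team:
initiative = ImprovementInitiative("Reduce late design changes on bracket X")
initiative.complete_step("Target: 50 % fewer late changes within 6 months")
initiative.complete_step("Collected change requests from the last 3 projects")
print(PDCA_STEPS[initiative.current_step])  # -> "Root cause(s) analysis"
```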
18,830
[ "1001938", "991791", "1002219" ]
[ "50794", "556764", "301160", "50794" ]
01472305
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472305/file/978-3-642-40352-1_82_Chapter.pdf
Siavash Javadi Sasha Shahbazi Mats Jackson Bruksgatan 21a Supporting Production System Development through the Obeya Concept Keywords: Production system development, Obeya, Kaikaku, Kaizen, Data visualization Introduction The manufacturing industry is one of the dominant sectors of the European economy, providing jobs for around 34 million people and producing an added value exceeding €1 500 billion from 230 000 enterprises with 20 and more employees. Also, a large part of the growing service sector in Europe is linked to the manufacturing companies. However, the manufacturing industry in Europe faces intense and growing competitive pressure on several fronts. Although innovative and effective organization of operations has been the basis for industrial success and competition since the days of Ford, current challenges put new and stronger pressure on European manufacturing industry than ever before. Globalization, demographic changes, environmental challenges and new values drive increased demands for resource efficiency, sustainable manufacturing, and innovative and individualized products. Manufacturing in mature traditional sectors is increasingly migrating to low-wage countries such as China, India, Mexico and Brazil, but these countries are not standing still in their development. On the contrary, they are rapidly modernizing their production methods and enhancing their technological capabilities, in many cases building new greenfield sites, which means that they not only have low labor costs but also the latest technology. In meeting such extensive competition, Swedish manufacturing industry both needs to build on existing strengths and find new ways to compete. One solution is to build on the under-utilized potential of innovative production development instead of mainly emphasizing the operations phase, i.e. running production. Our industrial historical base and infrastructure give particularly good preconditions for Swedish manufacturing companies to compete with innovative production development as a very effective strategy. Also, to succeed in developing new efficient products and processes and thereby withstand and handle the global competition, continuous development and improvements as well as radical changes around existing production processes and technology are required. Thus, innovation in relation to production is becoming a crucial area which includes e.g. new business models, new modes of 'production engineering', efficient industrialization of new products and an ability to profit from groundbreaking manufacturing sciences and technologies. Innovation in both new production technology and new ways of working during development and operations is often difficult for competitors to get hold of and copy. Hence, it falls into the competitive advantage category of differentiation as a way to take offensive action in creating a defendable position in industry and generating a superior return on investment, according to the definition of [START_REF] Porter | The competitive advantage of nations: with a new introduction[END_REF]. In summary, the challenges facing the manufacturing sector in Sweden require radical transformations from a cost-based to a value-based focus. An ability to constantly adapt and improve the production operations and working procedures will bring about the required changes. 
To tackle them appropriately, manufacturing companies need to invest in creativity, entrepreneurship and new innovation models, specifically in the area production system development. Thus, an overall objective and research question is how to support innovation in production system development. One of the tools introduced by lean philosophy is Obeya or war room for development projects which will be explained in details in section 2.2 of this paper. In this research we are trying to illustrate how Obeya or similar meeting places can support innovative production system development. 2 Theoretical Framework Production System Development Different research traditions have contributed to the current state of knowledge concerning production system development. From an operations strategy perspective, [START_REF] Hayes | Link manufacturing process and product life cycles[END_REF] introduced the product-process matrix in order to choose production system layout according to product and process life cycle stage. [START_REF] Miltenburg | Manufacturing Strategy: How To Formulate And Implement A Winning Plan[END_REF] defines seven production systems and put them in a matrix in order to analyze similarities and differences between them. However, according to [START_REF] Cochran | A decomposition approach for manufacturing system design[END_REF], [START_REF] Miltenburg | Manufacturing Strategy: How To Formulate And Implement A Winning Plan[END_REF] and [START_REF] Hayes | Link manufacturing process and product life cycles[END_REF] fail to communicate how lower level design decisions, such as equipment design, operator work con-tent and so on, will affect system performance. These approaches treat production system design as a problem of selecting an appropriate off-the-shelf design from a given set of choices and criteria. Designers are not given the freedom to create a unique production system to satisfy a broad set of requirements in a particular environment. Examples of research from an industrial engineering perspective in the production system design area are the technology focused book by [START_REF] Bennett | Production Systems Design[END_REF], methods based on Integrated Definition for Function Modeling 0 (IDEF0) by for example [START_REF] Wu | A unified framework of manufacturing systems design[END_REF] and methods based on the function/solution mapping in Axiomatic Design, such as [START_REF] Suh | The Principles of Design[END_REF], [START_REF] Kulak | A complete cellular manufacturing system design methodology based on axiomatic design principles[END_REF], [START_REF] Cochran | A decomposition approach for manufacturing system design[END_REF], and [START_REF] Almström | Development of Manufacturing Systems -A Methodology Based on Systems Engineering and Design Theory[END_REF]. The system approach is taken on the production system problem by [START_REF] Seliger | Knowledge-based Simulation of Flexible Manufacturing Systems[END_REF]. Examples of other approaches for systematic design and evaluation of production systems are: Bellgran (1998), [START_REF] Säfsten | Evaluation of Assembly Systems: An Exploratory Study of Evaluation Situations[END_REF], [START_REF] Bellgran | Produktionsutveckling, Utveckling och drift av produktionssystem[END_REF], [START_REF] Wiktorsson | Performance assessment of assembly systems[END_REF] while methods based on the stage gate method e.g. [START_REF] Ulrich | Product Design and Development[END_REF] are developed further by e.g. 
[START_REF] Blanchard | Systems Engineering and Analysis[END_REF] and [START_REF] Wu | Manufacturing Systems Design and Analysis: Context and techniques[END_REF]. Innovation in a production system development perspective is given by Manufuture which describes the need of innovating production by "…important research, innovation and education activities that could transform the competitive basis of producing and delivering products and services that reach a new level in satisfying society's desires and expectations" (Manufuture, 2006). Innovation in production can also be related to improvements and changes within the production system, innovative production capabilities. In general, two approaches towards production system improvements are commonly recognized: (1) incremental / continuous improvements and (2) infrequent and radical improvements. The first type (called Kaizen in Japanese) is a well-known approach for improving production. Kaizen became widely known after the introduction by [START_REF] Imai | Kaizen (Ky'zen), the key to Japan's competitive success[END_REF] and is widely used within the lean production paradigm. The key characteristics of Kaizen are often described as continuous, incremental improvement in nature, participative, and processoriented. The concept has been extensively described, and a number of supporting methods and tools have been developed and widely applied in industry. The radical improvement approach or "Kaikaku" in Japanese has also been conducted by many companies. However, it has been less documented and conceptualized compared to continuous improvement. Radical changes are conducted infrequently, involving some fundamental changes within production and causing dramatic performance gain, and they are often initiated by top or senior management (Yamamoto, 2010). Obeya In this paper we studied Obeya as an innovation support tool for production system development with both above mentioned approaches. The Obeya concept is a part of Toyota product development system which has been used as a project management tool in Toyota. The concept was introduced during the development process of Prius in late 90's and since then it has become a standard tool for product development projects in Toyota [START_REF] Morgan | The Toyota Product Development System: Integrating People, Process, and Technology[END_REF] . Obeya in Japanese simply means "big room". However, it has also been called with other names such as "war room", "program room", "control room" and "the pulse room" in different researches and companies. By any name, Obeya is an advanced visual control innovation room where activities and deliverables are outlined and depicted in a visual format to be discussed in frequent meetings. A cross functional team including design and production engineers and other decision makers gathers in a single big room to make real time key decisions on the spot. [START_REF] Andersson | Spatial design and communication for Improved Production Performance[END_REF] assert that Obeya saves the time since it is not required to move to conference room or others rooms since people are already present in a single room to provide information and answer the questions. 
In Obeya it is not just the chief engineer who manages the process but all involved people contribute in the decision making process [START_REF] Liker | The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer[END_REF], which leads to higher level of cross-functionality in the process [START_REF] Söderberg | Building on Knowledge, An analysis of knowledge transfer in product development[END_REF]. Effective data visualization is another benefit of Obeya [START_REF] Söderberg | Building on Knowledge, An analysis of knowledge transfer in product development[END_REF]. The big room's walls are covered by different types of data to help the project team to make more informed decisions through simple and instant access to all required information in one place simultaneously. Visualized data can be designs and drawings, schedules and plans, technical specifications etc. "Engineers plaster the room's walls and mobile walls with information organized by vehicle part… and this information allows anyone walking the walls to assess program status (quality, timing, function, weight) up to the day" [START_REF] Morgan | The Toyota Product Development System: Integrating People, Process, and Technology[END_REF]. [START_REF] Andersson | Spatial design and communication for Improved Production Performance[END_REF]describe the benefits of Obeya as following:  Helping to make plan, do, check and action cycle shorter through gathering all decision makers in a single place  Facilitating communication between team members through face to face daily contact  Supporting the product development through combination of effective communication and proper technology  Providing an infrastructure for idea generation and development for both new products and cost reduction. Since very few studies have been done about obeya and its application to production development, we studied the current practice of using Obeya or similar meeting places in lean companies directly. We have studied four companies in Sweden who have been working according to the lean principles for several years. The goals of study were to understand ─ The uses of meeting places and its contribution to production system development ─ The methodology and work process related to those uses ─ Data visualization methods and tools used. Research Method To gather required information, semi-structured interview and direct observation techniques were used for the case studies. That type of interview was chosen for this research due to flexibility, allowing discussing and causing to come up with new questions during the interview. At each company along with interviews, production processes observed directly, meeting places were visited and in one of the cases authors participated in the daily morning meeting of the company for daily production issues. All meetings, interviews and visits were documented through voice recording and its transcription as well as taken notes. All studied companies are a part of international companies or groups which are considered as one of the leading names in their industries. Cases are named company A, B, C and D and they belong respectively to material handling equipment, automotive, construction and automotive parts industries. Company A has about 1800 employees in 5 assembly lines and 2 production departments. Company b is large size company with almost 1200 employees working in 3 different main departments. Company C is medium size company with almost 100 employees with a single production line. 
Case D is also a medium-size company with almost 150 employees and 5 different product assembly lines. A cross-case analysis was done in order to compare the gathered information and data from the cases and their uses of meeting places for production system development. Results The data gathered from interviews and visits show the following results: A single meeting place is used in company A to manage daily production problems and kaizen projects in the whole factory. Every department and line has its own 5 to 10-minute morning meeting in the meeting place. Meetings are held to discuss the previous day's problems of the related line with the people involved in the problem. Predefined A4 forms are used for registering the problem. The responsible person has 24 hours to find the root cause and suggest a temporary or permanent solution for it. The data registration and visualization processes are entirely manual. Forms and reports are kept on the room walls as a visualization tool for follow-ups. A similar space with similar design and tools is also used for problems related to suppliers. In company B, each department and line has its own meeting place. There is a general design for meeting places in the production and assembly departments which includes daily data about quality, production and safety issues. The maintenance department has its own special room design. There are a number of customized visualization tools, including different schedules and reports for ongoing maintenance kaizen projects. Generally, the same process as in company A is followed in company B. Short meetings of about 10 minutes are held every morning with the main actors, but no deadline exists for finding the root cause and solution. Company C has a single meeting place which is used for 30-minute morning meetings about production problems, solutions and kaizen projects. Unlike the other three cases, company C's meeting space is a separate room away from the production line because of noise disturbance on the line. In addition to conventional visualization tools like whiteboards, forms and reports, improvement tags are also used to mark the source of the problem in the production line. Digital tools are also used for visualizing some information about the current situation of production. Company D has one meeting place for each of its five active production lines. Ten-minute morning meetings are held with the contribution of line operators and the supervisor, mainly to follow the production rate and its fluctuations, but some kaizen projects are also followed up in those meetings. A few basic visualization tools, including whiteboards and A4 forms for kaizen projects, are used. In all of the cases, kaizen projects refer to minor production system development, mostly initiated by problems in the production process, product defects, deviations from production schedules or safety incidents. The A3 reports in all cases are more or less similar and come from the Toyota data visualization system. They are a single piece of A3 paper which simply shows and documents the whole process of identifying a problem in the production system and developing a solution for it [START_REF] Liker | The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer[END_REF]. Table 1 shows the summary of the results from the data gathered through the interviews and visits. Discussion and Conclusion As the results indicate, in all cases the meeting spaces are used only for performing incremental changes which are basically minor modifications in production systems. 
These modifications are mostly initiated by the occurrence of minor problems in the production process, defective products or safety issues, and solutions are developed using lean tools like the 5 whys and 5 Ws, which are the main tools for preparing A3 reports. Currently there is no indication of using such meeting spaces for radical changes in production system development, such as developing and implementing a new production system or generally modifying current production systems, in those companies. In such cases production system development is mostly considered a part of the product development process because of its dependence on developing new products [START_REF] Bruch | Management of Design Information in the Production System Design Process[END_REF]. But in practice it is a huge, complex, separate project. Obeya meeting spaces can be used for acquiring and generating production system development information, for example in idea development sessions and when designing the production system. They can also be a very useful tool for sharing and using information, especially during the implementation of radical changes in production systems. In addition, current meeting spaces are not adequately capable of transferring data and results to involved internal actors, like people at other production sites, as well as external ones, like suppliers. This could be mainly because of the total dependence of those spaces on non-digital tools. As Bruch (2012) explains, design information management, as a critical part of production system design and development, consists of three main parts: acquiring, sharing and using design information. Obeya or similar meeting spaces can be used for these purposes in the production system design and development process. Using digital tools can help the two latter parts by facilitating sharing and using acquired data in a faster and more effective manner. In summary, the review of the Obeya concept, its advantages and its current practice in industry shows that it can be applied to other purposes than product development projects. The case studies show that similar meeting places are already used for incremental production system development projects. Radical improvements can benefit even more from this concept because of their nature, which requires implementing great changes in a short time and usually demands a considerable amount of close teamwork. But to maximize the benefits, the methods and visualization tools used in a conventional Obeya should be customized and adapted to this purpose.
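As an illustration of the digital-tools argument above, the following sketch shows one possible way to capture a daily-meeting improvement item, such as the A4/A3 forms and the 24-hour root-cause deadline observed at company A, as a shared digital record. It is only a sketch, not a tool used by any of the case companies, and the field names and example values are assumptions made for the example.

```python
# Minimal sketch of a digital record for an Obeya-style daily-meeting
# improvement item, so it can be shared beyond the physical room.
# Field names and the example values are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ImprovementItem:
    line: str                           # production line or department
    problem: str                        # as registered in the morning meeting
    responsible: str
    registered_at: datetime
    root_cause: Optional[str] = None
    countermeasure: Optional[str] = None
    a3_reference: Optional[str] = None  # link to the A3 report, if any

    def root_cause_deadline(self) -> datetime:
        """Company A practice: 24 hours to propose a root cause and solution."""
        return self.registered_at + timedelta(hours=24)

    def is_overdue(self, now: datetime) -> bool:
        return self.root_cause is None and now > self.root_cause_deadline()

# Usage with invented values:
item = ImprovementItem(
    line="Assembly line 3",
    problem="Missing fastener kits at station 12",
    responsible="Line supervisor",
    registered_at=datetime(2012, 5, 14, 7, 30),
)
print(item.is_overdue(datetime(2012, 5, 15, 9, 0)))  # -> True
```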
Table 1. Summary of gathered data about production development meeting spaces
Company A: Meeting place type: single place for all lines. Purpose: Kaizen projects. Visualization tools: predefined A4 forms, A3 reports, boards. Meeting time: 5-10 minutes.
Company B: Meeting place type: multiple customized places for each department. Purpose: Kaizen projects; general development projects for the maintenance department. Visualization tools: predefined A4 reports in the lines, customized reports, schedules and charts, boards. Meeting time: up to 10 minutes.
Company C: Meeting place type: single room for a single production line. Purpose: Kaizen projects. Visualization tools: predefined A4 reports, problem reporting tags, digital screen for production status, boards. Meeting time: up to 30 minutes.
Company D: Meeting place type: multiple places for each assembly line. Purpose: production, quality and safety control. Visualization tools: simple quality and production A4 reports, boards. Meeting time: up to 10 minutes.
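To make the point about digital tools in the conclusion above slightly more concrete, the following is a minimal sketch of how an A3/Kaizen record could be represented digitally so it can be shared with other sites or with suppliers. The class, field names and JSON exchange format are illustrative assumptions, not observed practice in the case companies.

```python
# Illustrative sketch of a digital A3/Kaizen record that could be shared
# between sites or with suppliers. Field names and the JSON format are
# assumptions, not taken from the case companies.
import json
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class A3Record:
    company: str                 # e.g. "A", "B", "C" or "D"
    line: str                    # production or assembly line concerned
    problem: str                 # problem raised in the daily meeting
    five_whys: list = field(default_factory=list)  # chain of "why" answers
    root_cause: str = ""
    countermeasure: str = ""     # temporary or permanent solution
    owner: str = ""              # person responsible for follow-up
    due: str = ""                # deadline, e.g. 24 hours after registration
    status: str = "open"         # open / in progress / closed

    def to_json(self) -> str:
        """Serialize the record so it can be shared beyond the meeting room."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = A3Record(
        company="A",
        line="Assembly 2",
        problem="Deviation from yesterday's production schedule",
        five_whys=["Machine stop", "Missing spare part", "Late supplier delivery"],
        root_cause="No reorder point defined for the spare part",
        countermeasure="Define reorder point and safety stock",
        owner="Line supervisor",
        due=str(date.today()),
    )
    print(record.to_json())
```

Such a record could complement, rather than replace, the physical boards and A4 forms described in the cases; its only purpose here is to show how acquired meeting data might be shared and reused across locations.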
25,302
[ "1002220", "1002221", "1002222" ]
[ "301185", "301185", "301185" ]
01472306
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472306/file/978-3-642-40352-1_84_Chapter.pdf
Stefan Wellsandt Thorsten Wuest Christopher Durugbo email: [email protected] Klaus-Dieter Thoben The Internet of Experiences -Towards an Experience-Centred Innovation Approach Keywords: Experience, Innovation, Internet of Things, Artificial Consciousness
Introduction
Driven by globalization, competition among enterprises and enterprise networks has led to the advent of the knowledge worker responsible for constant innovation. Being one step ahead of competitors can be a significant core competence, resulting in economic, societal and ecological returns. As an effect of shortened product lifecycles, partly caused by the rapid developments in information and communication technologies, companies have had to improve innovation frequency and quality. To realize this goal, the innovation process itself was rethought to take all available sources of innovation into account, be they inside or outside the enterprise; the idea of open innovation was born [START_REF] Chesbrough | Open Innovation[END_REF]. One of the richest sources for the new innovation paradigm is the user. The user's experience, created during daily life and interaction with products and services, is an ideal source for enterprises to learn how to satisfy people's needs. The importance of users' experience is also reflected in the development of various tools to capture it, e.g. Living Labs [START_REF] Eriksson | State-of-the-art in utilizing Living Labs approach to user-centric ICT innovation -a European approach[END_REF]. Changes in the innovation domain were also influenced by prominent societal shifts induced by technological paradigms such as the participatory internet. Over recent years, with the development of web 2.0 after the bursting of the dot-com bubble [START_REF] O'reilly | What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software[END_REF] and the rise of social networks, the internet itself became more social and a place for communication and social interaction [START_REF] Gosling | Manifestations of Personality in Online Social Networks: Self-Reported Facebook-Related Behaviors and Observable Profile Information[END_REF]. The social component is steadily becoming more important and builds a basis for new and innovative advances. Within this development, concepts and tools for gathering, sharing and distributing information and knowledge appeared and became widely popular, Wikipedia being the most famous and most commonly used, alongside platforms such as LycosIQ [START_REF] Alby | Web 2.0 -Konzepte, Anwendungen, Technologien[END_REF] and ResearchGate. Another example of knowledge sharing through the internet is Amazon's review function, which allows customers to share their experience-based knowledge about products. User-created content within web 2.0 appears in different forms and qualities, ranging from data and information to knowledge and experience. The relation between different qualities of content is illustrated in Fig. 1. Fig. 1. Illustration of information and knowledge management terms (based on [6] & [7]) From a general point of view, most authors support the definition of Probst et al. that linking information allows its use in a certain field of activity, which can be interpreted as knowledge [START_REF] Probst | Wissen managen -Wie Unternehmen ihre wertvollste Ressource optimal nutzen[END_REF]. In Fig.
1, contextualization of data and information towards basic knowledge and know-how is shown. Until know-how, it is challenging but possible to save and share, though it becomes hard, if not impossible, for experience and expertise due to their individual character. Referring back to the participative character of web 2.0, another development is interesting called Internet of Things (IoT). One of the central ideas of IoT is the extension of the internet into the physical world to embrace everyday objects [START_REF] Mattern | From the Internet of Computers to the Internet of Things[END_REF]. The IoT is realized through networked systems of self-organizing objects that interact autonomously, and related processes that lead to an expected convergence of physical things with the virtual world of the internet [START_REF] Brand | Internet der Dinge -Perspektiven für die Logisitk[END_REF]. One of the central aspects of the IoT is that objects are able to process information, communicate amongst each other and with their environment, and make autonomous decisions, thus becoming "intelligent" [START_REF] Mattern | From the Internet of Computers to the Internet of Things[END_REF], [START_REF] Isenberg | The Role of the Internet of Things for Increased Autonomy and Agility in Collaborative Production Environments[END_REF], [START_REF] Hribernik | Service-oriented, Semantic Approach to Data Integration for an Internet of Things Supporting Autonomous Cooperating Logistics Processes[END_REF]. Closely related to the principles of IoT are intelligent products (IP). A well-accepted definition for intelligent products is the following [START_REF] Mcfarlane | Auto ID systems and intelligent manufacturing control[END_REF]: "[...] a physical and information based representation of an item [...] which possesses a unique identification, is capable of communicating effectively with its environment, can retain or store data about itself, deploys a language to display its features, production requirements, etc., and is capable of participating in or making decisions relevant to its own destiny." Up to now, IPs are not intelligent in a human sense [START_REF] Erickson | Social systems: designing digital systems that support social intelligence[END_REF]. They can be subject of interaction but typically lack complex learning abilities. However, the ability to perceive and communicate experienced situations raises the question how benefits can be taken out of these contents. One idea is to systematically consider their individual experience in order to complement existing open innovation approaches mainly based on user-experience. This paper intends to identify similarities between two inputs of open innovation processes, i.e. user-experience and potential object-experience, in order to depict a future platform to share experiencesthe Internet of Experiences (IoE). In the second section, the approach towards an Internet of Experiences will be introduced. This section covers an overview into experience from a knowledge management perspective and an elaboration on experience in natural and artificial conscious systems. The third section provides a description of the experience-centred approach of the IoE and answers the question of what the IoE could look like. Finally, the paper is concluded and an outlook is given, as well as a short paragraph of the limitations of the approach. 
Approach The scientific approach that is applied in this paper consists of two aspects: the recent understanding of "experience" from the perspective of knowledge management, and similarities between user-experience and the experience gained by artificial systems such as Intelligent Products. Experience from a knowledge management perspective The term "experience" is used and defined differently among research fields. Some of the more prominent fields dealing with experience are cognitive sciences (e.g. enactive framework) and open innovation research (e.g. Living Labs). In cognitive sciences, definitions for (human) experience can be found by arguing that experience is closely related to questions about what a situation or an activity feels like [START_REF] Rhode | Enaction, Embodiment, Evolutionary Robotics: Simulation Models for a Post-Cognitivist Science of Mind[END_REF]. The concept of experience therefore is strongly defined by its subjective character, making it difficult to be addressed in a formal and systematical way in science. This is especially true for scientific disciplines that primarily focus on measurable results like those commonly used in engineering and information technology contexts. While cognitive sciences deal with experience in a broader way, other domains try to focus on certain subjects or categories of experience. In the area of open innovation research, subject of experience are people that interact with products or servicesthis experience is stated as user-experience. Within innovation research, user-experience is frequently utilized in the context of Living Lab approachesinnovation ecosystems typically utilizing user-experience with ICT technology and related artefacts [START_REF] Folstad | Living Labs for Innovation and Development of Information and Communication Technology: A Literature Review[END_REF]. User-experience is defined in ISO 9241-210 as "[…] a person's perceptions and responses that result from the use or anticipated use of a product, system or service". It can be expressed through feedback from the users in a codified way (e.g. questionnaire) or interviews. Within innovation ecosystems, formalized user-experience is evaluated and used to create or adapt ICT-services and products respecting user requirements. Other domains specify experience according to different content such as software-experience in computer sciences. According to Conradi and Dybå, software-experience is a composition of experimental data and aggregated models (i.e. knowledge) on these data [START_REF] Conradi | An empirical study on the utility of formal routines to transfer knowledge and experience[END_REF]. The final example for experience raises an important point about the ambiguous relation between knowledge and experience. In order to better distinguish the different terms, especially related to knowledge, the point of origin of different intellectual capital (IC) types is used as illustrated in Fig. 2. Fig. 2. Point of origin for different kinds of intellectual capital (based on [6] & [7]) Points of origin are differentiated into non-interactive (observation and extraction) and interactive ones. Based on this separation, a major difference between knowledge and experience is the fact that the latter is only created during interaction. However, an aspect that makes clear segmentation of IC types difficult is the ambiguous nature of knowledge in literature. As Nonaka proposed in the early 1990s, knowledge consists of explicit and tacit (i.e. 
implicit) elements [START_REF] Nonaka | The Knowledge-Creating Company. Managing for the long term: Best of HBR[END_REF]. Tacit knowledge can be gained through observation, imitation and practice. In this paper, tacit and explicit character of intellectual capital is seen as a continuum across the four IC types in knowledge management. The main purpose of the continuum is to address difficulties arising from tacit knowledge in relation to the question where it fits best in Fig. 2. Different scientific perspectives on "experience" result in different understandings and definitions of the term. In order to avoid discussions digressing into the domain of philosophy, or scientific domains where in-depth discussions about "experience" are unavoidable, a broad definition is suggested for the purpose of this work. The defini-tion takes into account findings in cognitive science, open innovation research, points of origin from IC types and is influenced by findings of Davis in [START_REF] Davis | Theoretical Foundations for Experiential Systems Design[END_REF]: "Experience is an individual and in-tangible consequence of an interaction between a conscious system and real or digital entities inside or outside the system. Experience is related to explicit or tacit elements in the form of associated data, information, basic knowledge, or know how." Experience in natural and artificial conscious systems In the previous sections, it was pointed out that user-experience is a high value source for innovation processes. In this section, it will be examined whether there is evidence for object-experience or not. According to the definition proposed in section 2.1, several conditions need to be evaluated in order to assume that experience can be made by artificial systems such as intelligent products: ─ The artificial system has to be conscious ─ There has to be interaction between artificial system and other entities ─ Consequence of the interaction must be individual and in-tangible The consciousness of artificial systems is subject of investigation in the scientific domain of artificial consciousness [START_REF] Haikonen | The Cognitive Approach To Conscious Machines[END_REF], [START_REF] Koch | Can Machines Be Conscious?[END_REF]. Assuming that, for example, an intelligent product is some kind of machine, we likewise assume that there is a general possibility that it can be conscious. The second condition refers to the interaction between artificial system and other entities. As described in the introduction, IPs have communication and decision-making abilities enabling interactive behaviour. Therefore, we consider the second condition as fulfilled. Since the first two conditions are met, artificial systems such as IPs can be seen as generally capable of drawing consequences from interaction. In order to make clear that consequences belong to a specific artificial system, each system needs to have an identifier making it an individual element. For IPs, identifiers can be, for example, RFID tags or barcodes. Grounded on the artificial nature of intelligent products, their storage unit (e.g. hard disk) contains digital content. This leads to the conclusion that consequences related to interaction are in-tangible. Based on the examination of the three proposed conditions above, it is concluded that artificial conscious systems, like intelligent products, are capable of making experience. 
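To make the three conditions above slightly more tangible, the following is a purely illustrative sketch of an "intelligent product" that holds an individual identity and records the consequences of its interactions as digital content. The class and field names are assumptions introduced for illustration, and nothing in the sketch implements consciousness, which, as discussed, remains an open research question.

```python
# Purely illustrative sketch: an "intelligent product" with a unique identity
# that records consequences of its interactions as digital (intangible) content.
# Names are assumptions; no claim is made that this implements consciousness.
import uuid
from datetime import datetime, timezone


class IntelligentProduct:
    def __init__(self, product_type: str):
        # Condition: individual identification (cf. an RFID tag or barcode)
        self.product_id = str(uuid.uuid4())
        self.product_type = product_type
        # Consequences of interaction, stored as intangible digital content
        self.experiences = []

    def interact(self, counterpart: str, observation: str) -> None:
        """Record the consequence of an interaction with another entity."""
        self.experiences.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "counterpart": counterpart,
            "observation": observation,
        })

    def share_experience(self) -> list:
        """Expose recorded experiences, e.g. for consolidation with similar
        products or as input to an open innovation process."""
        return list(self.experiences)


if __name__ == "__main__":
    appliance = IntelligentProduct("household appliance")
    appliance.interact("user", "function selected twice in quick succession")
    appliance.interact("service tool", "diagnostic routine completed")
    for entry in appliance.share_experience():
        print(entry)
```

The sketch only shows that an identifiable artefact can accumulate an individual interaction history; whether and how such histories amount to "experience" in the sense defined above is exactly what the following section discusses.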
The Internet of Experiencesexperience-centred innovation The general capability of natural (user) and artificial systems (smart product) to make their own experience during interaction, leads to the question how these kinds of experience can be transformed into benefits. Referring back to the introduction of this work, innovation is a key driver of sustaining competitive advantages. While common innovation processes take user-experience into account, it is reasonable to ask if object-experience can be considered likewise. With respect to the developments in the open innovation domain, it is assumed that the quality of innovations increases with the number of experiencing systems participating in the innovation process thus creating more valuable outputs. This assumption, but also the important role of experience innovation processes, is supported by findings of Taylor and Greve [START_REF] Taylor | Superman or the Fantastic Four? Knowledge Combination and Experience in Innovative Teams[END_REF]. Artificial conscious systems, such as intelligent products, are connected through the Internet of Things. Through this network, data and information are shared to allow new product-based services and enable new product functionalities. With the experience-making ability of IPs, "things" in the IoT can go beyond simple sensing (collection of data and information). From a knowledge perspective, they can become actors, sharing through the internet what they learned or explored. Since the participative internet is already a place where experience is discussed, new actors providing additional input from a new perspective seem to be promising. The networked character of the internet can also help to handle interrelated user-and object-experience in order to derive further conclusions. The joint consideration of user-experience and objectexperience could be beneficial, for example, to identify requirements for new products and services. User behaviour or specific requirements that aren't articulated by users (e.g. through questionnaire or interview) might be revealed by considering the perspective that intelligent products, as interaction counter-parts, can take. IPs could reason interaction behaviour of formerly experienced situations and consolidate with similar or complementary products through the internet. The consolidation process is meant to identify whether an experience is related to a single or multiple spatial-temporal contexts. Based on the consolidation, the artificial system proposes aspects with hidden innovation potential. Examples for these aspects can be complementary functions of an existing product (incremental innovation) or novel products (radical innovation). These suggestions can be further elaborated and consolidated based on userexperience in the internet, potentially leading to better and/or faster innovation. This depiction of the Internet of Experiences is summarized in Fig. 3. Fig. 3. Depiction of the Internet of Experiences Conclusion and Outlook The paper depicted an innovation approach that is centred on experience utilizing an Internet of Experiences. Based on developments in the areas of web 2.0, IoT, knowledge management and open innovation, an experience-centred approachthe Internet of Experiencesseems promising to complement human-centred innovation with experiences from artificial systems. Different understanding of "experience" in scientific domains was presented in order to suggest a wider definition for the term. 
Grounded in this definition, the general experience-making ability of artificial systems was argued. Based on these findings, an experience-centred innovation approach and the Internet of Experiences were depicted. Since the experience-centred approach of this work is still under development, further effort is needed to provide a sufficient foundation for its assumptions (e.g. the experience-making ability) and the derived theoretical concepts. Some of the unaddressed challenges of this work stem from the current state of research on intelligent artificial systems, e.g. machine learning, artificial consciousness or the Internet of Things. For example, artificial systems such as intelligent products often lack cognitive abilities compared to artificial systems in laboratory environments. Other open issues are the ontological relationships between experiences, as well as the importance of experience and other intellectual capital types for the innovation process. Furthermore, it needs to be elaborated how user-experience and object-experience can be combined at the operational level in order to facilitate innovation.
Limitations
The intention of the approach introduced in this work is not to elaborate in depth what "experience" is, especially in relation to the domains of neuroscience and the human brain. Furthermore, the large field of cognitive sciences is only covered briefly, to give a basic understanding of aspects that should be considered when dealing with experience as such.
18,969
[ "990157", "991770", "1000950", "989864" ]
[ "217679", "217679", "220393", "217679" ]
01472307
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472307/file/978-3-642-40352-1_85_Chapter.pdf
Morten Lund email: [email protected] Innovating a business model for services with storytelling Keywords: Business models, Free-business models, narratives, storytelling, business model innovation, archetypes In recent years, the notion of business models has been able to innovate the way companies create new business opportunities. However, because business models most often rest on a complex interplay of several actors, there is a need to be able to explore the nature of a business model. This paper will propose to describe a business model by means of storytelling. Also, the paper will introduce the notion of archetypes of business models with the aim to seek a pattern in the light of the numerous business models available. Two cases will illustrate and discuss storytelling and archetypes, giving rise to conclude that they represent a valuable approach to understanding and innovating business models. Introduction The growing interest in understanding and innovating business models in recent years is most likely a result of an increased recognition that a successful business model can be a game changing factor in competition or in entering a new market. However, as a novel concept, many questions arise of both theoretical and practical nature. To explore some of these issues, a Danish research program "ICI" was initiated aimed to inspire and assist participants in a development process of innovating new global business models in a network of smaller, traditional industrial companies and new ebusiness companies. The research program has shown that a novel approach is needed to illustrate how a new business model may look, especially to provide a picture of an emerging business model that can persuade interested parties what they may gain and which role they would be supposed to play. We have experienced that storytelling represents a fruitful approach which we intend to discuss in this paper. Furthermore, both practice and theory include quite a large number of different business models. This has led us to explore if patterns may be identified by way of the notion of business model archetypes. From the research program we have selected two related cases that will illustrate elements of storytelling. In addition, they will form a basis for discussing the notion of archetypes that will also be illustrated by existing business models. In this paper, we shall first introduce the notion of business models, storytelling and archetypes. Then the two cases will be presented, followed by a discussion. Finally, the paper will be concluded. Business models Business model theory as a separate research area is relatively young. Until 2000, the notion of business models was largely related to the preserve of internet-based businesses [START_REF] Mason | The sites and practices of business models[END_REF]]. But since then, research on business models has intensified accompanied by an escalating quantity of literature from both practitioners and academia. The area of business models is thus still young and also quite dispersed. The field as a stand-alone is just starting to make inroads into top management journals, but the conceptual base is still thin [START_REF] Zott | The Business Model: Recent Developments and Future Research[END_REF]. The definition of the term business model has been discussed substantially over the last decade. From a simple definition, e.g. 
"Business model is a statement of how a firm will make money and sustain its profit stream over time" [Zhao 2010] to definitions including partners or stakeholders, e.g. "A conceptual tool that contains a set of elements and their relationships and allows expressing the business logic of a specific firm. It is a description of the value that a company offers to one or several segments of customers and the architecture of the firm and its network of partners for creating, marketing and delivering this value and relationship capital, to generate profitable and sustainable revenue stream" [Osterwalder et al., 2004] We have come to the conclusion that a business model is too multifaceted to be defined in any simplistic way. Overall, a business model consists of two elements; what the business does, and the way in which the business gains profit. There have been attempts to describe business models as systems consisting of a variation of building blocks, e.g. the Business Model Canvas [Osterwalder XXXX], which describes a business model by means of nine interrelated building blocks. Osterwalder's work has provided a popular framework for describing, understanding and innovating business models for a company. The framework has been used successfully in our research program but has also shown limitations. We have found that several companies and individuals can be considered actors forming a network in a new business model. There is a need to develop a framework for each partner [stakeholder] and for the network as a whole. Another limitation is the static nature of the business model canvas, in view of the desire to generate new ideas. Storytelling Storytelling exists throughout all cultures and is an inevitable part of human communication and interaction. Stories can pass on accumulated knowledge, ideas, and values, as for example used in anthropology. Through stories we are creating a narrative image of constructions enabling us to explain complex things, for example proposed as part of corporate strategy development, [START_REF] Kotter | A force for change: How leadership differs from management[END_REF], [START_REF] Riis | Developing a manufacturing vision[END_REF]. In this paper narratives and stories are treated as synonyms, ignoring the semantic discussion on the distinction between narrative and storytelling. [START_REF] Magretta | Why Business Models Matter[END_REF] gained considerable attention by identifying business models as "stories that explain how enterprises work". According to Magretta, business models did not only show how the firm made money but also answered fundamental questions such as: "who is the customer? and "what does the customer value?" In comparison to a traditional strategy statement, a story is told focusing on what actors do and through such a process description it is explained how money, information and goods and services flow between actors, including customers. In this way, it becomes clear to actors what their role will be, as well as their expected benefits and obligations. In our research program we have learned that a story evolves through interaction with actors, as they contribute with ideas and own experience and express their preferences. Furthermore, a story can be told in many different ways. On the one hand, it is important to be open for new and innovative ways of expressing a story; on the other hand, we should seek generic elements or questions expected to be addressed in a story. 
We found that storytelling may be an important approach for developing a business model in a network of companies and individuals with different backgrounds and qualifications. The process of developing a story serves as a platform for a constructive dialogue for combining different opinions into a coherent business model. Archetypes The notion of archetypes represents an attempt to identify generic patterns or classes that may be used as inspiration for developing a specific business model. Although it is desirable to develop archetypes for successful business models, there is no single, well-defined classification of business model archetypes in the literature. Osterwalder applies the term pattern as an expression that comes from the world of architecture. I his use of the term it stands for the idea of capturing architectural design ideas as archetypal and reusable descriptions. One may argue that patterns can be formed at a macro and a micro level in an industry or a business. The macro level of business models archetype may express roles in an industry, e.g. wholesaler, consultants, distributor, production, banks, etc. They show how corporations interact with stakeholders of their business model in distinct patterns. For example, [START_REF] Miles | Organizational Strategy, Structure and Process[END_REF] showed how companies could exists side by side in an industry [books for the educational market] with generic different business models. The macro level business models may consist of variations in patterns. For example, differences between a supermarket and a food wholesale basically lie in the costumer segment. Although the basic business model of "primarily selling products manufactured by others" and the revenue models seem to be alike, only differentiated by quantity and assortment, the two business models are very different. The relationship with suppliers [stakeholders in the business model] is different. In the wholesale business model, suppliers often have relations directly with the customers, discussing price, exclusivity etc. The cost structure is different based on the average turnover per customer. These are examples of small variations in business models at the micro level. Focusing on micro level business models enables development of specific archetypes of business models describing typical patterns in one or more interrelated building blocks of a business model. As an example, [START_REF] Anderson | Free, the future of radical price[END_REF] presents four revenue model archetypes. He explores how things can be "Free" implying how a product may be provided for free, yet still supporting a viable business model. Sometimes "free" is not really free. "Buy one, get one for free" is just another way of saving 50 per cent off when you buy two. "Free gift inside" really means that the cost of the gift has been included in the overall product. "Free shipping" typically means that the price of shipping has been built into the product's markup. He defines the Free business model as cross-subsidies essentially based on the phrase "there's no such thing as a free lunch." This means that one way or another the food must be paid for, if not by you directly then by someone else whose interest it is to give you free food. Andersen demonstrates that there are different "Free business models archetypes". Within the broad world of cross-subsidies, Andersen describes four main categories or archtypes. One of the archetypes is "Freemium", a common revenue model on the Internet. 
It can be described as a revenue model where 5% of the customers pays for 95%, e.g. Skype. The reason for this model to function is that the cost of a Skype customer is close to nothing, and the revenue on the 5% is enough to cover the operation cost. In this example the revenue model archetype provides universally understood pattern enabling us to ask the question "could your business adopt this cost structure?" This classification focuses on the value paid by customers. Based on our research, this represents an important dimension for identifying archetypes of business models. Other dimensions may be added, e.g. from the Business Model Canvas. Two case companies Methodology Two comparative case studies will be presented aimed to illustrate the notion of storytelling and archetypes. The first case study is based on a longitudinal in-depth qualitative case study over a period of two and a half years of a Danish start-up in the media industry, C-Spot. The network of companies and individuals behind C-Spot developed a clever business model for outdoor advertising through a new IT platform. The second case study is based on interviews and a workshop in the global industrial enterprise, Otis Elevator Company.The longitudinal study of Cspot was an interventionist research project [START_REF] Lukka | Approaches to case research in management accounting: the nature of empirical interventio and theory lunkage[END_REF]]. Our research group followed CSpot from before the company was founded until now, involving the founders, the CEO and senior staff from the company, as well as four business partners, consultants and researchers. The project had a defined goal to invent a new global business model for the company. During the research project, there have been numerous meetings, workshops, reports and semi-structured interviews, which are recorded and/or documented with minutes, pictures or video. The terminology of the business model was introduced to all participants, and especially the use of the Business Model Canvas [Osterwalder 200x], and narratives exemplifying existing, successful business models. The second case study, Otis, is based on semi-structured interviews and a workshop. The semistructured interviews were conducted with a senior sales manager from the Danish division of Otis. Background information on the Danish elevator industry is based on semi-structured interviews with three industry professionals, statistics and data from official public databases. CSpot is a Danish start-up company, founded in 2009. The business idea was to establish a new advertising channel consisting of a network of physical advertising screens in shop windows set up in areas with a high frequency of pedestrian traffic and showing a constant flow of live messages. The idea of CSpot originated from an idea of using all the empty shop windows that increased in numbers as a result of the financial crisis, but it quickly became apparent that existing shops were just as interested. The screens are connected by a genius virtual platform offering inexpensive advertising opportunities at affordable prices targeting small and local businesses as well as large campaigns. The business model success is based on a radically different cost structure and at the same time a new value proposition to advertisers. Compared to existing advertising channels, the CSpot's channel is different in many ways; e.g. their key value proposition is instant advertising. Usually, e.g. 
at AFA JCDecaux and Clear Channel, marketing has to plan ahead, produce posters, distribute and put them up. With the Cspot system, advertisers simply go to their website. Here, advertisers choose where and when to show their campaign. Either the advertiser uploads existing material or uses the free online spot builder. This enables customers to advertise instantly and relevant. For example, a local restaurant could put out an offer, if it is a slow night, or if the weather turns to rain. The local department store can attract customers with an attractive offer. The cost structure is also different. In addition to the obvious savings from production and distribution cost of a static media, CSpot came up with a clever model, reducing infrastructure cost drastically. The competitors like AFA JCDecaux and Clear Channel have great costs placing their billboards on house ends or by paying for public exterior. CSpot managed to attract more than 100 sites based on the model: your window space and power in exchange for a quantity of free advertising on the system in your local area. In fact the only cost for Cspot is setting up the screen, maintenance and the GSM Internet connection. Despite the immediate success in creating a good business model, it was difficult to attract large advertisers that often are managed by advertising agencies. The main reason for this is lack of documentation of the effect and the number of people who view the screens. A solution was discussed to install surveillance in every CSpot, providing automatic counting. But the investment was simply too high. The Otis Elevator Company is the worlds largest manufacturer of elevators and escalators. The Danish branch's business model is a typical "service business" where the key activities are to install, modernize and perform services on elevators, escalators and moving walkways. The key resources are knowhow, skilled employees and access to spare parts and tools from the main company. The Danish elevator industry seems to be segmented into two types of businesses. The global actors that are present nationally and local/regional actors. The global actors such as Otis, Thyssen-Krupps, Schindler and Kone all seem to follow the same "services business model" as described. The local actors primarily focus on service and renovation of existing installed elevators, a few on new elevators. From interviews with industry professionals including the case company Otis, they all seemed to agree that the business is all about the services contracts. It is well known in the industry that the global actors often sell new elevators near cost prices, making money on the service. Empirical discussion The two cases represent different situations with respect to innovating a business model. The first case, CSpot, developed a new idea from scratch by a few core members of a network that was gradually expanded as the business model emerged. The second case, OTIS, took the existing business model as point of departure and sought to expand it by augmenting new activities, new actors, and new revenue models. In this section we shall discuss how storytelling and the notion of archetypes were used in the process of developing an innovative business case, and in particular discuss some of the challenges experienced. It is interesting to note that the second case was originated as an offspring of the first case. 
The cases serve multiple purposes; primarily to introduce various principles of applying a storytelling approach that may give rise to conclude that they represent a valuable approach to understanding and innovating business models. Second, to show how two business models can be merged in a complimenting business model. Using storytelling to tackle business model challenges The CSpot case emerged as a trial-and-error, explorative process with many themes being addressed simultaneously. It was difficult to find a common revenue model that could demonstrate the benefit to investors and customers. Two approaches were tried. Inspired by the Osterwalder Business Model Canvas, the notion of storytelling was introduced as a means of combining different ideas into a story telling what would take place when a service is offered, and how actors would interact. The second approach was to be inspired by existing, successful business models. In the CSpot case, the story of Google's business model served to understand new facets of a business model, in particular to use knowledge about users as an asset. For example, Google uses the location of users when a search is made, providing a more relevant link between customers and advertisers. This generates additional income, because search becomes more effective. This story inspired the development of a business model in the CSpot case. For example, how could CSpot location data create additional value for advertisers and also generate more revenue for CSpot, and how could CSpot provide value to users so they would interact with spots. This led to the idea of getting users to take a picture with their smartphone, and to provide spots with 2D barcodes opening for virtual connection to customers. As a result, a new revenue model emerged where advertisers would pay a provision if they get a customer. At the same time, it created a potential relationship to customers with new business opportunities. The story of Google generated many questions that served as a vehicle for a structured process of understanding the challenges and potentials of the CSpot business model, and of developing a story of an innovative business model. This also proved useful when new spot partners were introduced during the process. At the end of the development of the CSpot business model, an idea came up that elevators were a perfect place for a CSpot because of the attention you have from people using an elevator and the knowledge it is possible to generate about their background and interests. As a result, it was decided to contact the OTIS company for an explorative discussion. Why should Otis implement CSpots in elevators? The Danish branch of Otis has a clear business model that may be defined at a macro level as a "services business model". The revenue of the company is among the best in the industry, and the company has a substantial market share and track record. All in all, the business model seems to be working, and Otis profits from this. Why should Otis implement CSpots in elevators? Or could the question be rephrased: why should Otis add a new business model or change an existing that is working. At a workshop, an Osterwalder Business Model Canvas was created identifying the existing business model. It revealed that customers were unlikely to pay more for the service of Otis. How could this be addressed? 
Based on the archetypes introduced by [START_REF] Anderson | Free, the future of radical price[END_REF], the facilitator introduced several existing business models of offering services or products for free. This generated a constructive dialogue on how installation of CSpots in elevators could create revenue for both the owner of an elevator and Otis. Otis showed great interest in different cross-subsidies models, e.g. CSpot paying for the service in exchange for advertising space, enabling Otis to offer Free service to their customers. During the workshop the facilitator introduced the "business model questions" used in the CSpot Case. This enabled validation on the integration between two business models. E.g. the technology present in Otis elevators offers different value proportions to the CSpot business model. Otis elevators can count passengers based on a weight average and count the numbers of trips. This providing advertisers with more accurate data, increasing profit for CSpot. Although not yet implemented, the workshop showed that innovation of an existing business model may be facilitated by use of existing, successful business models organized in a spectrum of archetypes ensuring a broad explorative process. Conclusion In view of the complex interplay of several actors forming a business model, this paper has discussed the idea of describing a business model by means of storytelling, explaining how the interplay of enterprises works to generate value for customers as well as partners. Two case studies illustrated how storytelling could serve as a means of combining different ideas and perspectives into a unified presentation of a business model. Also the story of existing, successful business models may serve as a vehicle for an innovative, collaborative process with many actors. The notion of archetypes was introduced to inspire a broad innovative process. We chose to focus on revenue models and presented and used a special category under the heading "Free service". In the Otis case, this provided a new way of looking at the existing business model. However, there is a need to further explore the notion of archetypes, for example to identify other revenue models and to introduce other dimensions of a classification. A pragmatic approach would be to use existing successful business models as a basis for identifying patterns and classes. The emerging area of business models has been able to innovate the way companies create new business opportunities. However, many challenges arise in connection with working in practice with business models. This paper has pointed to new directions of carrying out a collaborative process involving several actors by introducing the notion of storytelling and archetypes and by showing how they may serve as vehicles for creating an overall image combining different ideas and viewpoints.
23,663
[ "1002223" ]
[ "300821" ]
01472308
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472308/file/978-3-642-40352-1_86_Chapter.pdf
Gündüz Ulusoy Gürhan Günday email: [email protected] Kemal Kılıç Lütfihak Alpkan email: [email protected] BUSINESS STRATEGY AND INNOVATIVENESS: RESULTS FROM AN EMPIRICAL STUDY Keywords: Business strategy, innovativeness, empirical study, structural equation modeling This study reports on the testing of the hypothesis that there is a positive relationship between business strategy and innovativeness. Business strategy is defined here to include market focus strategy, technology development strategy, and operations priorities -including cost, quality, delivery / dependability, and flexibility. An empirical study is conducted based on data collected using a questionnaire developed. 184 manufacturing firms from different industries in the Northern Marmara region in Turkey participated in the study. Multivariate statistics techniques and structural equation modeling are employed. The results have been affirmative supporting the hypothesis. Market focus and technology development factors are found to mediate the effects of operations priorities on innovativeness. That market focus, technology development and cost efficiency have direct effects on innovativeness is another finding of managerial importance. INTRODUCTION This study aims to test the hypothesis that there is a positive relationship between business strategy and innovativeness. Business strategy is defined here to include market focus strategy, technology development strategy, and operations prioritiesalso called manufacturing capabilities in the literature-including cost, quality, delivery/dependability, and flexibility (see e.g., Hayes and Wheelwright, 1984; [START_REF] Leong | Research in process and content of manufacturing strategy[END_REF]. The foremost aim of firms is to survive in the market while generating profit. In the highly dynamic market conditions of today, firms are under the pressure of strong competition in order to gain competitive advantage and to upgrade the efficiency of work and innovation provides them with an effective tool for that purpose, since innovations are among the essential resources through which firms contribute to increased employment, economic growth, and competitive strength. The purpose of innovation is to launch newness into the economic area. As stated by [START_REF] Metcalfe | Evolutionary Economics and Creative Destruction[END_REF], when the flow of newness and innovations desiccate, the firm's economic structure settles down in an inactive state with little growth. Four different innovation types are employed in this research: product, process, marketing and organizational innovations (OECD, 2005). We define here innovativeness to embody some kind of measurement contingent on an organization's proclivity towards innovation [START_REF] Salavou | The concept of innovativeness: Should we need to focus[END_REF]). DATA COLLECTION AND MEASUREMENT OF VARIABLES 2.1 Data Collection A questionnaire consisting of 311 individual questions was developed to be filled in by the upper managers of manufacturing companies. The questionnaire was updated based on the experience gained through a pilot test phase covering 10 firms. Afterwards data was collected over a 7-month period in 2006-2007 in textile, chemical, metal products, machinery, domestic appliances and automotive industries in the Northern Marmara region of Turkey. These industries were selected to represent the major manufacturing sectors in Turkey. A manufacturing business unit was selected as the unit of analysis. 
A total of 184 usable questionnaires were obtained resulting in a response rate of 11%. All the respondents completing the questionnaire were from the top (52%) or middle management (48%). For each sector, number of firms in the sample turned out to be representative, since no significant difference has been detected between the population and the sample percentages. The profile of the resulting sample presented in Figure 1 illustrates its diversity in terms of firm characteristics. Firm size is determined by the number of full-time employees (up to 50: small, 50≤medium <250, ≥250: large). In addition to the number of full-time employees, for a firm to be classified as large it is required to have an annual revenue ≥50 M€. For small and medium firms, four annual revenue brackets are defined so as to have a balance between small and medium firms. Firm age is determined by the year production had started (up to 1975: old; 1975≤moderate<1992; ≥1992: young). Joint stock companies constitute 73% of the sample with the remaining being limited companies. 19% of the firms in the sample have some level of foreign direct investment. Measurement of Variables As we will see in the following, the questions for measurement purpose are asked using a 5-point Likert scale. Such subjective measures possibly bring in manager bias, but are widespread practice in empirical researches [START_REF] Khazanchi | Innovation-supportive culture: The impact of organizational values on process innovation[END_REF]. First we will deal with the questions concerning the business strategy constructs. The variables of market focus are given in Table 1. A 5-point Likert scale is em-ployed to assess how important each one has been for the firm in the last three years with a scale ranging from 1=extremely unimportant to 5= extremely important. Decreasing the rejection rate of product orders with non-standard specifications. 4 Increasing the capability to change the current machine schedule depending on changing the order priorities. 5 Increasing the capability of flexibility in product processes. 6 Increasing the capability of flexibility to change the order priorities depending on the status of the orders. 7 Increasing the capability of manufacturing personnel to work in varying operations and processes. Variables Associated with Dependability/Delivery 1 Increasing the delivery speed of end products. 2 Decreasing the duration from start of manufacturing process to the end of delivery. 3 Increasing the ability to meet the delivery commitments. 4 Decreasing the duration from taking an order to the end of delivery. 5 Increasing the ability for just in time delivery. [START_REF] Kathuria | Competitive priorities and managerial performance: A taxonomy of small manufacturers[END_REF] Decreasing the difficulties associated with delivery to a minimum. Variables Associated with Quality 1 Increasing the customers' perception for product and service quality. 2 Increasing the product and service quality compared to competitors. 3 Decreasing the customer complaints. 4 Decreasing the quantity of waste, scrap and rework. 5 Decreasing the quantity of defective intermediate and end products. 6 Decreasing the number of returns from customers. 
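As a brief illustration of the kind of computations reported in the analysis section that follows (construct reliability via Cronbach's α and multiple regression of innovativeness on strategy factors), the sketch below uses entirely synthetic Likert-scale data. The variable names and all numbers it produces are illustrative assumptions and do not correspond to the study's data or results.

```python
# Synthetic illustration of two computations reported in the analysis section:
# Cronbach's alpha for scale reliability and a multiple linear regression of
# an aggregate innovativeness score on strategy factors. All data are made up.
import numpy as np

rng = np.random.default_rng(0)


def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert-scale answers."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


# 184 synthetic respondents answering 6 cost-efficiency items on a 1-5 scale
latent = rng.normal(size=(184, 1))
cost_items = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(184, 6))), 1, 5)
print("Cronbach's alpha (synthetic):", round(cronbach_alpha(cost_items), 3))

# Multiple regression: innovativeness on two synthetic strategy factors
market_focus = rng.normal(size=184)
tech_dev = rng.normal(size=184)
innovativeness = 0.3 * market_focus + 0.2 * tech_dev + rng.normal(scale=0.8, size=184)
X = np.column_stack([np.ones(184), market_focus, tech_dev])
beta, *_ = np.linalg.lstsq(X, innovativeness, rcond=None)
print("Intercept and coefficients (synthetic):", np.round(beta, 3))
```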
For the measurement of different types of innovative capabilities the respondents are asked to indicate "to what extent the innovations implemented in their organization in the last three years related to the following kinds of activities" on a 5-point Likert scale ranging from 1=not implemented, 2=imitated from national markets, 3=imitated from international markets, 4=currently practiced endogenous innovations are improved, 5=original indigenous innovations are implemented. Due to space limitation we will refer the reader to Gunday et al. (2011) for a complete list of variables used for the measurement of product, process, market and organizational innovations. ANALYSIS AND RESULTS The multivariate data analysis is performed in three stages using statistical software packages SPSS v17 and AMOS v16. In the first stage, principal component analysis (PCA) with varimax rotation is conducted to find out the underlying dimensions of business strategy items and innovativeness. PCA on business strategy items produced 6 factors with latent root criterion and the average of communalities was 0.551. All the variables given in Tables 1-3 are included in the factors. The six factors obtained are assigned the following titles: quality, flexibility, delivery and dependability, cost efficiency, market focus, and technology development strategies. The total variance explained is 55.1%. The Cronbach α values are ≥ 0.62 suggesting construct reliability. The PCA on innovativeness extracted 5 factors with eigenvalues > 1, which are labeled based on the variables involved: Organizational, marketing, process, incremental product, and radical product innovations. The total variance explained is 63.7%. The Cronbach α values are ≥ 0.7 suggesting construct reliability. Then we construct an aggregate innovativeness factor as the average of five innovation factors obtained with a Cronbach α=0.812, indicating acceptable reliability. The second stage involves correlation and regression analysis. The correlation analysis indicates a strong positive association between innovativeness and business strategy factors (Table 4). Significant one-to-one positive relationships of the aggregated factors are extracted from the correlation analysis. All business strategy factors correlate very significantly to innovativeness with p<0.01 except quality and dependability/delivery (p<0.05). Therefore, we can generally deduce that the higher importance given to operations priorities, market focus and technology development are associated with increased innovative capabilities. 5). Thus, despite the fact that the model is significant, multiple regression analysis reveals only some business strategies have statistically significant effects over innovativeness. Moreover, correlation analysis already indicated all business strategy factors had significant one-to-one correlation to innovativeness. Hence, post hoc analysis reveals that market focus and technology development factors mediated the effects of cost efficiency, dependability/delivery, quality and flexibility factors on innovativeness. In the third stage, based on the arguments above, a single-step Structural Equation Modeling (SEM) is performed to depict the relationship between business strategies and innovativeness with the simultaneous estimation of both measurement and structural models by AMOS v16 and analyzed according to goodness-of-fit indices. The resulting proposed paths of relations matching business strategies to innovativeness are presented in Figure 2. 
It summarizes the main findings of SEM analysis. The estimates on the arrows are regression weights and the estimates on the box corners are the squared multiple correlations. Each regression weight estimate in the model is statistically significant (p<0.05). 23% of the innovativeness can be explained by that model. Market focus and cost efficiency have direct effects on innovativeness. CONCLUSIONS In this paper, the hypothesis that there is a positive relationship between business strategy and innovativeness has been tested. The analysis is based on an empirical study conducted covering 184 manufacturing companies from the Northern Marmara region of Turkey. The findings summarized above expose the positive relationship between business strategy and innovativeness despite mediating effects between variables. Hence, the hypothesis put forward is supported. The finding that market focus and technology development factors mediate the effects of operations priorities on innovativeness reveals the supporting role of operations priorities on these factors. That market focus, technology development and cost efficiency have direct effects on innovativeness is a further verification of this web of interactions of great managerial importance. But cost efficiency also depends on the manufacturing capabilities -quality, delivery/dependability, and flexibility. In order to be cost efficient, the firm has to manage all these capabilities in a complementary way rather than trading one against the other. ACKNOWLEDGEMENT This research was supported by a grant from the Scientific and Technological Research Council of Turkey (TUBITAK) (SOBAG-105K105). Fig. 1 . 1 Fig. 1. Sample profile Fig. 2 . 2 Fig. 2. Path analysis of business strategy components and innovativeness Table 1 . 1 Variables Associated with Market Focus Strategy 1 Making incremental changes in current products for current markets. 2 Developing new products for current products. 3 Entering new markets with current products. 4 Entering new markets with new products. Table 2 . 2 Variables Associated with Technology Development Strategy 1 Developing new technology. 2 Improving its own technology. 3 Improving technology developed by others. 4 Using technology developed by others. The questions about operations priorities are provided in Table 3. They are asked using a 5-point Likert scale and inquiring how important each operations priority is for the firm with the scale ranging from 1=extremely unimportant to 5= extremely important. Here we adopt cost, quality, flexibility and delivery/dependability as operations priorities, which have become widely used as statements of the competi- tive dimensions of manufacturing firms. The variables of the four different opera- tions priorities' measures are adapted from existing OM literature. The base of items asked regarding these operations priorities are adapted mainly from Boyer and Lew- is (2002), Alpkan et al. (2003), Noble (1997), Ward et al. (1998), Vickery et al. (1993), Kathuria (2000) and Olson et al. (2005). For technology development strategy, on the other hand, for responding to the question on "the level of resource allocated to execute technology development strategy over the last three years" the 5-point Likert scale employed ranges from 1= no resource allocated to 5= all available resources are allocated. The variables are listed in Table 2 . 
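As a rough illustration of how item responses such as those in Tables 1-3 feed the analysis described above (principal component extraction with varimax rotation, followed by regression on the factor scores), the following self-contained Python sketch uses only NumPy. It is a stand-in for the SPSS/AMOS workflow actually used: the item responses and the outcome variable are randomly generated placeholders, the number of retained factors is fixed by hand instead of the latent-root criterion, and the factor-score estimate is deliberately crude.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a (p x k) loading matrix (standard Kaiser algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        )
        R = u @ vt
        d_new = np.sum(s)
        if d_old != 0.0 and d_new / d_old < 1.0 + tol:
            break
        d_old = d_new
    return loadings @ R

# Hypothetical standardized item responses (184 firms x 12 items).
rng = np.random.default_rng(0)
X = rng.normal(size=(184, 12))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components of the correlation matrix, then varimax rotation.
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
n_factors = 3                                   # placeholder; the study used the latent-root criterion
loadings = eigvecs[:, order[:n_factors]] * np.sqrt(eigvals[order[:n_factors]])
rotated = varimax(loadings)

# Crude factor scores, then a multiple regression of an outcome on the factors.
scores = X @ rotated
innovativeness = rng.normal(size=184)           # placeholder outcome variable
A = np.column_stack([np.ones(184), scores])
beta, *_ = np.linalg.lstsq(A, innovativeness, rcond=None)
print("Rotated loadings (first rows):\n", np.round(rotated[:4], 2))
print("Regression coefficients (intercept first):", np.round(beta, 3))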
For both market focus and technology development the variables are adapted from [START_REF] Akova | New Product Development Capabilities of the Turkish Electronics Industry[END_REF] . Table 3 . 3 Variables Associated with Operations Priorities Variables Associated with Cost Efficiency 1 Decreasing the total cost of manufacturing processes. 2 Decreasing the total cost of internal and external logistics processes. 3 Decreasing the operating costs. 4 Increasing the personnel productivity. 5 Decreasing the input costs. 6 Decreasing the personnel costs. Variables Associated with Flexibility 1 Increasing the capability of flexible use of current personnel and hard- ware for non-standard products. 2 Increasing the capability of producing non-standard products. 3 Table 4 . 4 Correlation Analysis of Business Strategies Std Mean Dev Inn Qual CEff Flex Dep MFoc Tech Innovativeness 2.81 0.84 1 0.193 0.228 0.206 0.178 0.373 0.323 (*) (**) (**) (*) (**) (**) Quality 4.68 0.43 (*) 1 0.551 0.240 0.415 0.130 0.222 (*) (**) (**) (**) Cost Efficien- 4.40 0.51 (**) (**) 1 0.346 0.457 0.154 0.191 cy (**) (**) (*) (*) Flexibility 3.72 0.73 (**) (**) (**) 1 0.517 0.195 0.091 (**) (*) Depend 4.36 0.57 (*) (**) (**) (**) 1 0.203 0.120 /Delivery (**) Market Focus 3.67 0.82 (**) (*) (*) (**) 1 0.235 (**) Tech. Dev. 2.80 0.82 (**) (**) (*) (**) 1 (**) p<0.01; (*) p<0.05 Table 5 . 5 Effects of Business Strategies on Innovativeness Independent Variables Standard Beta p-Value Cost Efficiency 0.115 0.190 Quality 0.051 0.547 Depend/Delivery -0.058 0.511 Flexibility 0.108 0.189 Market Focus 0.315 0.000 Technology Development 0.209 0.004 This regression model is statistically very significant (p<0.01) and the independent variables express 24.6% (R 2 =0.246) of innovativeness. However, when business strategies have entered together to the multiple regression, only market focus (β=0.315; p<0.01) and technology development (β=0.209; p<0.01) have significant positive effects on innovativeness (Table
15,923
[ "1002224", "1002225", "1002226", "1002227" ]
[ "474045", "474045", "474045", "472563" ]
01472333
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01472333/file/978-3-642-40352-1_83_Chapter.pdf
Roman Mihal email: [email protected] Iveta Zolotová email: [email protected] Peter Kubičko email: [email protected] Lenka Landryová email: [email protected] Roman Mihaľ Measurement, Classification and Evaluation of the Innovation Process and the Identification of Indicators in Relation to the Performance Assessment of Company's Innovation Zones Keywords: innovation process, measurement, classification and evaluation of the innovation process, innovation zone, performance indicators de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Measurement and evaluation in the company's (enterprise) innovation zone have an irreplaceable role. In this article we approach the real example of the need for measuring and assessing various levels of innovation zones. As regards the ideas it is often considered too difficult a task as many ideas and suggestions can only be part of an innovation or cannot be directly converted to monetary value. The actual value can be measured by the end customer only, who is the best evaluator of outputs. However, measurement, evaluation and classification were not applied only to the ideas themselves, but our goal is to find the measurable areas of an innovation zone and possible ways of their evaluation. Evaluation of innovation outputs The metrics (measurable indicators) that we have identified so far in terms of future production values by estimate only are: the number of outputs of the innovation project, the usefulness in implementation projects (repeatability using one of the outputs), and the number of identified outputs as business opportunities. The number of contracts (signed contracts) transformed from the business opportunities created by innovation outputs is yet the indicator of fair value defined by the customer. The indicator of a number of outputs of the innovation project indicates only the number regardless of the quality of outputs demonstrating the insufficiency of this indicator. It is therefore advisable to use another indicator that either captures the interest of the future use of some outputs, or measures the actual use thereof. Of course, such a measurement in the first instance requires a questionnaire form of capturing the interest of the individual outputs that provides only an indicative view. If the approach to measure is to capture the used outputs only, so we might get a high accuracy of the indicator, but over time we get information quite late, usually after completion of the project. Therefore, use of another combined indicator (interest survey results, the number of results used) seems to be optimal for the evaluation of ideas. A specific indicator in terms of commercial exploitation of the outputs is the identifier of business opportunities, which, like the previous indicator, has a dual form: 1. identified business opportunity, 2. implemented business opportunity. In addition to measuring the value of innovation ideas it is important to apply the measurement of actual innovation processes and the procedural steps thereof. The process system measurement and process control are natural in the innovation zone because of the dynamically adapting needs of production and the market. 3 Measurement and evaluation of the innovation process We describe the process evaluation by assessing the individual process steps and the possibilities of their improvement. 
Each step has its own difficulty: its evaluation in terms of automated support, the necessity of investing human effort and possible improvements. It is also necessary to capture weaknesses and existence of defects. To visualize the evaluation process, we chose a sub-process of innovation idea processing. The following assessment is based on the knowledge of the authors of the article, and accordingly we defined the following parameters (KPI-Key Performance Indicators [START_REF] Kaplan | Balanced Scorecard: Strategický systém měření výkonnosti podniku[END_REF]): The Level of Automation (LOA) indicates the level of automated support of the specified step. The aim is to use automated functions for the specified step that focus on usefulness, speed, accuracy and flexibility compared to manual processing. The Investment of Human Efforts (IHE) indicates the level of human efforts in the sense that the smaller the effort, the higher the value of the level. Automation does not always reduce the investment of efforts. The Linking with Business Process (LBP) is a natural indication of the specific step being incorporated into the normal business process. The Existence of Deficiency (EOD) is an indication of redundancy or inefficiency in the operations step or the existence of apparent opportunities for improvement, which have not been applied yet. The Added Value (AV) indicates visible or measurable benefits. Under the added value we understand the benefits that are in line with the company strategy and can either improve the quality of outputs, or bring a reduction of direct costs while the quality remains unchanged, or result in reduced labor. The indicators can take only three values 0, 1, and 2 representing the minimum, medium, or maximum value of the corresponding indicator. Submission of the innovation matter The step of submitting ideas by innovators is easily accessible from the main site of the innovation zone, which is quite comfortable for the promoter of the subject matter when he launches the main site innovation zone. During the normal work day, however, when the user is in one of the many portal sites it is no longer so easy to access. The submission of ideas is available for unregistered users as well as for registered ones. This step could be gradually integrated into the other steps such as the project portal sites. Matter publishing Publishing ideas according to internal rules is trying to capture public comments or getting people interested in sharing the implementation project. Publishing ideas is implemented in an automated way, but the completeness is supervised by the facilitator who before selecting the evaluation team ensures the integrity of the published ideas. In absence of a description of the solution that may not yet be in possession of the innovator, it also helps the innovator to search for a solution. The communication between the innovator and facilitator is very important and in the current solution this communication is not controlled, but it is left onto the personal activity of the participating actors. Registration of the subject matter is implicitly secured by the portal features and inherently when inserting the idea by the innovator and releasing the same for publication by the facilitator. Selections of evaluators Selection of evaluators of the subject matter is the role of innovation manager. However, the evaluators conduct the evaluation on a voluntary basis within their capabilities and workload. 
Time synchronization and securing the independence of the evaluation is yet in a position of ethical values. In this step, it is possible to increase the support for the innovation manager in organizing evaluators and acquiring their binding commitment that they are taking on the role of the evaluator. The system also takes into account the motivating factor for the evaluators. However, it does not depend only on the subjective decisions of the innovation manager and the way he applies it. Our knowledge can be summarized into allegations that a motivating factor should be defined with easily enforceable rules, without the unnecessary uncertainty of evaluators about the use or non-use of a motivating factor by the innovation managers. Evaluation of the matter Soon after the first evaluations of ideas it was necessary to improve the evaluation, either compared with a particular goal, or with a specific business issue. Today, the evaluation is in the hands of three randomly selected evaluators that in their perception of reality take into account the potential benefits of the evaluated ideas from different perspectives. For example, the idea that has a society-wide importance need not have business importance for the company. For that reason for the evaluation of ideas a regulation was created that is to guide the evaluators in order to evaluate the prescribed three perspectives: Originality: We are looking for innovative IT solutions (knowledge intensive collaboration solutions: business process repositories, process simulations, large-scale pro-ject collaboration systems, scenario-based learning, intelligent helpdesk solutions, etc.). Is your idea unique? Feasibility: We are looking for solutions that can enjoy success in the marketplace or in a company. Is your idea cost-effective, or can it be made so? Impact: If successfully realized, will your idea help turn our current IT challenge into an opportunity? Evaluation forms (templates) are indefinitely available for evaluators. It happens that two evaluators comply with the deadline while the third one delays his or her assessment resulting in a drop in trust from the innovator which is another drawback of this step. All evaluations are automatically recorded and supported by the portal of the innovation zone. Evaluation reminder Reminding the evaluation of the output section allows to create feedback of the production teams and to provide information that is important not only for the implementation phase, but also for the enrichment of information about the production environment and activation of further impulses for new ideas. The output section prepares its viewpoint on the evaluation. This step is purely of an organizational nature and we consider ways to improve this step by the increased collaboration support of the production section that is dedicated to this activity so as to not disturb other planned activities with higher priority. Matter reassessment In conclusion, the final step in the assessment of the previous evaluations and the result is either a rejection or acceptance of the viewpoint, which allows thereafter going into the preparation of the implementation phase of the project. 3.7 Searching for solution Finding solutions is a step that is necessary if exists an idea which doesn't contain design of technical solution yet. It is used when for the idea there is a need to find appropriate solution and the cooperation with other applicants is welcome. 
This step has not automated support, which is difficult to establish in this creative activity. It is still necessary to make some effort to find a solution. Demonstration of process measurement The following example contains application of defined indicators on the word evaluation of the particular process steps described in the previous chapter. In the table (Table 1) and at the Figure (Fig. 3), there are shown two interpretations of the KPI values for all process steps, matrix and graphical interpretation. Such measurement provides a more realistic vision of the process and options for the optimization and improvement. Process LOA 2 1 1 1 2 1 0 IHE 1 1 0 1 1 0 0 LBP 1 1 1 1 0 1 1 EOD 1 0 1 0 1 1 1 AV 2 2 1 2 1 2 2 Table 1. Measurement matrix of the sub-process of innovation idea processing Conclusion By the means of the article we demonstrate measurability of general processes of innovation zone, which may be applicable to different types of businesses and organizations dealing with generating innovative ideas in a controllable way and to maximize the added value of its future business. We intend to apply the action of measurement to all of the processes of innovation zone. We consider as important to simulate different business environments in the application verification in order to demonstrate a wide usability of general definition of innovation zone processes (Figure 4) and potential differences related to measurements. We expect that measured values will have different significance (different weight) for different environments, what will be the subject of further research. Fig. 1 . 1 Fig.1. Sub-process of innovation idea processing[START_REF] Zolotová | A design of a reference model of an innovation process and its implementation in business using an innovation zone[END_REF] Fig. 2 . 2 Fig. 2. Webpart idea evaluation report from idea evaluators [7]. Fig. 3 .Fig. 4 . 34 Fig. 3. Chart measurements of process steps Acknowledgments This work was supported by grant KEGA No. 021TUKE-4/2012 (100%). The team also thanks Novitech Company for a willingness, which allowed us to examine conditions for implementation of innovation zones in the real business environment, especially thank the President and Chairman of the Board (Chairman of the Board of Directors), Dr. Attila Toth, for his inspiring advice and transfer of experience to us.
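To make the measurement matrix of Table 1 easier to reuse, the following Python sketch encodes the five indicators (LOA, IHE, LBP, EOD, AV, each scored 0, 1 or 2) for the seven steps of the idea-processing sub-process and prints simple per-step and per-indicator summaries. The score values are those of Table 1; the mapping of the columns to the step names follows the order of the subsections above, and the unweighted aggregation is an editorial assumption, since in practice the weights would be company-specific.

# Scores taken from Table 1 (keys: indicators, lists: the seven process steps in order).
indicators = ["LOA", "IHE", "LBP", "EOD", "AV"]
steps = [
    "Submission", "Publishing", "Evaluator selection", "Evaluation",
    "Evaluation reminder", "Reassessment", "Searching for solution",
]
scores = {
    "LOA": [2, 1, 1, 1, 2, 1, 0],
    "IHE": [1, 1, 0, 1, 1, 0, 0],
    "LBP": [1, 1, 1, 1, 0, 1, 1],
    "EOD": [1, 0, 1, 0, 1, 1, 1],
    "AV":  [2, 2, 1, 2, 1, 2, 2],
}

# Per-step total (crude, unweighted aggregate; real weights depend on the business environment).
for j, step in enumerate(steps):
    total = sum(scores[ind][j] for ind in indicators)
    print(f"{step:22s} total = {total:2d} / {len(indicators) * 2}")

# Per-indicator average across the whole sub-process.
for ind in indicators:
    avg = sum(scores[ind]) / len(steps)
    print(f"{ind}: average = {avg:.2f}")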
12,937
[ "1002244", "991810", "1002245", "991404" ]
[ "155410", "153920", "155410", "155410" ]
01472485
en
[ "phys" ]
2024/03/04 23:41:46
2016
https://hal.science/hal-01472485/file/InBiSn_1.pdf
Sabine Bottin-Rousseau Melis S ¸erefoglu B,⇤ Sinan Yücetürk Gabriel Faivre Silvère Akamatsu Stability of three-phase ternary-eutectic growth patterns in thin sample Keywords: Eutectic solidification, directional solidification, solidification microstructures, ternary eutectics, three phase microstructures Near-eutectic ternary alloys subjected to thin-sample directional solidification can exhibit stationary periodic growth patterns with an ABAC repeat unit, where A, B and C are the three solid phases in equilibrium with the liquid at the eutectic point. We present an in-situ experimental study of the dynamical features of such patterns in a near-eutectic In-In 2 Bi-Sn alloy. We demonstrate that ABAC patterns have a wide stability range of spacing at given growth rate. We study quantitatively the -di↵usion process that is responsible for the spacing uniformity of steady-state patterns inside the stability interval. The instability processes that determine the limits of this interval are examined. Qualitatively, we show that ternary-eutectic ABAC patterns essentially have the same dynamical features as two-phase binary-eutectic patterns. However, lamella elimination (low-stability limit) occurs before any Eckhaus instability manifests itself. We also report observations of stationary patterns with an [AB] m [AC] n superstructure, where m and n are integers larger than unity. Introduction The solidification microstructures of ternary eutectic alloys take many di↵erent forms depending on the composition and grain structure of the alloy, the geometrical and thermal features of the solidification device, and the solidification history. The most important of these factors are the number of growing phases and the dimensionality of the samples. We are concerned here with two-dimensional three-phase microstructures. These are typically observed in near-eutectic ternary alloys subjected to thin-sample directional solidification (thin-DS). Solidification microstructures are, we recall, nothing else than the trace left behind in the solid by out-of-equilibrium self-organized patterns formed during solidification. At constant solidification rate V and applied thermal gradient G, these patterns generally reach, or, at least, asymptotically tend towards a steady-state. The most wellknown example of a steady-state eutectic growth pattern is the periodic (lamellar) two-phase solidification pattern of near-eutectic binary alloys analyzed by Jackson and Hunt (JH) a long time ago [START_REF] Jackson | Lamellar and rod eutectic growth[END_REF]. The repeat unit of such a pattern is an AB pair of lamellae, where A and B are the two solid phases in equilibrium with the liquid at the eutectic point. These AB patterns have mirror symmetry with respect to the mid-plane of the lamellae, which is actually a condition for their steadiness. Regarding ternary eutectic alloys, the basic repeat unit of stationary thin-DS patterns is ABAC, where A, B and C are the three eutectic solid phases. It has mirror symmetry with respect to the mid-plane of the B and C lamellae (Fig. 1). 
This was previously highlighted by Witusiewicz and coworkers in the In-In 2 Bi-Sn alloy [START_REF] Witusiewicz | In situ observation of microstructure evolution in low-melting bi-in-sn alloys by light microscopy[END_REF] and a transparent organic alloy [START_REF] Witusiewicz | Phase equilibria and eutectic growth in quaternary organic alloys amino-methyl-propanediol-(d)camphor-neopentylglycol-succinonitrile (ampd-dc-npg-scn)[END_REF], and numerically demonstrated by Choudhury et al. [START_REF] Choudhury | Theoretical and numerical study of lamellar eutectic three-phase growth in ternary alloys[END_REF] (also see Refs. [START_REF] Boettinger | Surface relief cinemicrography of the unsteady solidification of the lead-tin-cadmium ternary eutectic[END_REF][START_REF] Ruggiero | Origin of microstructure in the 332 k eutectic of the bi-in-sn system[END_REF] regarding bulk solidification). However, the large-scale dynamics of the ABAC patterns has not yet been studied. Here we present an experimental study of the morphological stability of ABAC growth patterns during directional solidification of a near-eutectic In-In 2 Bi-Sn alloy. We used very thin (⇡ 13 µm thick) samples in order for the system to be quasi two-dimensional and rid of convection flows in the liquid. Quantitative results that are presented below were obtained by studying in real time the dynamic response of pre-uniformized ABAC patterns to upward or downward V-jumps. Qualitatively, the results may be best understood through a comparison with the known dynamical features of the binary AB patterns [START_REF] Jackson | Lamellar and rod eutectic growth[END_REF][START_REF] Karma | Morphological instabilities of lamellar eutectics[END_REF][START_REF] Ginibre | Experimental determination of the stability diagram of a lamellar eutectic growth front[END_REF][START_REF] Akamatsu | Overstability of lamellar eutectic growth below the minimum-undercooling spacing[END_REF][START_REF] Akamatsu | Determination of the jacksonhunt constants of the in-in(2)bi eutectic alloy based on in situ observation of its solidification dynamics[END_REF]: (i) binary AB patterns have a wide stability range of spacing at given V; (ii) any spatial variation of that is confined within the stability interval is damped out over time through a long-range process called spacing-di↵usion or -di↵usion, which leads asymptotically to a perfectly periodic (uniform) pattern; (iii) the upper limit of the -range of stability ( sup ) is due to the onset of oscillations leading, at still larger -values, to lamella splitting; (iv) the lower stability limit ( in f ) corresponds to an instability of the -di↵usion process, namely, a change of sign of the -di↵usion coe cient (sometimes referred to as an Eckhaus instability [START_REF] Manneville | Dissipative structures and weak turbulence[END_REF]), which leads to lamella elimination; (v) sup varies with V as V 1/2 and is independent of G; in other words, it obeys the well-known Jackson and Hunt (JH) scaling law sup / m , where m is a scaling length that varies with V as V 1/2 ; (vi) in f does not obey the JH scaling law, but depends on both V and G in such a way that, at fixed G, the width of the stability range relative to m , i.e. ( sup in f )/ m , increases as V decreases. In the following, we shall call spacing, and denote it by , the width of an ABAC repeat unit in a three-phase eutectic-growth pattern. Which of the features listed above hold true for ternary-eutectic ABAC patterns? 
The main results of this study may be summed up as follows: ternary ABAC patterns have the same dynamical features as binary AB patterns, except for one significant aspect, namely, lamella elimination is not provoked by an Eckhaus instability but by a qualitatively di↵erent phenomenon, most probably, some short-wavelength instability of the ABAC pattern. In the remainder of the text, we first present the experimental methods (Section 2). We then give some important details about the preparation of extended ABAC lamellar patterns in large eutectic grains (Section 3). The main results (stability diagram and instability mechanisms, complex patterns) are presented and discussed in Section 4. A conclusion is proposed in the last section. Experimental methods The In-Bi-Sn ternary phase diagram has a nonvariant eutectic point at the temperature of 332 K (T E ) and the composition of In-20.7at%Bi-19.1at%Sn (C E ) [START_REF] Witusiewicz | Thermodynamic reoptimisation of the bi-in-sn system based on new experimental data[END_REF]. The solid phases in equilibrium with the liquid at this point are the intermetallic compound In 2 Bi, the -In phase and the -Sn phase. For brevity, we will generally call these phases A, B, and C, respectively. In 2 Bi has a hexagonal structure; -In and -Sn have body-centered tetragonal structures. An alloy of nominal composition C E was prepared by weighting the appropriate quantities of 99.999 % pure indium, bismuth and tin (Goodfellow), and mixing them in the liquid state under a primary vacuum. The actual composition was within less than 0.05 at% of C E . Glass-wall samples with inner dimensions of 4 ⇥ 50 ⇥ 0.013 mm 3 were filled with molten alloy using a vacuum-suction method. Details about the thin-DS stage and the method of observation used can be found elsewhere [START_REF] Akamatsu | Real-time study of thin and bulk eutectic growth in succinonitrile-(d)camphor alloys[END_REF][START_REF] Akamatsu | Determination of the jacksonhunt constants of the in-in(2)bi eutectic alloy based on in situ observation of its solidification dynamics[END_REF]. Let us simply mention that the thin-DS stage is basically made of two temperature-regulated copper blocks separated by a 5-mm gap. The thermal gradient in the region of the solidification front is G = 8±0.9Kmm 1 . The growth direction z is parallel to the thermal gradient and opposite to the pulling direction. The y-axis is perpendicular to the sample plane, whereas the x-axis is parallel to the isotherms. The V-range explored is 0.01 0.8µms 1 . The solidification front is observed in real time in the y direction (side view) with a reflectedlight optical microscope (Leica DMI 5000) equipped with a monochrome digital camera (Scion), connected to a PC for image capturing, processing and analysis. With this method, what is actually observed is the surface of contact between the metallic film and a flat glass wall. After contrast enhancing, the -In/glass surfaces appeared white and the In 2 Bi/glass surfaces black; both the -Sn/glass and the liquid/glass surfaces appeared light-grey (Fig. 1). We also performed ex-situ metallographic observations of transverse cross-sections in some samples. These showed that the interphase boundaries were perpendicular to the glass walls, as expected, validating the use of side-view images for dynamical studies of isotropic grains, as we explain shortly. 
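Because the three solids appear with distinct grey levels in the side-view images (alpha-In white, In2Bi black, beta-Sn and liquid light grey after contrast enhancement), the local spacing can in principle be extracted automatically by thresholding a row of pixels just behind the growth front. The Python sketch below only illustrates this idea; the threshold, the pixel-to-micrometre scale and the synthetic grey-level profile are hypothetical and would need calibration on real micrographs, so this is a sketch of the image-analysis step, not the processing actually used in the study.

import numpy as np

def abac_spacings(grey_line, white_min=0.75, px_to_um=0.65):
    """Estimate ABAC spacings (µm) from one row of grey levels (0..1) behind the front.

    Alpha-In (B) appears white, In2Bi (A) black, beta-Sn / liquid light grey.
    Since B occurs once per ABAC repeat unit, the distance between successive
    centres of bright (B) runs equals the local spacing.
    """
    grey_line = np.asarray(grey_line, dtype=float)
    is_bright = grey_line >= white_min
    edges = np.diff(is_bright.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if is_bright[0]:
        starts = np.r_[0, starts]
    if is_bright[-1]:
        ends = np.r_[ends, is_bright.size]
    centres = 0.5 * (starts + ends)          # centre (in pixels) of each B lamella
    return np.diff(centres) * px_to_um       # B-to-B distances = local ABAC spacings

# Hypothetical grey-level profile containing two bright B lamellae.
profile = np.r_[np.full(20, 0.5), np.full(10, 0.9), np.full(25, 0.1),
                np.full(10, 0.5), np.full(10, 0.9), np.full(20, 0.5)]
print("local spacings (µm):", abac_spacings(profile))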
Extended ABAC lamellar patterns Concerning binary AB eutectic patterns, it is known that near-uniform eutectic patterns can only be grown in samples consisting of "floating" eutectic grains with a size much larger than the spatial average of . To explain the origin for this requirement, we must recall the following facts [START_REF] Caroli | Lamellar eutectic growth of cbr4c2cl6: e↵ect of crystal anisotropy on lamellar orientations and wavelength dispersion[END_REF][START_REF] Akamatsu | A theory of thin lamellar eutectic growth with anisotropic interphase boundaries[END_REF]: (i) a (eutectic) grain is a portion of the solid, inside which the crystal-lattice orientation of each of the eutectic phases is uniform; (ii) eutectic growth patterns are sensitive to the degree of anisotropy of the surface energies of the various interfaces present, especially, AB interphase boundaries; this degree of anisotropy depends on the orientation of the di↵erent phases with respect to one another and the sample, and therefore varies from grain to grain; (iii) eutectic grains can be classified into two broad categories called "floating" and "locked". The floating grains are those in which anisotropy e↵ects are su ciently weak for the dynamical features of the eutectic patterns to be those that are reviewed in the Introduction (including, in particular, uniformisation over time by -di↵usion). In locked grains, on the contrary, surface tension anisotropy has dramatic e↵ects on the pattern formation. These grains most probably have a special orientation relationship between A and B, and lowenergy planes for the AB boundaries. The interphase boundaries become locked onto these low-energy directions entailing that -di↵usion is blocked; (iv) the boundaries between floating eutectic grains are a source of perturbations (lamella terminations, long-range spacing gradients) preventing the growth pattern to approach a fully steady state [START_REF] Caroli | Lamellar eutectic growth of cbr4c2cl6: e↵ect of crystal anisotropy on lamellar orientations and wavelength dispersion[END_REF]. The same considerations apply to ternary ABAC patterns. In this case, however, three di↵erent orientation relationships (AB, BC, CA) come into play. The floating grains are those, in which the AB, BC and CA boundaries have weak anisotropy. A large floating ABAC grain in near-eutectic In-In 2 Bi-Sn is shown in Fig. 2. The experimental procedure used to create such grains was as follows. After partial directional melting, the system was maintained at rest (V = 0) in order to let it return to equilibrium. During this period, two thin solid layers progressively formed between the liquid and the unmelted part of the solid: a polycrystal layer of C (i.e. -Sn) in equilibrium with the liquid, and, immediately below it, a twophase layer of AC (i.e. In 2 Bi+ -Sn). This indicated that the actual composition of the alloy was slightly o↵-eutectic with a small excess of Sn and an even smaller excess of Bi. The deviation of the alloy composition from C E was estimated from the observed phase fractions of A, B and C during stationary growth to be less than 0.05 at%. Immediately after the onset of the pulling, the solid-liquid interface recoiled along the z-axis due to solute redistribution until a few AC grains overgrew the C layer through an invasion/splitting process (see Ref. [START_REF] Akamatsu | The formation of lamellar-eutectic grains in thin samples[END_REF]). 
We repeated this melting-annealing-invasion procedure several times in order to decrease the number of AC grains. We then performed a two-phase (univariant) AC solidification until a three-phase solid appeared and overgrew the AC-liquid front, as previously reported by Rex et al. [START_REF] Rex | Transient eutectic solidification in in-bi-sn: Two-dimensional experiments and numerical simulation[END_REF]. Figure 3a shows the tip region of a three-phase solid invading the surface of a floating AC eutectic grain. In the wake of the invasion, lamella splitting generated a disordered three-phase pattern with a very small spacing, which slowly reorganized through lamella elimination and -di↵usion (Fig. 3b) and eventually formed a regular ABAC pattern. In the case of the experiment of Fig. 2, the sample contained two distinct floating grains, both of about 2 mm of lateral extension (one of them is shown in the figure). The boundary between the two grains is visible on the left-hand side, and the contact with the edge of the sample on the right-hand side of Fig. 2. The subdivision of the field of view into three areas indicated in the (left) inset is justified by the presence of two low-angle grain boundaries. All these defects generated lamella termination events and spacing gradients in their vicinity (regions I and III). However, and more importantly, the ABAC pattern inside the unperturbed region II clearly underwent a -di↵usion smoothing process. The quantitative studies reported below were performed in this type of regions. We noticed a slight outward curvature of the isotherms (radius of curvature is about 10 cm), but this had no detectable e↵ect on the growth dynamics. We checked that the volume fractions of the eutectic phases measured in the micrographs did not vary with x or , and were close to nominal values ( f B = 0.475, f C = 0.185). Results and discussion Stability diagram Each experimental run basically consisted of growing a near steady-state ABAC lamellar pattern at a given pulling velocity and then applying one, or a series of, upward or downward V-jumps. The initial pattern was characterized by its average spacing value 0 (equal to the width of the region of interest divided by the number of repeat units in this region). After a V-jump, the pattern either remained stable over a su ciently long period of time or exhibited some instabilities. Downward V-jumps revealed only one kind of instability, namely, lamella termination, leading, self-evidently, to an increase in average spacing. Upward V-jumps gave rise to a more complex response combining oscillations and lamella splitting. We will first leave aside the oscillations, and assume that in f coincides with the threshold el for the occurrence of lamella elimination, and sup with the threshold br for lamella splitting. To construct a stability diagram, we represented any near steadystate ABAC pattern studied by a point in the -V coordinates using di↵erent symbols according to the long-term evolution of the pattern (stable pattern, lamella splitting, lamella elimination). As shown in Fig. 4a, this revealed the existence of two clearly distinct stability limits, namely, el (V) and br (V). All the observed patterns that initially had a spacing distribution lying within these limits remained stable for the duration of the experiment. As an illustration, Fig. 5 shows two stable ABAC patterns at the same V-value, with -values that di↵er by a factor of about 1.4. 
Incidentally, we note that, in this example, the highest λ-value was only slightly smaller than λ_br and the pattern exhibited externally-sustained oscillations. We also plotted the data in terms of the dimensionless variable λ* = λ/λ_m, where λ_m = K_m V^(−1/2) and K_m = 11.6 µm^(3/2) s^(−1/2) (see below). As can be seen in Fig. 4b, λ*_br (≈ 1.9) was independent of V, within experimental uncertainty. In other words, λ_br obeyed the JH scaling law. By contrast, λ_el clearly deviated from this law at the lowest V-values explored. This is qualitatively similar to what was observed in binary AB patterns.

Instability mechanisms

Figure 6 shows a typical example, in which two successive upward V-jumps triggered oscillations and lamella splitting. The oscillations were coherent on a large scale and clearly belonged to an oscillatory mode which preserves the spatial period (1λO). We did not observe any other mode of oscillation during this study. In Fig. 6, a 1λO oscillation transiently appeared after a first upward V-jump and then was damped out. The second upward V-jump reinstalled the 1λO oscillations, which amplified over time until a series of lamella splitting events occurred, thus decreasing λ_0 and killing the oscillations. It is worth noting that the attenuation/amplification times after the two V-jumps were of the same order of magnitude. This is consistent with the view that λ_sup actually corresponded to the instability threshold spacing for the 1λO mode, as is the case in near-eutectic AB patterns in binary alloys [START_REF] Karma | Morphological instabilities of lamellar eutectics[END_REF][START_REF] Ginibre | Experimental determination of the stability diagram of a lamellar eutectic growth front[END_REF]. However, contrary to what occurs in binary eutectics, there are no stability domains of the 1λO oscillations in the ABAC patterns. Concerning lamella splitting, little can be said of the underlying mechanism, which is intrinsically 3D. On the other hand, we note that each lamella splitting event resulted in the creation of a single new AB grain disrupting the ABAC stacking. In the example of Fig. 6, the motif of the resulting pattern is no longer ABAC but ABABAC, or, more briefly, [AB]_2AC. We will discuss the stability of such complex patterns later on. Incidentally, it can be deduced from Fig. 6 that the error made by merging λ_sup with λ_br was 10%, or less. The mechanism of lamella elimination showed some variability, as illustrated by the two examples of Fig. 7. Nevertheless, the following features were common to all the examples studied: (i) the process was strictly local before the elimination event occurred, but, on the contrary, gave rise to a long-range λ-diffusion process after the event (see Section 4.3); (ii) globally, the process resulted in the elimination of a whole repeat unit and thus in a restoration of the ABAC stacking.

Spacing diffusion in stable ABAC patterns

The best method of proving the stability of a dynamical pattern is probably to demonstrate that any imperfection it may contain spontaneously disappears over time.
We therefore studied the time evolution of spatial modulations of the spacing profile (x) in ABAC patterns with a value of the average spacing 0 belonging to the stability range, as defined in Fig. 4. In all cases, we indeed observed a uniformisation of (x). A clear manifestation of this smoothing process could be observed after isolated lamella elimination events, as shown in Fig. 8, and in the upper part of Fig. 7a. This definitely identifies el and br as being the stability limits of basic ABAC patterns. A few explanatory remarks will be useful before we present our quantitative results. Let us consider a -distribution consisting of a sinusoidal modulation of amplitude A and wavevector k (or wavelength L = 2⇡/k) about an average value 0 . It is a general property of 1D dynamical patterns that, if A is su ciently small and L >> 0 , the time evolution of (x) is governed by an equation of the form: @ @t = D @ 2 @x 2 , ( 1 ) where t is time, and D is a 0 -dependent coe cient called -di↵usion coe cient. The di↵usion equation imposes that the amplitude of the -modulation evolves according to A(t) / exp( D k 2 t). (2) Thus A(t) either increases or decreases exponentially over time depending on the sign of D . It has been shown experimentally and numerically that, in two-phase solidification patterns, D changes sign at a critical spacing value c [D ( c ) = 0)], and that the spontaneous amplification of the -modulation that occurs when 0 < c leads to lamella elimination events. This entails that, in thin-DS binary eutectics, el and c , while theoretically distinct, cannot be distinguished from one another experimentally. We will show shortly that this is not the case as regards the ABAC patterns. We performed six independent experimental determinations of D . In that procedure, we chose as an initial time (t = 0) a moment at which a peaked modulation of existed as a result of a recent elimination event, or some other occurrence. The overall results are displayed in Table 1. Two exemplary cases are shown in . In Fig. 9a (which corresponds to the pattern in Fig. 8), a time series of eight (x)-plots were measured (for clarity, only 3 of them are plotted in Fig. 9a). Functions of the form (x) = 0 + A 1 (t)sin(2⇡x/L) + A 2 (t)sin(2⇡x/(2L)) (3) could be fitted satisfactorily to each (x)-plot, thus yielding a set of A 1 (t) and A 2 (t) data for a given value of L (⇡ 239 µm). We then fitted a decreasing exponential function to the A 1 (t) data, in accordance with Eq. 2. This yielded D ⇡ 0.85±0.02 µm 2 s 1 . In the second case (Fig. 9b), (x) measurements were made after the lamella elimination shown in Fig. 7b. Then, the (x)-plots turned out to be best fitted by the gaussian function (x) = 0 + A(t)exp " x 2 2 (t) # , (4) where A(t) = A p t t 0 , (t) = p 4D (t t 0 ). (5) which is also a well-known solution of the di↵usion equation. In this case, we found D ⇡ 0.43 ± 0.02 µm 2 s 1 . In overall, the margin of error yielded by the best-fit procedures used was about 15% for all the data. The errors from other origins (grain boundaries, sample edges, curved isotherms) were most probably substantially smaller than this value. 
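The extraction of D_λ from a decaying spacing modulation follows directly from Eqs. (1)-(2): a Fourier mode of wavevector k = 2π/L decays as exp(−D_λ k² t). The Python sketch below fits that law with SciPy to synthetic amplitude data chosen to resemble the reported case (L ≈ 239 µm, D_λ ≈ 0.85 µm² s⁻¹); it is an illustration of the fitting procedure, not a re-analysis of the measured plots.

import numpy as np
from scipy.optimize import curve_fit

L = 239.0                      # modulation wavelength, µm (value quoted in the text)
k = 2.0 * np.pi / L            # wavevector, 1/µm

def amplitude(t, A0, D_lambda):
    """A(t) = A0 * exp(-D_lambda * k^2 * t), solution of the lambda-diffusion equation."""
    return A0 * np.exp(-D_lambda * k**2 * t)

# Synthetic amplitude measurements (t in s, A in µm) mimicking a damped modulation.
t_data = np.array([440.0, 1340.0, 2340.0, 3890.0])
true_D = 0.85
A_data = amplitude(t_data, 6.0, true_D) * (1.0 + 0.03 * np.array([0.5, -1.0, 0.8, -0.3]))

popt, pcov = curve_fit(amplitude, t_data, A_data, p0=(5.0, 0.5))
A0_fit, D_fit = popt
print(f"Fitted D_lambda = {D_fit:.2f} µm^2/s (uncertainty ~ {np.sqrt(pcov[1, 1]):.2f})")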
V (µms 1 ) 0 (µm) L (µm) D (µm In the case of binary AB patterns, it has previously been shown that the dependence of D on 0 in the vicinity of c can be approximated by D = K r 0 V 2 G ✓ 1 1 ⇤ 2 0 ◆ + BV 0 ⇤ 0 , (6) where ⇤ 0 = 0 / m , m = p K c /K r V 1/2 , and B is an empirical dimensionless constant [START_REF] Akamatsu | Overstability of lamellar eutectic growth below the minimum-undercooling spacing[END_REF]. K r and K c are called the Jackson-Hunt constants of the alloy (the coe cient K m introduced in Section 4.1 is equal to p K c /K r ). Let us assume that Eq. 6 is also valid for ABAC patterns, and consider K r , K c , and B as adjustable parameters. A least-squares fitting of Eq. 6 to the data of Table 1 yielded the values displayed in Table 2. Due to the dispersion of the experimental data, the error margin is rather large (' 25%). In spite of this uncertainty, the data of Table 2 clearly have the same order of magnitude as those generally found in binary eutectics [START_REF] Akamatsu | Overstability of lamellar eutectic growth below the minimum-undercooling spacing[END_REF][START_REF] Akamatsu | Determination of the jacksonhunt constants of the in-in(2)bi eutectic alloy based on in situ observation of its solidification dynamics[END_REF]. K r K c B 2 m V [Ksµm 2 ] [Kµm] [µm 3 s 1 ] 0. V = K c /K r . According to Eq. 6, there exists, at given V and G, a particular value c of the spacing at which D vanishes, that is, for 0 < c , one has D < 0. However, as : Time series of -plots measured in stable ABAC patterns (the time origin was defined just after a lamella elimination event). (a) V = 0.15 µms 1 (same experiment as in Fig. 8). Circles: t = 440 s. Squares: t = 1340 s. Triangles: t = 3890 s. Thick lines: best-fitting curves to the -plots using Eq. 3. Inset: Decreasing exponential function fitted to A 1 (t) using Eq. 2. (b) V= 0.21 µms 1 (same experiment as in Fig. 7b). Circles: t = 1710 s. Squares: t = 2330 s. Triangles: t = 2940 s. Diamonds: t = 4130 s. Thick lines: best-fitting curves to the -plots using Eq. 4. Inset: best-fitting curves to A(t) (continuous line) and (t) (broken line) using Eq. 5. mentioned above, we did not observe the amplification of a modulation, even in those ABAC patterns, in which lamella elimination occurred. This contrasts to what has been observed previously in two-phase patterns during thin-DS of various binary-eutectic alloys. Therefore, we are led to conclude that, in ABAC growth patterns, the threshold el for lamella elimination is not related to c . We obtained a c (V) curve by plugging the data of Table 2 into Eq. 6, and setting D = 0. This curve has been plotted in Fig. 4b as a dashed line. As can be seen, the calculated values of c are close to, and probably smaller than el . This supports the view that a -di↵usion instability exists, but is hidden by the instability process leading to lamella elimination. What then was this process? Close-up examinations of the lamella elimination events were only capable of showing that it was very fast and highly localized. To sum up, lamella elimination was triggered by a short-wavelength instability occurring at a -value slightly larger than c . The same has already been observed in other 1D dynamical systems (for instance cellular solidification patterns, see [START_REF] Kopczynski | Critical role of crystalline anisotropy in the stability of cellular array structures in directional solidification[END_REF]). In our case, the instability involved seems to have a wavelength shorter than (i.e. 
the width of the ABAC motif). ABC patterns and complex patterns In addition to the ABAC patterns, we also observed, under certain conditions, other types of extended growth patterns. Recent numerical simulations demonstrated the possible existence of stationary patterns with an ABC repeat unit in ternary eutectics [START_REF] Choudhury | Theoretical and numerical study of lamellar eutectic three-phase growth in ternary alloys[END_REF]. We indeed observed small domains of ABC patterns in the In-In 2 Bi-Sn eutectic, but only as transients following large-amplitude V-jumps (Fig. 10). Interestingly, these patterns were drifting laterally, in accordance with the fact that the ABC motif does not have a mirror symmetry. The order of magnitude of the drift angles observed was 10 o with respect to the growth direction. We also observed three-phase growth patterns with a large variety of superstructures, which remained stable over the duration of the experiments. The simplest of these superstructures was [AB] 2 AC (see Fig. 6). Patterns with much wider superstructures could be obtained by applying a particular velocity program during the melting-annealing-invasion procedure (see Section 3). Two examples, one with an [AB] 2 [AC] 2 motif, the other with an [AB] 4 [AC] 6 B motif are shown in Fig. 11. These patterns, although essentially periodic, contained numerous (stationary) "stacking faults". We note the presence of stationary BC interfaces in Fig. 11b, which was exceptional. The stability of such patterns is somewhat surprising given that an [AB] m [AC] n pattern actually consists of wide adjacent two-phase domains, which, under other conditions, could invade each other. The explanation Conclusion We have shown experimentally that three-phase ABAC growth patterns in thin-sample directional solidification of a ternary eutectic alloy have a finitewidth stability range of spacing. The upper stability limit corresponds to lamellar branching, and the lower limit to lamella elimination. We have measured the value of the -di↵usion coe cient in a series of stable patterns with di↵erent V and values in the particular case of the In-In 2 Bi-Sn eutectic. We have evaluated the ABAC Jackson-Hunt constant of the alloy to be 2 m V = 135 ± 20 µm 3 s 1 . In terms of the reduced spacing ⇤ = / m , the upper instability threshold was shown to be independent of V (⇤ sup = 1.9 ± 0.2). The lower instability threshold ⇤ in f (of about 0.6 at the smallest pulling velocity used in this study, that is, V = 0.01 µms 1 ) increases with V, and is roughly equal to 1 above V = 0.2 µms 1 . In Ref [START_REF] Witusiewicz | In situ observation of microstructure evolution in low-melting bi-in-sn alloys by light microscopy[END_REF], Witusiewicz et al. have measured an average value < > of the ABAC spacing in the In-In 2 Bi-Sn eutectic under the same thermal gradient as we used, over a large V range. Using the above mentioned estimate of m , we may sum up their results as < ⇤ >⇡ 1.5 at high V, which is well inside the stability range we have measured and hence agrees with the results of this study. This investigation opens up new questions such as: What is the precise nature of the instability process which triggers lamella elimination and thereby determines the value of in f in ABAC patterns? How can three-phase patterns with a complex superstructure grow in a stationary way? Numerical studies will probably be necessary in order to elucidate these questions. 
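As a quick numerical check of the figures quoted in this conclusion (λ_m² V ≈ 135 µm³ s⁻¹, hence λ_m = K_m V^(−1/2) with K_m ≈ 11.6 µm^(3/2) s^(−1/2), and λ*_sup ≈ 1.9), the short Python sketch below evaluates λ_m and the implied upper stability limit over the explored velocity range. It only re-derives numbers already reported in the text; for instance at V = 0.11 µm s⁻¹ it gives λ_sup ≈ 66 µm, consistent with the 60 µm pattern of Fig. 5 lying just below the upper limit.

import numpy as np

K_m = 11.6              # µm^(3/2) s^(-1/2), i.e. sqrt(lambda_m^2 * V) = sqrt(135)
lambda_star_sup = 1.9   # dimensionless upper stability limit reported above

V = np.array([0.01, 0.05, 0.11, 0.2, 0.35, 0.8])   # µm/s, explored pulling velocities
lambda_m = K_m / np.sqrt(V)                        # JH scaling length, µm
lambda_sup = lambda_star_sup * lambda_m            # estimated upper limit, µm

for v, lm, ls in zip(V, lambda_m, lambda_sup):
    print(f"V = {v:4.2f} µm/s :  lambda_m ≈ {lm:6.1f} µm,  lambda_sup ≈ {ls:6.1f} µm")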
Figure 1 : 1 Figure 1: Sketch of the three-phase ABAC repeat unit of a ternary-eutectic directionalsolidification pattern. The thermal gradient is oriented vertically. In a thin sample of the eutectic In-In 2 Bi-Sn alloy, the letters A, B, and C refer to the In 2 Bi, -In, and -Sn crystal phases, respectively. L: liquid. : local spacing value. Figure 2 : 2 Figure 2: Near steady-state In-In 2 Bi-Sn solidification pattern. V = 0.35 µms 1 . Bar: 100 µm. Leftmost inset: (x)-plots at the Z1 and Z2 time points. The subdivision of the field of view into three areas along the x axis is explained in the text. Rightmost inset: close-up view of an ABAC pattern at V = 0.11 µms 1 (Horizontal dimension: 90 µm). Figure 3 : 3 Figure 3: a) Invasion of an AC two-phase growth front by a three-phase solid during the early stages of thin-DS. V = 0.53 µms 1 . b) Snapshot of the same area 300s later (see the early formation of an ABAC pattern on the left). The pronounced curvature of the growth front is due to the fact that it has not yet grown out of the funnel-shaped crystal selector placed at the cold end of the sample. Figure 4 : 4 Figure 4: Stability diagram of the ABAC lamellar patterns: (a) spacing , and (b) dimensionless spacing ⇤ = / m of the pattern (see text), as a function of velocity V. Triangles: lamella splitting.Circles: lamella elimination. Diamonds: six stable patterns, in which the -di↵usion coe cient was measured (see Table1). Continuous line: best-fitting br / V 1/2 curve. Broken line: the -di↵usion instability threshold c as given by Eq. 6 with D = 0. Vertical bar: error margin. Figure 5 : 5 Figure 5: Near-uniform ABAC lamellar patterns observed at V = 0.11 µms 1 (bar: 50µm), with two di↵erent average spacing values: a) 0 = 42.5µm; b) 0 = 60µm. Figure 6 : 6 Figure 6: Oscillations and lamella splitting events triggered by two successive velocity-jumps. V 0 = 0.11 µms 1 . V 1 = 0.13 µms 1 . V 2 = 0.20 µms 1 . The lowest line of the field of view corresponds to the time of the jump from V 0 to V 1 . Image width: 610 µm. Figure 7 : 7 Figure 7: Two di↵erent examples of lamella elimination events occurring after a long pulling time at constant velocity. a) V = 0.177 µms 1 . b) V = 0.21 µms 1 . Bar: 50 µm. Figure 8 : 8 Figure 8: Time evolution of an ABAC pattern in the wake of an elimination event. Arrows: time points corresponding to the lambda plots shown in Fig. 9a. V = 0.15 µms 1 . The image has been compressed by a factor of 3 along the growth direction in order to highlight the time evolution of (x). Image width: 468 µm Figure 9 9 Figure9: Time series of -plots measured in stable ABAC patterns (the time origin was defined just after a lamella elimination event). (a) V = 0.15 µms 1 (same experiment as in Fig.8). Circles: t = 440 s. Squares: t = 1340 s. Triangles: t = 3890 s. Thick lines: best-fitting curves to the -plots using Eq. 3. Inset: Decreasing exponential function fitted to A 1 (t) using Eq. 2. (b) V= 0.21 µms 1 (same experiment as in Fig.7b). Circles: t = 1710 s. Squares: t = 2330 s. Triangles: t = 2940 s. Diamonds: t = 4130 s. Thick lines: best-fitting curves to the -plots using Eq. 4. Inset: best-fitting curves to A(t) (continuous line) and (t) (broken line) using Eq. 5. Figure 10 :Figure 11 : 1011 Figure 10: Transient ABC patterns observed in two distinct experiments: (a) a short time after the onset of pulling (V = 0.531 µms 1 ); (b) after a velocity decrease of a factor 10 (V = 0.071 µms 1 ). Bars: 50 µm. 
Table 1: Measured values of the λ-diffusion coefficient D_λ (µm² s⁻¹) at the indicated values of V and λ_0. L: wavelength of the λ-modulation. (*) Fit by a Gaussian function.

Table 2: Best-fit values of the adjustable coefficients of Eq. 6 and of λ_m²V: K_r ≈ 0.0166 K s µm⁻², K_c ≈ 2.23 K µm, B ≈ 0.054, λ_m²V ≈ 135 µm³ s⁻¹.

Acknowledgements: We thank Patricia Ott for her help in the preparation of the samples. We thank the French Center for Spatial Research (CNES) for financial support. This work was financially funded by TÜBITAK 3501 (Grant no: 212M013) and the European Commission Marie Curie Career Integration Grant FP7-PEOPLE-2012-CIG (NEUSOL 334216) for authors MS and SY. One of us (SY) benefited from an Erasmus grant.
34,795
[ "1001716" ]
[ "439880", "101592", "439880", "101592", "439880", "101592", "439880", "101592", "439880", "101592" ]
01472598
en
[ "spi", "info" ]
2024/03/04 23:41:46
2016
https://hal.science/hal-01472598/file/Article.pdf
Louis Hawley Wael Suleiman External Force Observer for Medium-sized Humanoid Robots In this paper, we introduce a method to estimate the magnitude of an external force applied on a humanoid robot. The approach does not require using force/torque sensors but instead uses measurements from commonly available forcesensing resistors (FSR) inserted under the feet of the humanoid robot. This approach is particularly interesting for affordable medium-sized humanoid robots such as Nao and Darwin-OP. The main idea is to use a simplified dynamic model of a linear inverted pendulum model (LIPM) subjected to an external force, and the information from the robot inertial measurement unit (IMU) and FSR sensors. The proposed method was validated on a Nao humanoid robot to estimate the external force applied in the sagittal plane through two experimental scenarios, and the results pointed out the efficiency of the proposed observer. I. INTRODUCTION Humanoid robots are good candidates to perform manipulation and transport tasks since they possess articulated arms. These tasks require the robot to be able to adjust its gait in order to take into account the external forces exerted on it. In such situations, the robot usually uses his interoceptive/exteroceptive sensors to estimate those forces and a representative dynamic model to generate stable patterns. A situation in which the knowledge of the transported mass or applied force could be very useful to generate a more stable gait is the transportation of an object on a cart by a robot. This scenario is considered in [START_REF] Rioux | Humanoid Navigation and Heavy Load Transportation in a Cluttered Environment[END_REF], where a motion planner uses different sets of motion primitives depending on the mass transported on the cart. The estimation of the load is done by making the robot execute a turning in place motion and by looking at the error between the planned motion and the actual robot position. Although this approach can effectively differentiate a light load from a heavy one, the differentiation is done by roughly applying a binary operator to the error. The planning algorithm then chooses the heavy load or small load primitives set accordingly. A better estimation of the load could allow a more optimal motion planning as more primitive sets adapted to different load could be used. Moreover, by integrating the estimated external force into the pattern generation module, more stable motions can be obtained even with a relatively heavy load. A. Relevant works Improving the robustness of humanoids walk against disturbance is a topic of interest since these robots are expected to perform tasks in a variety of human environments. In [START_REF] Kaneko | Disturbance observer that estimates external force acting on humanoid robots[END_REF], a disturbance observer that estimates the magnitude of an external force is presented. The observer uses measurement from an IMU and six-axis force/torque (F/T) sensors located at the ankles to detect and estimate the magnitude of a strong force such as a kick to the chest of the robot or a collision with the environment. Whereas a kick to the chest can be represented by an impulse input applied to the system, a pushing motion is equivalent to a step input and it is therefore expected that the observer would not have the same performance in the latter case. Some research work were also conducted to estimate an external force applied on a small humanoid robot. 
In [START_REF] Berger | Dynamic Mode Decomposition for perturbation estimation in human robot interaction[END_REF] and [START_REF] Berger | Inferring guidance information in cooperative human-robot tasks[END_REF], the authors are interested in predicting the perturbations transmitted to a robot in a human-robot interaction. Essentially, the proposed approach is to generate a probabilistic model of the sensor outputs to predict future readings. Then, if the measurements are not in accordance with the model, it can be concluded that an external force is applied. The authors were able to use the system in a human-robot interaction to infer the human intention and move the robot accordingly. However, in a robot-human interaction, the robot does not need to modify its walking gait since the human will apply more or less force depending on the robot's reaction. But if the humanoid is pushing or transporting an object, it must adjust its gait to remain stable. In this case, an estimation of the force magnitude would be necessary. In [START_REF] Harada | ZMP Analysis for Arm / Leg Coordination[END_REF], [START_REF]Pushing manipulation by humanoid considering two-kinds of ZMPs[END_REF], the ZMP dynamics are analyzed for a humanoid robot performing a pushing manipulation task. In this case, the external force is directly measured through F/T sensors located in the wrists of an HRP-2 robot. Their results reveal that the robot falls if the motion generator does not consider the force exerted on the grippers. On the other hand, the required compensation can be extracted by computing two ZMPs. The first ZMP, referred to as the generalized ZMP, is computed by considering the gravity and the reaction force and moment on the floor. A second ZMP is computed by considering the external force applied on the grippers. The difference between those two ZMPs is added to the desired ZMP trajectory, which is generated using the LIPM model. This procedure was successfully applied in simulation to generate a stable motion. There are also numerous works where the ZMP position error is monitored and integrated into the control law. In these cases, quantifying the perturbation and determining its origin were not addressed. As the ZMP was successfully used in all these previous works to make humanoids walk more robustly against disturbances, a plausible approach to approximate an external force would be to monitor the ZMP variation and link it to an external force. The objective is then to have a dynamic model of the task being executed and to be able to measure the actual ZMP with enough precision. In this work, assuming the robot is equipped with force-sensing resistors (FSR) under the feet, we propose a method for estimating the magnitude and direction of an external force in the sagittal plane using the LIPM model and ZMP measurements obtained from the FSR. The proposed observer was designed for and validated on a Nao humanoid robot. Nao [START_REF] Gouaillier | Mechatronic design of NAO humanoid[END_REF] is a medium-sized humanoid robot manufactured by Aldebaran. Unlike complex and highly sophisticated humanoid robots, such as HRP-2 or Atlas, Nao does not have six-axis F/T sensors and possesses only FSR under the feet. The LIPM model and the dynamic equations with an external force are presented in Section II. Section III deals with estimating the ZMP using the available sensors. The external force observer architecture and its implementation are addressed in Section IV.
Finally, in Section V, experimental results are presented and the observer performance is analyzed and discussed. II. DYNAMIC MODELS A. LIPM dynamics The LIPM model has been widely used to generate stable walking patterns [START_REF] Kajita | Experimental study of biped dynamic walking in the linear inverted pendulum mode[END_REF]. According to this model, the Center of Mass (CoM) only moves under the action of gravity. The dynamics of the LIPM can be decoupled within each axis and therefore we only show hereafter the equation in the sagittal plane. The motion dynamics can be written as $$M_c\,\ddot{x}_c = M_c\, g\, \frac{x_c}{Z_c} \qquad (1)$$ where $M_c$ is the mass of the inverted pendulum, $Z_c$ is the height of the CoM, $g$ is the magnitude of the gravitational acceleration, and $x_c$ and $\ddot{x}_c$ are the position and the acceleration of the projection of the CoM on the sagittal axis. Note that $x_c$ is expressed in the pivot frame, which corresponds to the ankle on the real system. B. Dynamic model with external force The general dynamic model of a robot walking with an external force applied is presented in Fig. 1, and is defined as follows: $$M_c\,\ddot{x}_c = M_c\, g\, \frac{x_c}{Z_c} - F_{ext} \qquad (2)$$ where $F_{ext}$ is an external force. For instance, this force might be the result of the robot pushing/lifting an object or interacting/collaborating with a human. Here, we have a system identification problem where we need to estimate $F_{ext}$ in order to generate a stable motion. Although monitoring $\ddot{x}_c$ or $x_c$ might give us an insight into the external force that is applied, it would not be very useful in the case of a position-controlled robot such as Nao. Unless the external force is strong enough that the motors of the robot are unable to keep their position, the CoM position will not be affected. However, it is worth mentioning that a strong impulsive disturbance such as a push will affect the CoM position and acceleration; this is the main idea behind the external force observer in [START_REF] Kaneko | Disturbance observer that estimates external force acting on humanoid robots[END_REF]. 1) ZMP without external force ($x_{ZMP}$): First, we consider the ZMP according to its general definition for a humanoid robot, approximated by a LIPM, without external force. In other words, we only consider the gravity and the inertial force of the robot. Recalling that the ZMP is a point on the ground where the inertial and gravity moments cancel out, it is defined as $$x_{ZMP} = x_c - \frac{Z_c\,\ddot{x}_c}{g} \qquad (3)$$ We will refer to the ZMP computed without considering the external force as $x_{ZMP}$. $x_{ZMP}$ can be directly obtained from the desired ZMP trajectory. It can also be computed using the acceleration from the IMU to approximate $\ddot{x}_c$ and the direct kinematics to get $x_c$. 2) ZMP with external force ($\bar{x}_{ZMP}$): The true ZMP can be found by considering every force acting on the system. From (2), the ZMP is defined as: $$\bar{x}_{ZMP} = x_c - \frac{Z_c\,\ddot{x}_c}{g} - \frac{F_{ext}\, Z_c}{M_c\, g} \qquad (4)$$ We will refer to the ZMP computed considering the external force as $\bar{x}_{ZMP}$. The basic idea of our external force observer is that the difference between the actual ZMP ($\bar{x}_{ZMP}$) and the planned ZMP ($x_{ZMP}$) is proportional to the external force. The main challenge is then to get a good estimate of the actual ZMP. The next section deals with estimating the ZMP with measurements from the available sensors. III. ZMP ESTIMATION As presented in [START_REF] Shim | Study of Zmp Measurement for Bipedrobot Using Fsr Sensor[END_REF], the ZMP can be estimated with force-sensing resistors located under the feet.
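To make the relation in (3)-(4) concrete, the following minimal Python sketch evaluates both ZMPs and shows that their difference is exactly $F_{ext} Z_c/(M_c g)$; the function and variable names are illustrative assumptions and are not taken from the authors' implementation.

```python
# Minimal sketch of equations (1)-(4): sagittal ZMP of a LIPM with and without
# an external force.  Function and variable names are illustrative only.

G = 9.81  # gravitational acceleration [m/s^2]


def zmp_lipm(x_c, x_c_ddot, z_c, g=G):
    """Planned ZMP, eq. (3): only gravity and the inertial force are considered."""
    return x_c - z_c * x_c_ddot / g


def zmp_with_force(x_c, x_c_ddot, z_c, f_ext, m_c, g=G):
    """Actual ZMP, eq. (4): the external force shifts the ZMP by F_ext*Z_c/(M_c*g)."""
    return x_c - z_c * x_c_ddot / g - f_ext * z_c / (m_c * g)


if __name__ == "__main__":
    # Parameters reported in Section V: Z_c = 0.315 m, M_c = 4.5 kg.
    z_c, m_c = 0.315, 4.5
    x_c, x_c_ddot = 0.01, 0.0   # CoM state of a quasi-static robot (illustrative)
    f_ext = 3.8                 # backward pull of ~3.8 N, as in Case Study 1
    diff = zmp_lipm(x_c, x_c_ddot, z_c) - zmp_with_force(x_c, x_c_ddot, z_c, f_ext, m_c)
    print(diff)                 # = f_ext * z_c / (m_c * g), about 0.027 m
```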
The main idea is that by measuring the applied force on each force sensor, it is possible to estimate the center of pressure (CoP) by calculating the position of the equivalent force. Then, recalling that the ZMP and the CoP are the same point if the robot is in a stable configuration [START_REF] Vukobratović | Zero-Moment Point -Thirty Five Years of Its Life[END_REF], we can estimate the ZMP by computing the CoP: $$\bar{x}_{ZMP} = x_{CoP} \qquad (5)$$ A. Center of pressure measurements Each foot of Nao possesses 4 FSRs, each returning a force between 0 and 25 N. Using these measurements, one can easily compute the center of pressure on one foot as $$x_{CoP} = \frac{\sum_{i=1}^{4} F_i\, x_i}{\sum_{i=1}^{4} F_i} = \frac{\sum_{i=1}^{4} F_i\, x_i}{F_T} \qquad (6)$$ where $F_i$ is the force measured at the $i$-th FSR, $x_i$ is the position of the $i$-th FSR in the foot frame and $F_T$ is the total force applied on the foot. During walking, we assume that the robot is always in a single support phase. Therefore, the support foot must be detected to apply the previous formula. A possible solution to determine the support foot is to consider the total force applied on each foot (measured with the FSR) as the decision variable. However, using a simple boolean operator results in false support-foot detections since the force measured by the FSR becomes highly noisy when a foot lands on the floor. The adopted solution, similar to [START_REF] Xinjilefu | Center of Mass Estimator for Humanoids and its Application in Modelling Error Compensation, Fall Detection and Prevention[END_REF], is to implement a Schmitt trigger to process the support foot state. The Schmitt trigger uses a low and a high threshold to determine the value of the output. In our case, the input to the trigger is the total force on the right foot minus the total force on the left foot. Hence, a positive output means that the support foot is the right one and a zero output means that it is the left one. Satisfactory results were obtained by setting the positive-going threshold at 2 N and the negative-going threshold at -2 N. However, when the robot is immobile and in double-support mode, the ZMP (or CoP) can simply be found using the following formula $$x_{CoP} = \frac{x^{l}_{CoP}\, F^{l}_{T} + x^{r}_{CoP}\, F^{r}_{T}}{F^{l}_{T} + F^{r}_{T}} \qquad (7)$$ where $x^{*}_{CoP}$ and $F^{*}_{T}$ are computed as in (6) for the left and right feet. B. External force observer From (3) and (4), the external force $F_{ext}$ can be easily found: $$F_{ext} = \frac{M_c\, g\, (x_{ZMP} - \bar{x}_{ZMP})}{Z_c} \qquad (8)$$ where $x_{ZMP}$ is the ZMP computed using the LIPM model and $\bar{x}_{ZMP}$ is measured with the FSR. IV. IMPLEMENTATION The architecture of the observer is summarized in Fig. 2. Implementation details are presented in the following subsections. A. $x_{ZMP}$ Computation As mentioned before, $x_{ZMP}$ corresponds to the position at which the ZMP would be if no external force were applied. For humanoid robots that use a ZMP-based control scheme to generate the walking gait, it typically corresponds to the desired ZMP trajectory. However, to compute the ZMP with (3), the position and acceleration of the CoM must be approximated. 1) CoM acceleration: The acceleration of the CoM can be extracted from the IMU, which is a standard part of a humanoid robot's sensor suite. As for Nao, the IMU provides measurements from a three-axis accelerometer and a two-axis gyroscope. Also, an existing on-board algorithm provides an estimation of the torso orientation. In this work, we make the assumption that the CoM coincides with the IMU. Therefore, incoming data from the accelerometer can be used to approximate the acceleration of the CoM.
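The computations in (6)-(8) and the hysteresis-based support-foot detection are simple enough to sketch in a few lines of Python; the FSR layout, thresholds, and function names below are illustrative assumptions and do not correspond to Nao's actual API.

```python
# Sketch of equations (6)-(8): CoP from the four FSRs of a foot, support-foot
# detection with a Schmitt trigger, and the external force estimate.
# FSR positions, thresholds and names are illustrative, not Nao's actual API.

G = 9.81


def foot_cop(fsr_forces, fsr_positions_x):
    """Eq. (6): sagittal CoP of one foot (foot frame) and total vertical force."""
    total = sum(fsr_forces)
    if total <= 0.0:
        return 0.0, 0.0
    cop = sum(f * x for f, x in zip(fsr_forces, fsr_positions_x)) / total
    return cop, total


class SupportFootTrigger:
    """Schmitt trigger on (right total force - left total force):
    'right' above +2 N, 'left' below -2 N, previous state kept in between."""

    def __init__(self, high=2.0, low=-2.0):
        self.high, self.low = high, low
        self.state = "left"

    def update(self, f_right_total, f_left_total):
        diff = f_right_total - f_left_total
        if diff > self.high:
            self.state = "right"
        elif diff < self.low:
            self.state = "left"
        return self.state


def double_support_cop(cop_left, f_left, cop_right, f_right):
    """Eq. (7): CoP when the robot is immobile in double support."""
    return (cop_left * f_left + cop_right * f_right) / (f_left + f_right)


def external_force(x_zmp_lipm, x_zmp_measured, m_c, z_c, g=G):
    """Eq. (8): force estimate from the difference between the two ZMPs."""
    return m_c * g * (x_zmp_lipm - x_zmp_measured) / z_c
```

In this sketch, the Schmitt-trigger output selects which foot's FSR readings are fed to foot_cop during walking, while double_support_cop is only needed when the robot stands still.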
Fig. 3 presents a block diagram of the raw acceleration data processing. The projection into an inertial frame and the gravity compensation steps are implemented as presented in [START_REF] Bellaccini | Manual guidance of humanoid robots without force sensors: Preliminary experiments with NAO[END_REF]. 2) CoM Position: In order to determine the CoM position, we make the assumption that the trunk of Nao is the center of mass. This assumption is also made in the built-in walk engine of Nao as presented in [START_REF] Gouaillier | Omni-directional closedloop walk for NAO[END_REF]. Therefore, a simple direct kinematics computation with encoder readings from the servo motors is used to determine the position of the CoM in the support foot frame. The support foot is determined using the method presented in Section III-A. B. Sway motion filtering During walking, the CoM of the robot sways back and forth around its reference trajectory. Obviously, the ZMP will also oscillate in the same manner. If this swaying oscillation is not removed at some point, it will highly degrade the performance of the observer. This problem has already been tackled in [START_REF] Oriolo | Vision-based trajectory control for humanoid navigation[END_REF], where the filtering of the Nao robot sway motion is analyzed. It is mentioned that the frequency of the sway motion for the Nao robot is close to 1 Hz. This claim was validated on our robot, and a first-order low-pass filter with a cut-off frequency of 0.6 Hz has been used. Fig. 4 presents the error signal between the two ZMPs before and after filtering during a sagittal walk. V. EXPERIMENTAL RESULTS The observer was tested in two scenarios: a stationary robot with an external force and a walking robot with an external force. All the experimental results were generated with the following parameters: $g$ = 9.81 m/s², $Z_c$ = 0.315 m, sampling period ($T$) = 16.7 ms and $M_c$ = 4.5 kg. A. Case Study 1: Stationary with external force The dynamic model of the first scenario considered is presented in Fig. 6. The setup used for this experiment is presented in Fig. 5. In this experiment, the robot is standing still and attached from the waist to a mass in the form of a water bottle. At some point, the mass is released and the force propagates to the robot through a basic pulley system. Fig. 8 presents the results of an experiment where a constant force of approximately 3.8 N was applied on a static robot. Fig. 8(a) shows that the external force has no significant effect on the CoM acceleration and that the servo motors are able to keep their position despite the added force, since $x_{ZMP}$ is not affected. On the other hand, the ZMP measured with the FSR, $\bar{x}_{ZMP}$, is rapidly shifted by more than 3 cm. In Fig. 8(b), the difference between the ZMPs is presented after low-pass filtering. Since the robot is not walking, the filter presented in Section IV-B is not necessary here. Instead, a low-pass filter with a cut-off frequency of 4 Hz was used to smooth the signal. Fig. 8(c) shows that the force estimated by the observer is close to the actual one. In order to characterize the performance of the observer, we define the settling time as the time needed for the output to reach and stay within a 20% margin of the reference value. In this case, the settling time is close to 0.5 s. Similar results were obtained in experiments performed with external forces ranging from 1 N to 4 N. In each case, the observer was able to estimate the external force within a 20% error margin. B. Case Study 2: Walking with external force The experimental setup used in this case is the same as in Case 1 (Fig. 5).
Initially, the robot is standing still and attached from the waist to a mass which is in full contact with the ground. The robot then starts to walk forward and, at some point, the rope becomes completely tensed and the mass is lifted off the ground by the walking robot, as shown in Fig. 7. The lifted mass acts as an external force pulling the robot backwards. The experimental results of a forward walk with an external force of 3 N are presented in Fig. 9. As shown in Fig. 9(a), the two ZMPs are similar until the external force is applied at around 10 s. At this point, $\bar{x}_{ZMP}$ is shifted to the back, as opposed to $x_{ZMP}$, which continues to oscillate around 0. However, as demonstrated in Fig. 9(b), there is a small error between the two ZMPs even when no force is applied. Thus, a threshold operation is applied to the error signal to avoid detecting a false external force. Accordingly, the observer output is more stable, but external forces of magnitude less than 0.5 N are not detected. In Fig. 9(c), the force estimated by the observer is given along with the true external force. One can see that the observer successfully estimated the true external force. Also, as can be seen, the settling time is slightly more than 1 s. In this experiment, it corresponds to two walking steps for the robot. During similar experiments, the observer was able to estimate external forces ranging from 0.8 N to 3.8 N within a 20% error margin. Note that the maximum force that we could apply on the walking Nao, using the robot's built-in walk engine, without making it fall, was 3.8 N. VI. CONCLUSION In this paper, we introduced a method to estimate the magnitude of an external force acting on a humanoid robot without using expensive 6-axis force/torque sensors. Essentially, it uses measurements from force-sensing resistors located under the feet of the robot to estimate the position of the ZMP and compare it to a reference ZMP that is computed using the linear inverted pendulum model. This approach is mainly interesting for medium-sized humanoid robots, and it was successfully validated on a Nao robot in two different scenarios. Future work will focus on integrating the external force observer into the pattern generation module. Fig. 1. Simplified model of a humanoid robot with an external force. Fig. 2. Block diagram of the observer. Fig. 3. Block diagram of the processing done on the raw acceleration data. Fig. 4. ZMP error during a sagittal walk before and after filtering; at T = 10 s, a constant force was applied. Fig. 5. Experimental setup used to apply a known external force. Fig. 7. Snapshots of an experiment where a mass is lifted off the floor by a Nao robot walking forward. Fig. 8. Case Study 1: ZMP variation and force estimation when an external force of approximately 3.8 N is pulling a stationary Nao backward. At approximately 1 second, the force is applied. (a) The displacement of $x_{ZMP}$ and $\bar{x}_{ZMP}$ when the force is applied. (b) The difference between the two ZMPs after applying a low-pass filter. (c) The actual external force and the observer estimation. ACKNOWLEDGMENT This research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
20,890
[ "4778" ]
[ "10810", "10810" ]
01472602
en
[ "math" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01472602/file/prelim_arxiv.pdf
G David J Feneuil S Mayboroda ELLIPTIC THEORY FOR SETS WITH HIGHER CO-DIMENSIONAL BOUNDARIES Keywords: harmonic measure, boundary of co-dimension higher than 1, trace theorem, extension theorem, degenerate elliptic operators, maximum principle, Hölder continuity of solutions, De Giorgi-Nash-Moser estimates, Green functions, comparison principle, homogeneous weighted Sobolev spaces AMS classification: 28A75, 28A78, 31B05, 35J25, 35J70, 42B20, 42B25, 42B37 Many geometric and analytic properties of sets hinge on the properties of harmonic measure, notoriously missing for sets of higher co-dimension. The aim of this manuscript is to develop a version of elliptic theory, associated to a linear PDE, which ultimately yields a notion analogous to that of the harmonic measure, for sets of codimension higher than 1. To this end, we turn to degenerate elliptic equations. Let Γ ⊂ R n be an Ahlfors regular set of dimension d < n -1 (not necessarily integer) and Ω = R n \ Γ. Let L =div A∇ be a degenerate elliptic operator with measurable coefficients such that the ellipticity constants of the matrix A are bounded from above and below by a multiple of dist(•, Γ) d+1-n . We define weak solutions; prove trace and extension theorems in suitable weighted Sobolev spaces; establish the maximum principle, De Giorgi-Nash-Moser estimates, the Harnack inequality, the Hölder continuity of solutions (inside and at the boundary). We define the Green function and provide the basic set of pointwise and/or L p estimates for the Green function and for its gradient. With this at hand, we define harmonic measure associated to L, establish its doubling property, non-degeneracy, change-of-the-pole formulas, and, finally, the comparison principle for local solutions. In another article to appear, we will prove that when Γ is the graph of a Lipschitz function with small Lipschitz constant, we can find an elliptic operator L for which the harmonic measure given here is absolutely continuous with respect to the d-Hausdorff measure on Γ and vice versa. It thus extends Dahlberg's theorem to some sets of codimension higher than 1. Past few years have witnessed remarkable progress in the study of relations between regularity properties of the harmonic measure ω on the boundary of a domain of R n (for instance, its absolute continuity with respect to the Hausdorff measure H n-1 ) and the regularity of the domain (for instance, rectifiability properties of the boundary). In short, the emerging philosophy is that the rectifiability of the boundary is necessary for the absolute continuity of ω with respect to H n-1 , and that rectifiability along with suitable connectedness assumptions is sufficient. Omitting for now precise definitions, let us recall the main results in this regard. The celebrated 1916 theorem of F.& M. Riesz has established the absolute continuity of the harmonic measure for a simply connected domain in the complex plane, with a rectifiable boundary [RR]. The quantifiable analogue of this result (the A ∞ property of harmonic measure) was obtained by Lavrent'ev in 1936 [Lv] and the local version, pertaining to subsets of a rectifiable curve which is a boundary of a simply connected planar domain, was proved by Bishop and Jones in 1990 [BJ]. In the latter work the authors also showed that some connectedness is necessary for the absolute continuity of ω with respect to H n-1 , for there exists a planar set with rectifiable boundary for which the harmonic measure is singular with respect to H n-1 . 
The situation in higher dimensions, n ≥ 3, is even more complicated. The absolute continuity of ω with respect to H n-1 was first established by Dahlberg on Lipschitz graphs [Da] and was then extended to non-tangentially accessible (NTA) domains with Ahlfors regular boundary in [DJ], [Se], and to more general NTA domains in [Ba]. Roughly speaking, the non-tangential accessibility is an assumption of quantifiable connectedness, which requires the presence of interior and exterior corkscrew points, as well as Harnack chains. Ahlfors regularity simply postulates that the measure of intersection with the boundary of every ball of radius r centered at the boundary is proportional to r n-1 , i.e., that the boundary is in a certain sense n -1 dimensional (we will provide a careful definition below). Similarly to the lower-dimensional case, counterexamples show that some topological restrictions are needed for the absolute continuity of ω with respect to H n-1 [Wu], [Z]. Much more recently, in [START_REF] Hofmann | Uniform rectifiability and harmonic measure I: uniform rectifiability implies Poisson kernels in L p[END_REF], [HMU], [AHMNT], the authors proved that, in fact, for sets with Ahlfors regular boundaries, under a (weaker) 1-sided NTA assumption, the uniform rectifiability of the boundary is equivalent to the complete set of NTA conditions and hence, is equivalent to the absolute continuity of harmonic measure with respect to the Lebesgue measure. Finally, in 2015 the full converse, "free boundary" result was obtained and established that rectifiability is necessary for the absolute continuity of harmonic measure with respect to H n-1 in any dimension n ≥ 2 (without any additional topological assumptions) [START_REF] Azzam | Rectifiability of harmonic measure[END_REF]. It was proved simultaneously that for a complement of an (n -1)-Ahlfors regular set the A ∞ property of harmonic measure yields uniform rectifiability of the boundary [HLMN]. Shortly after, it was established that in an analogous setting ε-approximability and Carleson measure estimates for bounded harmonic functions are equivalent to uniform rectifiability [START_REF] Hofmann | Uniform rectifiability, Carleson measure estimates, and approximation of harmonic functions[END_REF], [GMT], and that analogous results hold for more general elliptic operators [START_REF] Hofmann | Transference of scale-invariant estimates from Lipschitz to Non-tangentially accessible to Uniformly rectifiable domains[END_REF], [AGMT]. The purpose of this work is to start the investigation of similar properties for domains with a lower-dimensional boundary Γ. To the best of our knowledge, the only known approach to elliptic problems on domains with higher co-dimensional boundaries is by means of the p-Laplacian operator and its generalizations [LN]. In [LN] the authors worked with an associated Wiener capacity, defined p-harmonic measure, and established boundary Harnack inequalities for Reifenberg flat sets of co-dimension higher than one. Our goals here are different. We shall systematically assume that Γ is Ahlfors-regular of some dimension d < n -1, which does not need to be an integer. This means that there is a constant C 0 ≥ 1 such that (1.1) C -1 0 r d ≤ H d (Γ ∩ B(x, r)) ≤ C 0 r d for x ∈ Γ and r > 0. We want to define an analogue of the harmonic measure, that will be defined on Γ and associated to a divergence form operator on Ω = R n \ Γ. 
We still write the operator as L = -divA∇, with A : Ω → M n (R), and we write the ellipticity condition with a different homogeneity, i.e., we require that for some C 1 ≥ 1, dist(x, Γ) n-d-1 A(x)ξ • ζ ≤ C 1 |ξ| |ζ| for x ∈ Ω and ξ, ζ ∈ R n , (1.2) dist(x, Γ) n-d-1 A(x)ξ • ξ ≥ C -1 1 |ξ| 2 for x ∈ Ω and ξ ∈ R n . (1.3) The effect of this normalization should be to incite the analogue of the Brownian motion here to get closer to the boundary with the right probability; for instance if Γ = R d ⊂ R n and A(x) = dist(x, Γ) -n+d+1 I, it turns out that the effect of L on functions f (x, t) that are radial in the second variable t ∈ R n-d is the same as for the Laplacian on R d+1 + . In some sense, we create Brownian travelers which treat Γ as a "black hole": they detect more mass and they are more attracted to Γ than a standard Brownian traveler governed by the Laplacian would be. The purpose of the present manuscript is to develop, with merely these assumptions, a comprehensive elliptic theory. We solve the Dirichlet problem for Lu = 0, prove the maximum principle, the De Giorgi-Nash-Moser estimates and the Harnack inequality for solutions, use this to define a harmonic measure associated to L, show that it is doubling, and prove the comparison principle for positive L-harmonic functions that vanish at the boundary. Let us discuss the details. We first introduce some notation. Set δ(x) = dist(x, Γ) and w(x) = δ(x) -n+d+1 for x ∈ Ω = R n \ Γ, and denote by σ the restriction to Γ of H d . Denote by W = Ẇ 1,2 w (Ω) the weighted Sobolev space of functions u ∈ L 1 loc (Ω) whose distribution gradient in Ω lies in L 2 (Ω, w): (1.4) W = Ẇ 1,2 w (Ω) := {u ∈ L 1 loc (Ω) : ∇u ∈ L 2 (Ω, w)}, and set u W = ´Ω |∇u(x)| 2 w(x)dx 1/2 for f ∈ W . Finally denote by M(Γ) the set of measurable functions on Γ and then set (1.5) H = Ḣ1/2 (Γ) := g ∈ M(Γ) : ˆΓ ˆΓ |g(x)g(y)| 2 |x -y| d+1 dσ(x)dσ(y) < ∞ . Before we solve Dirichlet problems we construct two bounded linear operators T : W → H (a trace operator) and E : H → W (an extension operator), such that T • E = I H . The trace of u ∈ W is such that for σ-almost every x ∈ Γ, Note that the latter geometric fact is enabled specifically by the higher co-dimension (d < n -1), even though our boundary can be quite wild. In fact, a stronger property holds in the present setting and gives, in particular, Harnack chains. There exists a constant C > 0, that depends only on C 0 , n, and d < n -1, such that for Λ ≥ 1 and x 1 , x 2 ∈ Ω such that dist(x i , Γ) ≥ r and |x 1x 2 | ≤ Λr, we can find two points y i ∈ B(x i , r/2) such that dist([y 1 , y 2 ], Γ) ≥ C -1 Λ -d/(n-d-1) r. That is, there is a thick tube in Ω that connects the two B(x i , r/2). Once we have trace and extension operators, we deduce from the Lax-Milgram theorem that for g ∈ H, there is a unique weak solution u ∈ W of Lu = 0 such that T u = g. For us a weak solution is a function u ∈ W such that (1.9) ˆΩ A(x)∇u(x) • ∇ϕ(x)dx = 0 for all ϕ ∈ C ∞ 0 (Ω), the space of infinitely differentiable functions which are compactly supported in Ω. Then we follow the Moser iteration scheme to study the weak solutions of Lu = 0, as we would do in the standard elliptic case in codimension 1. This leads to the quantitative boundedness (a.k.a. Moser bounds) and the quantitative Hölder continuity (a.k.a. De Giorgi-Nash estimates), in an interior or boundary ball B, of any weak solution of Lu = 0 in 2B such that T u = 0 on Γ ∩ 2B when the intersection is non-empty. Precise estimates will be given later in the introduction. 
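Because they are quoted inline above, the ellipticity conditions and the function spaces are easier to parse in display form; the following LaTeX block is only a restatement of (1.2)-(1.5) and of the weak formulation (1.9), with no new content.

```latex
% Restatement of (1.2)-(1.5) and (1.9) in display form (amsmath assumed).
\begin{gather*}
\operatorname{dist}(x,\Gamma)^{\,n-d-1}\, A(x)\xi\cdot\zeta \le C_1\,|\xi|\,|\zeta|,
\qquad
\operatorname{dist}(x,\Gamma)^{\,n-d-1}\, A(x)\xi\cdot\xi \ge C_1^{-1}\,|\xi|^2,
\qquad x\in\Omega,\ \xi,\zeta\in\mathbb{R}^n,
\\
W=\dot W^{1,2}_w(\Omega)=\bigl\{u\in L^1_{\mathrm{loc}}(\Omega)\,:\,\nabla u\in L^2(\Omega,w)\bigr\},
\qquad
\|u\|_W=\Bigl(\int_\Omega |\nabla u(x)|^2\,w(x)\,dx\Bigr)^{1/2},
\\
H=\dot H^{1/2}(\Gamma)=\Bigl\{g\in\mathcal M(\Gamma)\,:\,
\int_\Gamma\!\int_\Gamma \frac{|g(x)-g(y)|^2}{|x-y|^{d+1}}\,d\sigma(x)\,d\sigma(y)<\infty\Bigr\},
\\
u\in W \text{ is a weak solution of } Lu=0 \iff
\int_\Omega A(x)\nabla u(x)\cdot\nabla\varphi(x)\,dx=0
\quad\text{for all }\varphi\in C^\infty_0(\Omega).
\end{gather*}
```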
The boundary estimates are trickier, because we do not have the conventional "fatness" of the complement of the domain, and it is useful to know beforehand that suitable versions of Poincaré and Sobolev inequalities hold. For instance, for u ∈ W , x ∈ Ω = R n , r > 0, and p ∈ 1, 2n n-2 (if n ≥ 3) or p ∈ [1, +∞) (if n = 2). A substantial portion of the proofs lies in the analysis of the newly defined Sobolev spaces. It is important to note, in particular, that we prove the density of smooth functions on R n (and not just Ω) in our weighted Sobolev space W . That is, for any function f in W , there exists a sequence (f k ) k≥1 of functions in C ∞ (R n ) ∩ W such that ff k W tends to 0 and f k converges to f in L 1 loc (R n ). In codimension 1, this sort of property, just like (1.10) or (1.11), typically requires a fairly nice boundary, e.g., Lipschitz, and it is quite remarkable that here they all hold in the complement of any Ahlfors-regular set. This is, of course, a fortunate outcome of working with lower dimensional boundary: we can guarantee ample access to the boundary (cf., e.g., the Harnack "tubes" discussed above), which turns out to be sufficient despite the absence of traditionally required "massive complement". Or rather one could say that the boundary itself is sufficiently "massive" from the PDE point of view, due to our carefully chosen equation and corresponding function spaces. With all these ingredients, we can follow the standard proofs for elliptic divergence form operators. When u is a solution to Lu = 0 in a ball 2B ⊂ Ω, the De Giorgi-Nash-Moser estimates and the Harnack inequality in the ball B don't depend on the properties of the boundary Γ and thus can be proven as in the case of codimension 1. When B ⊂ R n is a ball centered on Γ and u is a weak solution to Lu = 0 in 2B whose trace satisfies T u = 0 on Γ ∩ 2B, the quantitative boundedness and the quantitative Hölder continuity of the solution u are expressed with the help of the weight w. There holds, if m(2B) = ´2B w(y)dy, (1.12) sup B u ≤ C 1 m(2B) ˆ2B |u(y)| 2 w(y)dy 1/2 and, for any θ ∈ (0, 1], (1.13) sup θB u ≤ Cθ α sup B u ≤ Cθ α 1 m(2B) ˆ2B |u(y)| 2 w(y)dy 1/2 , where θB denotes the ball with same center as B but whose radius is multiplied by θ, and C, α > 0 are constants that depend only on the dimensions d and n, the Ahlfors constant C 0 and the ellipticity constant C 1 . We establish then the existence and uniqueness of a Green function g, which is roughly speaking a positive function on Ω × Ω such that, for all y ∈ Ω, the function g(., y) solves Lg(., y) = δ(y) and T g(., y) = 0. In particular, the following pointwise estimates are shown: (1.14) 0 ≤ g(x, y) ≤      C|x -y| 1-d if 4|x -y| ≥ δ(y) C|x-y| 2-n w(y) if 2|x -y| ≤ δ(y), n ≥ 3 Cǫ w(y) δ(y) |x-y| ǫ if 2|x -y| ≤ δ(y), n = 2, where C > 0 depends on d, n, C 0 , C 1 and C ǫ > 0 depends on d, C 0 , C 1 , ǫ. When n ≥ 3, the pointwise estimates can be gathered to a single one, and may look more natural for the reader: if m(B) = ´B w(y)dy, (1.15) 0 ≤ g(x, y) ≤ C |x -y| 2 m(B(x, |x -y|)) whenever x, y ∈ Ω. The bound in the case where n = 2 and 2|x -y| ≤ δ(y) can surely be improved into a logarithm bound, but the bound given here is sufficient for our purposes. Also, our results hold for any d and any n such that d < n -1, (i.e., even in the cases where n = 2 or d ≤ 1), which proves that Ahlfors regular domains are 'Greenian sets' in our adapted elliptic theory. 
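Likewise, the boundary estimates and the Green function bounds read, in display form (a restatement of (1.12)-(1.13), which hold for $B$ centered on $\Gamma$, $Lu=0$ in $2B$ and $Tu=0$ on $\Gamma\cap 2B$, and of (1.14)-(1.15), with $m(E)=\int_E w$):

```latex
% Restatement of (1.12)-(1.15); constants depend only on d, n, C_0, C_1 (and epsilon for C_epsilon).
\begin{gather*}
\sup_B u \le C\Bigl(\frac{1}{m(2B)}\int_{2B}|u(y)|^2\,w(y)\,dy\Bigr)^{1/2},
\qquad
\sup_{\theta B} u \le C\,\theta^{\alpha}\,\sup_B u \quad\text{for } \theta\in(0,1],
\\
0\le g(x,y)\le
\begin{cases}
C\,|x-y|^{1-d}, & 4|x-y|\ge\delta(y),\\[3pt]
\dfrac{C\,|x-y|^{2-n}}{w(y)}, & 2|x-y|\le\delta(y),\ n\ge 3,\\[3pt]
\dfrac{C_\epsilon}{w(y)}\Bigl(\dfrac{\delta(y)}{|x-y|}\Bigr)^{\epsilon}, & 2|x-y|\le\delta(y),\ n=2,
\end{cases}
\\
0\le g(x,y)\le C\,\frac{|x-y|^{2}}{m\bigl(B(x,|x-y|)\bigr)}\qquad\text{for } x,y\in\Omega,\ n\ge 3.
\end{gather*}
```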
Note that contrary to the codimension 1 case, the notion of the fundamental solution in R n is not accessible, since the distance to the boundary of Ω is an integral part of the definition of L. We use the Harnack inequality, the De Giorgi-Nash-Moser estimates, as well as a suitable version of the maximum principle, to solve the Dirichlet problem for continuous functions with compact support on Γ, and then to define harmonic measures ω x for x ∈ Ω (so that ´Γ gdω x is the value at x of the solution of the Dirichlet problem for g). Note that we do not need an analogue of the Wiener criterion (which normally guarantees that solutions with continuous data are continuous up to the boundary and allows one to define the harmonic measure), as we have already proved a stronger property, that solutions are Hölder continuous up to the boundary. Then, following the ideas of [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Section 1.3], we prove the following properties on the harmonic measure ω x . First, the non-degeneracy of the harmonic measure states that if B is a ball centered on Γ, (1.16) ω x (B ∩ Γ) ≥ C -1 whenever x ∈ Ω ∩ 1 2 B and (1.17) ω x (Γ \ B) ≥ C -1 whenever x ∈ Ω \ 2B, the constant C > 0 depending as previously on d, n, C 0 and C 1 . Next, let us recall that any boundary ball has a corkscrew point, that is for any ball B = B(x 0 , r) ⊂ R n centered on Γ, there exists ∆ B ∈ B such that δ(∆ B ) is bigger than ǫr, where ǫ > 0 depends only on d, n and C 0 . With this definition in mind, we compare the harmonic measure with the Green function: for any ball B of radius r centered on Γ, (1.18) C -1 r 1-d g(x, ∆ B ) ≤ ω x (B ∩ Γ) ≤ Cr 1-d g(x, ∆ B ) for any x ∈ Ω \ 2B and (1.19) C -1 r 1-d g(x, ∆ B ) ≤ ω x (Γ \ B) ≤ Cr 1-d g(x, ∆ B ) for any x ∈ Ω∩ 1 2 B which is far enough from ∆ B , say |x-∆ B | ≥ ǫr/2, where ǫ is the constant used to define ∆ B . The constant C > 0 in (1.18) and (1.19) depends again only on d, n, C 0 and C 1 . The estimates (1.18) and (1.19) can be seen as weak versions of the 'comparison principle', which deal only with the Green functions and the harmonic measure and which can be proven by using the specific properties of the latter objects. The inequalities (1.18) and (1.19) are essential for the proofs of the next three results. The first one is the doubling property of the harmonic measure, which guarantees that, if B is a ball centered on Γ, ω x (2B ∩ Γ) ≤ Cω x (B ∩ Γ) whenever x ∈ Ω \ 4B. It has an interesting counterpart: ω x (Γ \ B) ≤ Cω x (Γ \ 2B) whenever x ∈ Ω ∩ 1 2 B. The second one is the change-of-the-pole estimates, which can be stated as (1.20) C -1 ω ∆ B (E) ≤ ω x (E) ω x (Γ ∩ B) ≤ Cω ∆ B (E) when B is a ball centered on Γ, E ⊂ B ∩ Γ is a Borel set, and x ∈ Ω \ 2B. The last result is the comparison principle, that says that if u and v are positive weak solutions of Lu = Lv = 0 such that T u = T v = 0 on 2B ∩ Γ, where B is a ball centered on Γ, then u and v are comparable in B, i.e., (1.21) sup z∈B\Γ u(z) v(z) ≤ C inf z∈B\Γ u(z) v(z) . In each case, i.e., for the doubling property of the harmonic measure, the change of pole, or the comparison principle, the constant C > 0 depends only on d, n, C 0 and C 1 . It is difficult to survey a history of the subject that is so classical (in the co-dimension one case). 
In that setting, that is, in co-dimension one and reasonably nice geometry, e.g., of Lipschitz domains, the results have largely become folklore and we often follow the exposition in standard texts [GT], [HL], [Maz], [MZ], [START_REF] Stampacchia | Formes bilinéaires coercitives sur les ensembles convexes[END_REF], [GW], [CFMS]. The general order of development is inspired by [Ken]. Furthermore, let us point out that while the invention of a harmonic measure which serves the higher co-dimensional boundaries, which is associated to a linear PDE, and which is absolutely continuous with respect to the Lebesgue measure on reasonably nice sets, is the main focal point of our work, various versions of degenerate elliptic operators and weighted Sobolev spaces have of course appeared in the literature over the years. Some versions of some of the results listed above or similar ones can be found, e.g., in [A], [FKS], [Haj], [HaK], [HKM], [Kil], [JW]. However, the presentation here is fully self-contained, and since we did not rely on previous work, we hope to be forgiven for not providing a detailed review of the corresponding literature. Also, the context of the present paper often makes it possible to have much simpler proofs than a more general setting of not necessarily Ahlfors regular sets. It is perhaps worth pointing out that we work with homogeneous Sobolev spaces. Unfortunately, those are much less popular in the literature that their non-homogeneous counterparts, while they are more suitable for PDEs on unbounded domains. As outlined in [DFM], , we intend in subsequent publications to take stronger assumptions, both on the geometry of Γ and the choice of L, and prove that the harmonic measure defined here is absolutely continuous with respect to H d |Γ . For instance, we will assume that d is an integer and Γ is the graph of a Lipschitz function F : R d → R n-d , with a small enough Lipschitz constant. As for A, we will assume that A(x) = D(x) -n+d+1 I for x ∈ Ω, with (1.22) D(x) = ˆΓ |x -y| -d-α dH d (y) -1/α for some constant α > 0. Notice that because of (1.1), D(x) is equivalent to δ(x); when d = 1 we can also take A(x) = δ(x) -n+d+1 I, but when d ≥ 2 we do not know whether δ(x) is smooth enough to work. In (1.22), we could also replace H d with another Ahlfors-regular measure on Γ. With these additional assumptions we will prove that the harmonic measure described above is absolutely continuous with respect to H d |Γ , with a density which is a Muckenhoupt A ∞ weight. In other words, we shall establish an analogue of Dahlberg's result [Da] for domains with a higher co-dimensional boundary given by a Lipschitz graph with a small Lipschitz constant. It is not so clear what is the right condition for this in terms of A, but the authors still hope that a good condition on Γ is its uniform rectifiability. Notice that in remarkable contrast with the case of codimension 1, we do not state an additional quantitative connectedness condition on Ω, such as the Harnack chain condition in codimension 1; this is because such conditions are automatically satisfied when Γ is Ahlfors-regular with a large codimension. 
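For the record, the smoothed distance in (1.22) is, in display form (a restatement; by (1.1) it is comparable to $\delta(x)=\operatorname{dist}(x,\Gamma)$):

```latex
% Restatement of (1.22): the regularized distance used to build A(x) = D(x)^{-(n-d-1)} I.
D(x) \;=\; \biggl(\int_\Gamma |x-y|^{-d-\alpha}\,dH^d(y)\biggr)^{-1/\alpha},
\qquad x\in\Omega,\ \alpha>0.
```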
The present paper is aimed at giving a fairly pleasant general framework for studying a version of the harmonic measure in the context of Ahlfors-regular sets Γ of codimension larger than 1, but it will probably be interesting and hard to understand well the relations between the geometry of Γ, the regularity properties of A (which has to be linked to Γ through the distance function), and the regularity properties of the associated harmonic measure. Acknowledgment. This research was supported in part by Fondation Jacques Hadamard and by CNRS. The first author was supported in part by the ANR, programme blanc GE-OMETRYA ANR-12-BS01-0014. The second author was partially supported by the ANR project "HAB" no. ANR-12-BS01-0013. The third author was supported also by the Alfred P. Sloan Fellowship, the NSF INSPIRE Award DMS 1344235, NSF CAREER Award DMS 1220089. We would like to thank the Department of Mathematics at Université Paris-Sud, the Ecole des Mines, and the Mathematical Sciences Research Institute (NSF grant DMS 1440140) for warm hospitality. Finally, we would like to thank Alano Ancona for stimulating discussions at the early stages of the project and for sharing with us the results of his work. The Harnack chain condition and the doubling property We keep the same notation as in Section 1, concerning Γ ⊂ R n , a closed set that satisfies (1.1) for some d < n -1, Ω = R n \ Γ, then σ = H d |Γ , δ(z) = dist(z, Γ), and the weight w(z) = δ(z) d+1-n . Let us add the notion of measure. The measure m is defined on (Lebesgue-)measurable subset of R n by m(E) = ´E w(z)dz. We may write dm(z) for w(z)dz. Since 0 < w < +∞ a.e. in R n , m and the Lebesgue measure are mutually absolutely continuous, that is they have the same zero sets. Thus there is no need to specify the measure when using the expressions almost everywhere and almost every, both abbreviated a.e.. In the sequel of the article, C will denote a real number (usually big) that can vary from one line to another. The parameters which the constant C depends on are either obvious from context or recalled. Besides, the notation A ≈ B will be used to replace C -1 A ≤ B ≤ CA. This section is devoted to the proof of the very first geometric properties on the space Ω and the weight w. We will prove in particular that m is a doubling measure and Ω satisfies the Harnack chain condition. First, let us prove the Harnack chain condition we stated in Section 1. Lemma 2.1. Let Γ be a d-ADR set in R n , d < n -1, that is, assume that (1.1) is satisfied. Then there exists a constant c > 0, that depends only on C 0 , n, and d < n -1, such that for Λ ≥ 1 and x 1 , x 2 ∈ Ω such that dist(x i , Γ) ≥ r and |x -y| ≤ Λr, we can find two points y i ∈ B(x i , r/2) such that dist([y 1 , y 2 ], Γ) ≥ cΛ -d/(n-d-1) r. That is, there is a thick tube in Ω that connects the two B(x i , r/2). Proof. Indeed, suppose x 2 = x 1 , set ℓ = [x 1 , x 2 ] , and denote by P the vector hyperplane with a direction orthogonal to x 2x 1 . Let ε ∈ (0, 1) be small, to be chosen soon. We can find N ≥ C -1 ε 1-n points z j ∈ P ∩ B(0, r/2), such that |z jz k | ≥ 4εr for j = k. Set ℓ j = z j + ℓ, and suppose that dist(ℓ j , Γ) ≤ εr for all j. Then we can find points w j ∈ Γ such that dist(w j , ℓ j ) ≤ εr. Notice that the balls B j = B(w j , εr) are disjoint because dist(ℓ j , ℓ k ) ≥ 4εr, and by (1.1) (2.2) NC -1 0 (εr) d ≤ j σ(B j ) = σ j B j ≤ σ(B(w, 2r + |x 2 -x 1 |)) ≤ C 0 (2 + Λ) d r d where w is any of the w j . 
Thus ε 1-n ε d ≤ CC 2 0 Λ d (recall that Λ ≥ 1), a contradiction if we take ε ≤ cΛ -d/(n-d-1) , where c > 0 depends on C 0 too. Thus we can find j such that dist(ℓ j , Γ) ≥ εr, and the desired conclusion holds with y i = x i + z j . Then, we give estimates on the weight w. Lemma 2.3. There exists C > 0 such that (i) for any x ∈ R n and any r > 0 satisfying δ(x) ≥ 2r, (2.4) C -1 r n w(x) ≤ m(B(x, r)) = ˆB(x,r) w(z)dz ≤ Cr n w(x), (ii) for any x ∈ R n and any r > 0 satisfying δ(x) ≤ 2r, (2.5) C -1 r d+1 ≤ m(B(x, r)) = ˆB(x,r) w(z)dz ≤ Cr d+1 . Remark 2.6. In the above lemma, the estimates are different if δ(x) is bigger or smaller than 2r. Yet the critical ratio δ(x) r = 2 is not relevant: for any α > 0, we can show as well that (2.4) holds whenever δ(x) ≥ αr and (2.5) holds if δ(x) ≤ αr, with a constant C that depends on α. Indeed, we can replace 2 by α if we can prove that for any K > 1 there exists C > 0 such that for any x ∈ R n and r > 0 satisfying (2.7) K -1 r ≤ δ(x) ≤ Kr we have (2.8) C -1 r d+1 ≤ r n w(x) ≤ Cr d+1 . However, since w(x) = δ(x) d+1-n , (2.7) implies w(x) ≈ r d+1-n which in turn gives (2.8). Proof. First suppose that δ(x) ≥ 2r. Then for any z ∈ B(x, r), 1 2 δ(x) ≤ δ(z) ≤ 3 2 δ(x) and hence C -1 w(x) ≤ w(z) ≤ Cw(x); (2.4) follows. The lower bound in (2.5) is also fairly easy, just note that when δ(x) ≤ 2r, δ(z) ≤ 3r for any z ∈ B(x, r) and hence (2.9) m(B(x, r)) ≥ ˆB(x,r) (3r) 1+d-n dz ≥ C -1 r d+1 . Finally we check the upper bound in (2.5). We claim that for any y ∈ Γ and any r > 0, (2.10) m(B(y, r)) = ˆB(y,r) δ(ξ) d+1-n ≤ Cr d+1 . From the claim, let us prove the upper bound in (2.5). Let x ∈ R n and r > 0 be such that δ(x) ≤ 2r. Thus there exists y ∈ Γ such that B(x, r) ⊂ B(y, 3r) and thanks to (2.10) (2.11) m(B(x, r)) ≤ ˆB(y,3r) w(z)dz ≤ C(3r) d+1 ≤ Cr d+1 , which gives the upper bound in (2.5). Let us now prove the claim. By translation invariance, we can choose y = 0 ∈ Γ. Note that δ(ξ) ≤ r in the domain of integration. Let us evaluate the measure of the set Z k = ξ ∈ B(0, r) ; 2 -k-1 r < δ(ξ) ≤ 2 -k r . We use (1.1) to cover Γ ∩ B(0, 2r) with less than C2 kd balls B j of radius 2 -k r centered on Γ; then Z k is contained in the union of the 3B j , so |Z k | ≤ C2 kd (2 -k r) n and ´ξ∈Z k δ(ξ) 1+d-n dξ ≤ C2 kd (2 -k r) n (2 -k r) d+1-n = C2 -k r d+1 . We sum over k ≥ 0 and get (2.10). A consequence of Lemma 2.3 is that m is a doubling measure, that is for any ball B ⊂ R n , m(2B) ≤ Cm(B). Actually, we can prove the following stronger fact: for any x ∈ R n and any r > s > 0, there holds (2.12) C -1 r s d+1 ≤ m(B(x, r)) m(B(x, s)) ≤ C r s n . Three cases may happen. First, δ(x) ≥ 2r ≥ 2s and then with (2.4), (2.13) m(B(x, r)) m(B(x, s)) ≈ r n w(x) s n w(x) = r s n . Second, δ(x) ≤ 2s ≤ 2r. In this case, note that (2.5) implies (2.14) m(B(x, r)) m(B(x, s)) ≈ r d+1 s d+1 = r s d+1 . At last, 2s ≤ δ(x) ≤ 2r. Note that (2.4) and (2.5) yield (2.15) m(B(x, r)) m(B(x, s)) ≈ r d+1 s n w(x) . Yet, 2s ≤ δ(x) ≤ 2r implies C -1 r d+1-n ≤ w(x) ≤ Cs d+1-n and thus (2.16) C -1 r s d+1 ≤ m(B(x, r)) m(B(x, s)) ≤ C r s n . which finishes the proof of (2.12). One can see that the coefficients d+1 and n are optimal in (2.12). The fact that the volume of a ball with radius r is not equivalent to r α for some α > 0 will cause some difficulties. For instance, regardless of the choice of p, we cannot have a Sobolev embedding W ֒→ L p and we have to settle for the Sobolev-Poincaré inequality (1.11). 
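In display form, the weight estimates of Lemma 2.3 and the doubling bounds just obtained are (a restatement of (2.4), (2.5), and (2.12)):

```latex
% Restatement of the weight estimates (2.4)-(2.5) and the doubling property (2.12).
\begin{gather*}
C^{-1}\, r^{\,n}\, w(x) \le m(B(x,r)) \le C\, r^{\,n}\, w(x) \qquad \text{when } \delta(x)\ge 2r,
\\
C^{-1}\, r^{\,d+1} \le m(B(x,r)) \le C\, r^{\,d+1} \qquad \text{when } \delta(x)\le 2r,
\\
C^{-1}\Bigl(\frac{r}{s}\Bigr)^{d+1} \le \frac{m(B(x,r))}{m(B(x,s))} \le C\Bigl(\frac{r}{s}\Bigr)^{n}
\qquad \text{for } x\in\mathbb{R}^n,\ 0<s<r.
\end{gather*}
```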
Another consequence of Lemma 2.3 is that for any ball B ⊂ R n and any nonnegative function g ∈ L 1 loc (R n ), (2.17) 1 |B| ˆB g(z)dz ≤ C 1 m(B) ˆB g(z)w(z)dz. Indeed, the inequality (2.17) holds if we can prove that (2.18) m(B) |B| ≤ Cw(z) ∀z ∈ B. This latter fact can be proven as follows: if r is the radius of B, (2.19) m(B) |B| ≤ m(B(z, 2r)) |B| ≤ Cr -n m(B(z, 2r)) If δ(z) ≥ 4r, then Lemma 2.3 gives r -n m(B(z, 2r)) ≤ Cw(z). If δ(z) ≤ 4r, then w(z) ≥ C -1 r d+1-n and Lemma 2.3 entails r -n m(B(z, 2r)) ≤ Cr d+1-n ≤ Cw(z). In both cases, we obtain (2.18) and thus (2.17). We end the section with a corollary of Lemma 2.3. Lemma 2.20. The weight w is in the A 2 -Muckenhoupt class, i.e. there exists C > 0 such that for any ball B ⊂ R n , (2.21) B w(z)dz B w -1 (z)dz ≤ C. Proof. Let B = B(x, r). If δ(x) ≥ 2r, then for any z ∈ B(x, r), C -1 w(x) ≤ w(z) ≤ Cw(x) and thus ffl B w • ffl B w -1 ≤ Cw(x)w -1 (x) = C. If δ(x) ≤ 2r, then (2.5) implies that ffl B w ≤ Cr -n r d+1 = Cr d+1-n . Besides, for any z ∈ B(x, r), δ(z) ≤ 3r and hence w -1 (z) ≤ Cr n-d-1 . It follows that if δ(x) ≤ 2r, ffl B w • ffl B w -1 ≤ C. The assertion (2.21) follows. Traces The weighted Sobolev space W = Ẇ 1,2 w (Ω) and H = Ḣ1/2 (Γ) are defined as in Section 1 (see (1.4), (1.5)). Let us give a precison. Any u ∈ W has a distributional derivative in Ω that belongs to L 2 (Ω, w), that is there exists a vector valued function v ∈ L 2 (Ω, w) such that for any ϕ ∈ C ∞ 0 (Ω, R n ) (3.1) ˆΩ v • ϕ = -ˆΩ u div ϕ. This definition make sense since v ∈ L 2 (Ω, w) ⊂ L 1 loc (Ω). For the proof of the latter inclusion, use for instance Cauchy-Schwarz inequality and (2.17). The aim of the section is to state and prove a trace theorem. But for the moment, let us keep discussing about the space W . We say that u is absolutely continuous on lines in Ω if there exists ū which coincides with u a.e. such that for almost every line ℓ (for the usual invariant measure on the Grassman manifold, but we can also say, given any choice of direction v and and a vector hyperplane plane P transverse to v, for the line x + Rv for almost every x ∈ P ), we have the following properties. First, the restriction of ū to ℓ ∩ Ω (which makes sense, for a.e. line ℓ, and is measurable, by Fubini) is absolutely continuous, which means that it is differentiable almost everywhere on ℓ ∩ Ω and is the indefinite integral of its derivative on each component of ℓ ∩ Ω. By the natural identification, the derivative in question is obtained from the distributional gradient of u. Lemma 3.2. Every u ∈ W is absolutely continuous on lines in Ω. Proof. This lemma can be seen as a consequence of [START_REF] Maz | Sobolev spaces with applications to elliptic partial differential equations[END_REF]Theorem 1.1.3/1] since the absolute continuity on lines is a local property and, thanks to (2.17), W ⊂ {u ∈ L 1 loc (Ω), ∇u ∈ L 2 loc (Ω)}. Yet, the proof of Lemma 3.2 is classical: since the property is local, it is enough to check the property on lines parallel to a fixed vector e, and when Ω is the product of n intervals, one of which is parallel to e. This last amounts to using the definition of the distributional gradient, testing on product functions, and applying Fubini. In addition, the derivative of u on almost every line ℓ of direction e coincides with ∇u • e almost everywhere on ℓ. Lemma 3.3. 
We have the following equality of spaces (3.4) W = {u ∈ L 1 loc (R n ), ∇u ∈ L 2 (R n , w)}, where the derivative of u is taken in the sense of distribution in R n , that is for any ϕ ∈ C ∞ 0 (R n , R n ), ˆ∇u • ϕ = -ˆu div ϕ. Proof. Here and in the sequel, we will constantly use the fact that with Ω = R n \ Γ and because (1.1) holds with d < n -1, (3.5) almost every line ℓ is contained in Ω. Let us recall that it means that given any choice of direction v and a vector hyperplane P transverse to v, the line x + Rv ⊂ Ω for almost every x ∈ P . In particular, for almost every (x, y) ∈ (R n ) 2 , there is a unique line going through x and y and this line is included in Γ. Lemma 3.2 and (3.5) implies that u ∈ W is actually absolutely continuous on lines in R n , i.e. any u ∈ W (possibly modified on a set of zero measure) is absolutely continuous on almost every line ℓ ⊂ R n . As we said before, ∇u = (∂ 1 u, . . . , ∂ n u), the distributional gradient of u in Ω, equals the 'classical' gradient of u defined in the following way. If e 1 = (1, 0, . . . , 0) is the first coordinate vector, then ∂ 1 u(y, z) is the derivative at the point y of the function u |(0,z)+Re 1 , the latter quantity being defined for almost every (y, z) ∈ R × R n-1 because u is absolutely continuous on lines in R n . If i > 1, ∂ i u(x) is defined in a similar way. As a consequence, for almost any (y, z) ∈ R n ×R n , u(z)-u(y) = ´1 0 (z-y)•∇u(y+t(z-y))dt and hence, (3.6) |u(y) -u(z)| ≤ ˆ1 0 |z -y||∇u(y + t(z -y))|dt. Let us integrate this for y in a ball B. We get that for almost every z ∈ R n , (3.7) y∈B |u(y) -u(z)|dy ≤ y∈B ˆ1 0 |z -y||∇u(y + t(z -y))|dt. Let us further restrict to the case z ∈ B = B(x, r); the change of variable ξ = z + t(y -z) shows that y∈B |u(y) -u(z)|dy = ˆ1 0 y∈B |y -z||∇u(z + t(y -z))|dydt = ˆ1 0 1 |B| ˆξ∈B(z+t(x-z),tr) |z -ξ| t |∇u(ξ)| dξ t n dt = ˆξ∈B |∇u(ξ)| |z -ξ| |B(z, r)| dξ ˆ1 |z-ξ|/2r dt t n+1 ≤ 2 n |B(0, 1)| -1 ˆξ∈B |∇u(ξ)||z -ξ| 1-n dξ, (3.8) where the last but one line is due to the fact that ξ ∈ B(z + t(x -z), tr) is equivalent to |ξ -z -t(x -z)| ≤ tr, which forces |ξ -z| ≤ tr + t|x -z| ≤ 2rt. Therefore, for almost any z ∈ B, (3.9) y∈B |u(y) -u(z)|dy ≤ C ˆξ∈B |∇u(ξ)||z -ξ| 1-n dξ, where C depends on n, but not on r, u, or z. With a second integration on z ∈ B = B(x, r), we obtain (3.10) ball B ⊂ R n , that is u ∈ L 1 loc (R n ). Since L 1 loc (R n ) ⊂ L 1 loc (Ω), we just proved that W = {u ∈ L 1 loc (R n ), ∇u ∈ L 2 (Ω, w)}, where ∇u = (∂ 1 u, . . . , ∂ n u) is distributional gradient on Ω. Let u ∈ W . Since Γ has zero measure, ∇u ∈ L 2 (R n , w ) and thus it suffices to check that u has actually a distributional derivative in R n and that this derivative equals ∇u. However, the latter fact is a simple consequence of [START_REF] Maz | Sobolev spaces with applications to elliptic partial differential equations[END_REF]Theorem 1.1.3/2], because u is absolutely continuous on lines in R n . The proof of Maz'ya's result is basically the following: for any i ∈ {1, . . . , n} and any φ ∈ C ∞ 0 (R n ), an integration by part gives ´u∂ i φ = -´(∂ i u)φ. The two integrals in the latter equality make sense since both u and ∂ i u are in L 1 loc (R n ); the integration by part is possible because u is absolutely continuous on almost every line. Remark 3.12. An important by-product of the proof is that Lemma 3.2 can be improved into: for any u ∈ W (possibly modified on a set of zero measure) and almost every line ℓ ⊂ R n , u |ℓ is absolutely continuous. This property will be referred to as (ACL). 
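Spelled out, (3.9) and the averaged estimate (3.10) that is used again in (3.24) below read as follows ($B=B(x,r)$, $C$ depends only on $n$):

```latex
% Restatement of (3.9) and (3.10); B = B(x,r), C depends only on n.
\begin{gather*}
\frac{1}{|B|}\int_{B}|u(y)-u(z)|\,dy \;\le\; C\int_{B}|\nabla u(\xi)|\,|z-\xi|^{1-n}\,d\xi
\qquad\text{for a.e. } z\in B,
\\
\frac{1}{|B|^{2}}\int_{B}\int_{B}|u(y)-u(z)|\,dy\,dz \;\le\; C\,r^{1-n}\int_{B}|\nabla u(\xi)|\,d\xi \;<\;+\infty,
\end{gather*}
```

so that, in particular, $u\in L^1_{\mathrm{loc}}(\mathbb{R}^n)$.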
Proof. First, we want bounds on ∇u near x ∈ Γ, so we set (3.16) M r (x) = B(x,r) |∇u| 2 and estimate ´Γ M r (x)dσ(x). We cover Γ by balls B j = B(x j , r) centered on Γ such that the 2B j = B(x j , 2r) have bounded overlap (we could even make the B(x j , r/5) disjoint), and notice that for x ∈ B j , (3.17) M r (x) ≤ Cr -n ˆ2B j |∇u| 2 . We sum and get that ˆΓ M r (x)dσ(x) ≤ j ˆBj M r (x)dσ(x) ≤ C j σ(B j ) sup x∈B j M r (x) ≤ C j σ(B j )r -n ˆ2B j |∇u| 2 ≤ Cr d-n j ˆ2B j |∇u| 2 ≤ Cr d-n ˆΓ(2r) |∇u| 2 (3.18) because the 2B j have bounded overlap and where Γ(2r) denotes a 2r-neighborhood of Γ. Next set (3.19) N(x) = k≥0 2 -k M 2 -k (x); then ˆΓ N(x)dσ(x) = k≥0 2 -k ˆΓ M 2 -k (x)dσ(x) ≤ C k≥0 2 k(n-d-1) ˆΓ(2 -k+1 ) |∇u(z)| 2 dz ≤ C ˆΓ(2) |∇u(z)| 2 a(z)dz, (3.20) where a(z) = k≥0 2 k(n-d-1) 1 z∈Γ(2 -k+1 ) . For a given z ∈ Ω, z ∈ Γ(2 -k+1 ) only for k so small that δ(z) ≤ 2 -k+1 . The largest values of 2 k(n-d-1) are for k as large as possible, when 2 -k ≈ δ(z); thus a(z) ≤ Cδ(z) -n+d+1 = w(z), and (3.21) ˆΓ N(x)dσ(x) ≤ C ˆΓ(2) |∇u(z)| 2 w(z)dz. Our trace function g = T u will be defined as the limit of the functions g r , where (3.22) g r (x) = z∈B(x,r) u(z)dz. Our aim is to use the estimates established in the proof of Lemma 3.3. Notice that for x ∈ Γ and r > 0, z∈B(x,r) |u(z) -g r (x)|dz = z∈B(x,r) u(z) - ξ∈B(x,r) u(y)dy dz ≤ z∈B(x,r) y∈B(x,r) u(z)u(y) dy dz. (3.23) By (3.10), z∈B(x,r) |u(z) -g r (x)|dz ≤ z∈B(x,r) y∈B(x,r) u(z) -u(y) dy dz ≤ Cr -n+1 ˆξ∈B(x,r) |∇u(ξ)|dξ. (3.24) Thus for r/10 ≤ s ≤ r, |g s (x) -g r (x)| = z∈B(x,s) u(z)dz -g r (x) ≤ z∈B(x,s) |u(z) -g r (x)|dz ≤ Cr ξ∈B(x,r) |∇u(ξ)|dξ ≤ CrM r (x) 1/2 . (3.25) Set ∆ r (x) = sup r/10≤s≤r |g s (x) -g r (x)|; we just proved that ∆ r (x) ≤ CrM r (x) 1/2 . Let α ∈ (0, 1/2) be given. If N(x) < +∞, we get that k≥0 2 αk ∆ 2 -k (x) ≤ C k≥0 2 αk 2 -k M 2 -k (x) 1/2 ≤ C k≥0 2 -k M 2 -k (x) 1/2 k≥0 2 2αk 2 -k 1/2 ≤ CN(x) 1/2 < +∞. (3.26) Therefore, k≥0 ∆ 2 -k-2 (x) converges (rather fast), and since (3.21) implies that N(x) < +∞ for σ-almost every x ∈ Γ, it follows that there exists (3.27) g(x) = lim r→0 g r (x) for σ-almost every x ∈ Γ. In addition, we may integrate (the proof of) (3.26) and get that for 2 -j-1 < r ≤ 2 -j , g -g r 2 L 2 (σ) = ˆΓ |g(x) -g r (x)| 2 dσ(x) ≤ ˆΓ k≥j ∆ 2 -k (x) 2 dσ(x) ≤ C2 -2αj ˆΓ k≥j 2 αj ∆ 2 -k (x) 2 dσ(x) ≤ Cr 2α ˆΓ N(x)dσ(x) ≤ Cr 2α u 2 W (3.28) by (3.27) and the definition of ∆ r (x), then (3.26) and (3.21). Thus g r converges also (rather fast) to g in L 2 . Let us make an additional remark. Fix r > 0 and α ∈ (0, 1/2). For any ball B centered on Γ, (3.29) g L 1 (B,σ) ≤ C B g -g r L 2 (σ) + g r L 1 (B,σ) by Hölder's inequality. The first term is bounded with (3.28). Use (1.1) and Fubini's theorem to bound the second one by C r g L 1 ( B) , where B is a large ball (that depends on r and contains B). As a consequence, (3.30) for any u ∈ W , g = T u ∈ L 1 loc (σ). This completes the definition of the trace g = T (u). We announced (as a Lebesgue property) that (3.31) lim r→0 B(x,r) |u(y) -T u(x)|dy for σ-almost every x ∈ Γ, and indeed B(x,r) |u(y) -T u(x)|dy = B(x,r) |u(y) -g(x)|dy ≤ |g(x) -g r (x)| + B(x,r) |u(y) -g r (x)| ≤ |g(x) -g r (x)| + Cr B(x,3r) |∇u| ≤ |g(x) -g r (x)| + CrM 4r (x) 1/2 (3.32) by (3.24) and the second part of (3.25). The first part tends to 0 for σ-almost every x ∈ Γ, by (3.27), and the second part tends to 0 as well, because N(x) < +∞ almost everywhere and by the definition (3.19). 
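In summary, the trace just constructed is given by the limit of averages in (3.22) and (3.27), and it satisfies the Lebesgue-point property announced in (3.31):

```latex
% Restatement of (3.22), (3.27) and (3.31): definition and Lebesgue property of the trace.
\begin{gather*}
T u(x) \;=\; g(x) \;=\; \lim_{r\to 0}\;\frac{1}{|B(x,r)|}\int_{B(x,r)} u(z)\,dz
\qquad\text{for $\sigma$-a.e. } x\in\Gamma,
\\
\lim_{r\to 0}\;\frac{1}{|B(x,r)|}\int_{B(x,r)} \bigl|u(y)-Tu(x)\bigr|\,dy \;=\; 0
\qquad\text{for $\sigma$-a.e. } x\in\Gamma.
\end{gather*}
```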
Next we show that g = T u lies in the Sobolev space H = H 1/2 (Γ), i.e., that (3.33) g 2 H = ˆΓ ˆΓ |g(x) -g(y)| 2 |x -y| d+1 dσ(x)dσ(y) < +∞. The simplest will be to prove uniform estimates on the g r , and then go to the limit. Let us fix r > 0 and consider the integral (3.34) I(r) = ˆx∈Γ ˆy∈Γ;|y-x|≥r |g r (x) -g r (y)| 2 |x -y| d+1 dσ(x)dσ(y). Set Z k (r) = (x, y) ∈ Γ×Γ ; 2 k r ≤ |y-x| < 2 k+1 r and I k (r) = ´´Z k (r) |gr(x)-gr(y)| 2 |x-y| d+1 dσ(x)dσ(y). Thus I(r) = k≥0 I k (r) and (3.35) I k (r) ≤ (2 k r) -d-1 ˆˆZ k (r) |g r (x) -g r (y)| 2 dσ(x)dσ(y). Fix k ≥ 0, set ρ = 2 k+1 r, and observe that for (x, y) ∈ Z k (r), |g r (x) -g ρ (y)| = z∈B(x,r) ξ∈B(y,ρ) [u(z) -u(ξ)]dzdξ ≤ z∈B(x,r) ξ∈B(y,ρ) |u(z) -u(ξ)|dξdz ≤ 3 n z∈B(x,r) ξ∈B(z,3ρ) |u(z) -u(ξ)|dξ dz ≤ Cρ n z∈B(x,r) ζ∈B(z,3ρ) |∇u(ζ)||z -ζ| 1-n dζdz (3.36) because B(y, ρ) ⊂ B(z, 3ρ ) and by (3.9). We apply Cauchy-Schwarz, with an extra bit |z -ζ| -α , where α > 0 will be taken small, and which will be useful for convergence later |g r (x) -g ρ (y)| 2 ≤ Cρ 2n z∈B(x,r) ζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α z∈B(x,r) ζ∈B(z,3ρ) |z -ζ| 1-n-α ≤ Cρ n+1-α z∈B(x,r) ζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α dζdz. (3.37) The same computation, with g r (y), yields (3.38) |g r (y) -g ρ (y)| 2 ≤ Cρ n+1-α z∈B(y,r) ζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α dζdz. We add the two and get an estimate for |g r (x)g r (y)| 2 , which we can integrate to get that I k (r) ≤ Cρ -d-1 ρ n+1-α ˆˆ(x,y)∈Z k (r) z∈B(x,r) ζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α dζdzdσ(x)dσ(y) ≤ Cρ -d-α r -n ˆˆ(x,y)∈Z k (r) ˆz∈B(x,r) ˆζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α dζdzdσ(x)dσ(y) (3.39) by (3.35), (3.37), and (3.38), and where we can drop the part that comes from (3.38) by symmetry. We integrate in y ∈ Γ such that 2 k r ≤ |x -y| ≤ 2 k+1 r and get that I k (r) ≤ Cρ -α r -n ˆx∈Γ ˆz∈B(x,r) ˆζ∈B(z,3ρ) |∇u(ζ)| 2 |z -ζ| 1-n+α dζdzdσ(x) ≤ C ˆζ∈Ω |∇u(ζ)| 2 h k (ζ)dζ, (3.40) with (3.41) h k (ζ) = ρ -α r -n ˆx∈Γ ˆz∈B(x,r)∩B(ζ,3ρ) |z -ζ| 1-n+α dzdσ(x). We start with the contribution h 0 k (ζ) of the region where |x -ζ| ≥ 2r, where the computation is simpler because |z -ζ| ≥ 1 2 |x -ζ| there. We get that h 0 k (ζ) ≤ Cρ -α r -n ˆx∈Γ ˆz∈B(x,r)∩B(ζ,3ρ) |x -ζ| 1-n+α dzdσ(x) ≤ Cρ -α ˆx∈Γ∩B(ζ,4ρ) |x -ζ| 1-n+α dσ(x). (3.42) With ζ, r, and ρ fixed, h 0 k (ζ) vanishes unless δ(ζ) = dist(ζ, Γ) < 4ρ. The region where |x -ζ| is of the order of 2 m δ(ζ), m ≥ 0, contributes less than C(2 m δ(ζ)) d+1-n+α to the integral (because σ is Ahlfors-regular). If α is chosen small enough, the exponent is still negative, the largest contribution comes from m = 0, and h 0 k (ζ) ≤ Cρ -α δ(ζ) d+1-n+α . Recall that ρ = 2 k r, and k is such that δ(ζ) < 4ρ; we sum over k and get that (3.43) k h 0 k (ζ) ≤ C k≥0 ; δ(ζ)<4ρ ρ -α δ(ζ) d+1-n+α ≤ Cδ(ζ) d+1-n , because this time the smallest values of ρ give the largest contributions. We are left with (3.44) h 1 k (ζ) = h k (ζ) -h 0 k (ζ) = ρ -α r -n ˆx∈Γ∩B(ζ,2r) ˆz∈B(x,r)∩B(ζ,3ρ) |z -ζ| 1-n+α dzdσ(x). Notice that |z -ζ| ≤ |z -x| + |x -ζ| ≤ 3r; we use the local Ahlfors-regularity to get rid of the integral on Γ, and get that (3.45) h 1 k (ζ) ≤ Cρ -α r -n r d ˆz∈B(ζ,3r) |z -ζ| 1-n+α dz ≤ Cρ -α r d+1-n+α . We sum over k and get that k h for u ∈ W , x ∈ Γ, and r > 0 such that T u = 0 on Γ ∩ B(x, r). 1 k (ζ) ≤ Cr d+1-n ≤ Cδ(ζ) I(r) = k I k (r) ≤ C k ˆζ∈Ω |∇u(ζ)| 2 h k (ζ)dζ ≤ C ˆζ∈Ω |∇u(ζ)| 2 δ(ζ) Proof. To simplify the notation we assume that x = 0. We should of course observe that the right-hand side of (4.2) is finite. 
Indeed, recall that Lemma 2.3 gives (4.3) ˆξ∈B(0,r) w(ξ)dξ ≤ Cr 1+d ; then by Cauchy-Schwarz r -d ˆξ∈B(0,r) |∇u(ξ)|w(ξ)dξ ≤ r -d ˆξ∈B(0,r) |∇u(ξ)| 2 w(ξ) 1/2 ˆξ∈B(0,r) w(ξ) 1/2 ≤ r 1-d 2 ˆξ∈B(0,r) |∇u(ξ)| 2 w(ξ)dξ 1/2 . (4.4) The homogeneity still looks a little weird because of the weight (but things become simpler if we think that δ(ξ) is of the order of r), but at least the right-hand side is finite because u ∈ W . Turning to the proof of (4.2), to avoid complications with the fact that (3.6) and (3.7) do not necessarily hold σ-almost everywhere on Γ, let us use the g s again. We first prove that for s < r small, For x fixed, we can still prove as in (3.9) that (4.7) y∈B(0,r) |u(y) -u(z)|dy ≤ C ˆB(0,r) |∇u(ξ)||z -ξ| 1-n dξ (for x ∈ Γ ∩ B(0, r/2) and z ∈ B(x, s), there is even a bilipschitz change of variable that sends z to 0 and maps B(0, r) to itself). We are left with (4.8) I(s) ≤ C x∈Γ∩B(0,r/2) z∈B(x,s) ˆξ∈B(0,r) |∇u(ξ)||z -ξ| 1-n dξdzdσ(x). The main piece of the integral will again be called I 0 (s), where we integrate in the region where |ξ -x| ≥ 2s and hence |z -ξ| 1-n ≤ 2 n |x -ξ| 1-n . Thus I 0 (s) ≤ C ˆξ∈B(0,r) x∈Γ∩B(0,r/2) z∈B(x,s) |∇u(ξ)||x -ξ| 1-n dzdσ(x)dξ ≤ Cr -d ˆξ∈B(0,r) ˆx∈Γ∩B(0,r/2) |∇u(ξ)||x -ξ| 1-n dσ(x)dξ ≤ Cr -d ˆξ∈B(0,r)\Γ |∇u(ξ)|h(ξ)dξ, (4.9) where for ξ ∈ B(0, r) \ Γ we set (4.10) h(ξ) = ˆx∈Γ∩B(0,r/2) |x -ξ| 1-n dσ(x) ≤ Cδ(ξ) 1-n+d where for the last inequality we cut the domain of integration into pieces where |x -ξ| ≈ 2 m δ(ξ) and use (1.1). For the other piece of (4.8) where |ξ -x| < 2s, we get the integral I 1 (s) ≤ Cr -d s -n ˆξ∈B(0,r) ˆx∈Γ∩B(0,r/2)∩B(ξ,2s) ˆz∈B(x,s) |∇u(ξ)||z -ξ| 1-n dξdzdσ(x) ≤ Cr -d s d-n ˆξ∈B(0,r);δ(ξ)≤2s ˆz∈B(ξ,3s) |∇u(ξ)||z -ξ| 1-n dξdz ≤ Cr -d s 1+d-n ˆξ∈B(0,r);δ(ξ)≤2s |∇u(ξ)|dξ ≤ Cr -d ˆξ∈B(0,r) |∇u(ξ)|δ(ξ) 1+d-n dξ. (4.11) Altogether (4.12) I(s) ≤ Cr -d ˆξ∈B(0,r) |∇u(ξ)|δ(ξ) 1+d-n dξ, which is (4.5). When s tends to 0, g s (x) tends to g(x) = T u(x) = 0 for σ-almost every x ∈ Γ ∩ B(0, r/2), and we get (4.2) by Fatou. . Lemma 4.13. Let Γ be a d-ADR set in R n , d < n -1, that is, assume that (1.1) is satisfied. Let p ∈ 1, 2n n-2 (or p ∈ [1, +∞) if n = 2). Then for any u ∈ W , x ∈ R n and r > 0 (4.14) 1 m(B(x, r)) ˆB(x,r) u(y) -u B(x, Proof. In the proof, we will use dm(z) for w(z)dz and hence, for instance ´B u dm denotes ´B u(z)w(z)dz. We start with the following inequality. Let p ∈ [1, +∞). If u ∈ L p loc (R n , w) ⊂ L 1 loc (R n ), then for any ball B, (4.16) ˆB u - B u p dm ≈ ˆB u(z) - 1 m(B) ˆB u dm p dm. First we bound the left-hand side. We introduce m(B) -1 ´B u dm inside the absolute values and then use the triangle inequality: ˆB u - B u p dm ≤ C ˆB u(z) - 1 m(B) ˆB u dm p dm + Cm(B) B u - 1 m(B) ˆB u dm p ≤ C ˆB u(z) - 1 m(B) ˆB u dm p dm + C m(B) |B| ˆB u - 1 m(B) ˆB u dm p ≤ C ˆB u(z) - 1 m(B) ˆB u dm p dm, (4.17) where the last line is due to (2.17). The reverse estimate is quite immediate (4.18) which finishes the proof of (4.16). ˆB u - 1 m(B) ˆB u dm p dm ≤ C ˆB u(z) - B u p dm + Cm(B) 1 m(B) ˆB u dm - B u p ≤ C ˆB u(z) - B u p dm + Cm(B) 1 m(B) ˆB u - B u dm p ≤ C ˆB u(z) - B u p dm, In the sequel of the proof, we write u B for m(B) -1 ´B u dm. Thanks to (4.16), it suffices to prove (4.14) only for this particular choice of u B . We now want to prove a (1,1) Poincaré inequality, that is (4.19) ˆB |u(z) -u B | w(z)dz ≤ Cr ˆB |∇u(z)|w(z)dz. for any u ∈ W and any ball B ⊂ R n of radius r. In particular, u ∈ L 1 loc (R n , w). Let B ⊂ R n of I = ˆB(ξ,r) |z -ξ| 1-n w(z)dz ≤ rw(ξ). 
First, note that if δ(ξ) ≥ 2r, then w(z) is equivalent to w(ξ) for all z ∈ B(x, r). Thus I ≤ Cw(ξ) ´B(ξ,r) |z -ξ| 1-n dz ≤ Crw(ξ). It remains to prove the case δ(ξ) < 2r. We split I into I 1 + I 2 where, for I 1 , the domain of integration is restrained to B(ξ, δ(ξ)/2). For any z ∈ B(ξ, δ(ξ)/2), we have w(z) ≤ Cw(ξ) and thus (4.23) I 1 ≤ Cw(ξ) ˆB(ξ,δ(ξ)/2) |z -ξ| 1-n dz ≤ Cw(ξ)δ(ξ) ≤ Crw(ξ). It remains to bound I 2 . In order to do it, we decompose the remaining domain into annuli C j (ξ) := {z ∈ R n , 2 j-1 δ(ξ) ≤ |ξ -z| ≤ 2 j δ(ξ)}. We write κ for the smallest integer bigger than log 2 (r/δ(ξ)), which is the highest value for which C κ ∩ B(ξ, r) is non-empty. We have I 2 ≤ C κ j=0 2 j(1-n) δ(ξ) 1-n ˆCj (ξ) w(z)dz ≤ C κ j=0 2 j(1-n) δ(ξ) 1-n m(B(ξ, 2 j δ(ξ))). (4.24) The ball B(ξ, 2 j δ(ξ)) is close to Γ and thus Lemma 2.3 gives that the quantity m(B(ξ, 2 j δ(ξ))) is bounded, up to a constant, by 2 j(d+1) δ(x) d+1 . We deduce, since 2 + dn ≤ 1, I 2 ≤ C κ j=0 2 j(2+d-n) δ(ξ) 2+d-n ≤ Cδ(ξ) 2+d-n κ j=0 2 j ≤ Cδ(ξ) 2+d-n r δ(ξ) ≤ Crδ(ξ) 1+d-n = Crw(ξ), (4.25) which ends the proof of (4.22) and thus also the one of the Poincaré inequality (4.19). Now we want to establish (4.14). The quickest way to do it is to use some results of Haj lasz and Koskela. We say that (u, g) forms a Poincaré pair if u is in L 1 loc (R n , w), g is positive and measurable and for any ball B ⊂ R n of radius r, we have (4.26) m(B) -1 ˆB |u(z) -u B |dm(z) ≤ Crm(B) -1 ˆB g dm(z). In this context, Theorem 5.1 (and Corollary 9.8) in [HaK] states that the Poincaré inequality (4.26) can be improved into a Sobolev-Poincaré inequality. More precisely, if s is such that, for any ball B 0 of radius r 0 , any x ∈ B 0 and any r ≤ r 0 , (4.27) m(B(x, r)) m(B 0 ) ≥ C -1 r r 0 s then (4.26) implies for any 1 < q < s (4.28) m(B) -1 ˆB |u(z) -u B | q * dm(z) 1 q * ≤ Cr m(B) -1 ˆB g q dm(z) 1 q where q * = qs s-q and B is a ball of radius r. Combined with Hölder's inequality, we get (4.29) m(B) -1 ˆB |u(z) -u B | p dm(z) 1 p ≤ Cr m(B) -1 ˆB g 2 dm(z) 1 2 for any p ∈ [1, 2s/(s -2)] if s > 2 or any p < +∞ if s ≤ 2. We will use the result of Haj lasz and Koskela with g = |∇u|. We need to check the assumptions of their result. The bound (4.26) is exactly (4.19) and we already proved it. The second and last thing we need to verify is that (4.27) holds with s = n. This fact is an easy consequence of (2.12). Indeed, if B 0 is a ball of radius r 0 , x ∈ B 0 and r ≤ r 0 (4.30) m(B(x, r)) m(B 0 ) ≥ m(B(x, r)) m(B(x, 2r 0 ) ) . Yet, (2.12) implies that m(B(x,r)) m(B(x,2r 0 )) is bounded from below by C -1 ( r 2r 0 ) n , that is C -1 ( r r 0 ) n . Then (4.31) m(B(x, r)) m(B 0 ) ≥ C -1 r r 0 n , which is the desired conclusion. We deduce that (4.29) holds with g = |∇u| and for any p ∈ [1, 2n n-2 ] (1 ≤ p < +∞ if n = 2), which is exactly (4.14). To finish to prove the lemma, it remains to establish (4.15). Let B = B(x, r) be a ball centered on Γ. However, since x ∈ Γ, (2.5) entails that m(B) is equivalent to r d+1 . Thus, thanks to (4.14) and Lemma 4.1, We conclude thanks to (4.14) and the doubling property (2.12). r -d-1 ˆB |u(z)| p dm(z) 1 p ≤ C m(B) -1 ˆB u(z) - B u p dm(z) 1 p + C B |u(z)|dz ≤ Cr r -d-1 ˆB |∇u| 2 dm(z) Completeness and density of smooth functions In later sections we shall work with various dense classes. We prepare the job in this section, with a little bit of work on function spaces and approximation arguments. 
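Before going into the details, here is a brief reader's road map of the section (ours, not part of the original text), written as a LaTeX list:

\begin{itemize}
\item Lemma 5.1: the quotient space $\dot W = W/\mathbb{R}$, equipped with $\|\cdot\|_W$, is complete.
\item Lemma 5.9: $W_0 = \{u \in W :\ Tu = 0\}$ and the local spaces $W_{0,B}$ are Hilbert spaces for $\langle u, v \rangle_W = \int_\Omega \nabla u \cdot \nabla v\, w\, dz$.
\item Lemmas 5.21--5.30: mollification behaves well on $W$, and $W_0$ is the completion of $C^\infty_0(\Omega)$ for the norm $\|\cdot\|_W$.
\item Lemma 5.45 (case $d > 1$): the completion of $C^\infty_0(\mathbb{R}^n)$ is $\{u \in W :\ u_0 = 0\}$, where $u_0$ is the limit of the averages of $u$ on large balls centered on $\Gamma$.
\item Lemma 5.64 (case $d \le 1$): every $u \in W$ is a limit, in $L^1_{\mathrm{loc}}(\mathbb{R}^n)$ and for the semi-norm $\|\cdot\|_W$, of functions of $C^\infty_0(\mathbb{R}^n)$.
\end{itemize}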
Most results in this section are basically unsurprising, except perhaps the fact that when d ≤ 1, the test functions are dense in W (with no decay condition at infinity). Let Ẇ be the factor space W/R, equipped with the norm • W . The elements of Ẇ are classes u = {u + c} c∈R , where u ∈ W . Lemma 5.1. The space Ẇ is complete. In particular, if a sequence of elements of W , {v k } ∞ k=1 , and u ∈ W are such that v ku W → 0 as k → 0, then there exist constants Let ( uk ) k∈N be a Cauchy sequence in Ẇ . We need to show that (i) for every sequence c k ∈ R such that v k -c k → u in L 1 loc (R n ). Proof. (v k ) k∈N in W , with v k ∈ uk for k ∈ N, there exists u ∈ W and (c k ) k∈N such that v k -c k → u in L 1 loc (R n ) and (5.2) lim k→∞ v k -u W = 0; (ii) if u and u ′ are such that there exist (v k ) k∈N and (v ′ k ) k∈N such that v k , v ′ k ∈ uk for all k ∈ N and (5.3) lim k→∞ v k -u W = lim k→∞ v ′ k -u ′ W = 0, then u′ = u. First assume that (i) is true and let us prove (ii). Let u, u ′ , (v k ) k∈N and (v ′ k ) k∈N be such that v k , v ′ k ∈ uk for any k ∈ N and (5.3) holds. Then the sequence (∇v k -∇v ′ k ) k∈N converges in L 2 (Ω, w) to ∇(uu ′ ) on one hand and is constant equal to 0 on the other hand. Thus ∇(uu ′ ) = 0 and u and u ′ differ only by a constant, hence u′ = u. Now we prove (i). By translation invariance, we may assume that 0 ∈ Γ. Let the v k ∈ uk be given, and choose c k = ffl B(0,1) v k . We want to show that v kc k converges in L 1 loc (R n ). Set B j = B(0, 2 j ) for j ≥ 0; let us check that for f ∈ W and j ≥ 0, (5.4) B j f - B 0 f ≤ C2 (n+1)j f W . Set m j = ffl B j f ; observe that B j |f -m j | ≤ 1 m(B j ) ˆBj |f (x) -m j |w(x)dx ≤ C2 j m(B j ) -1/2 ˆBj |∇f (y)| 2 w(y)dy 1/2 (5.5) ≤ C2 j m(B j ) -1/2 ||f || W ≤ C2 j ||f || W by (2.17), the Poincaré inequality (4.14) with p = 1, and a brutal estimate using (2.5), our assumption that 0 ∈ Γ, and the fact that B j ⊃ B 0 . In addition, (5.6) |m 0 -m j | = B 0 f -m j ≤ B 0 |f -m j | ≤ 2 jn B j |f -m j | ≤ C2 (n+1)j ||f || W by (5.5). Finally (5.7) B j f - B 0 f = B j f -m 0 ≤ B j |f -m j | + |m 0 -m j | ≤ C2 (n+1)j ||f || W , as needed for (5.4). Return to the convergence of v k . Recall that c k = ffl B 1 v k . By (5.4) with f = v k -c k -v l +c l (so that m 0 = 0), v k -c k is a Cauchy sequence in L 1 loc (B j ) for each j ≥ 0, hence there exists u j ∈ L 1 (B j ) such that v kc k converges to u j . By uniqueness of the limit, we have that for 1 ≤ j ≤ j 0 , (5.8) u j 0 = u j a.e. in B j and thus we can define a function u on R n as u (x) = u j (x) if x ∈ B j . By construction u ∈ L 1 loc (R n ) and v k -c k → u in L 1 loc (R n ). It remains to show that u is actually in W and v k → u in W . First, since L 2 (Ω, w) is complete, there exists V such that ∇v k converges to V in L 2 (Ω, w). Then observe that for ϕ ∈ C ∞ 0 (B j \ Γ, R n ), ˆBj V • ϕ = lim k→∞ ˆBj ∇v k • ϕ = -lim k→∞ ˆBj (v k -c k ) div ϕ = - ˆBj u j div ϕ. Hence by definition of a weak derivative, ∇u = ∇u j = V a.e. in B j . Since the result holds for any j ≥ 1, ∇u = V a.e. in R n , that is, by construction of V , u ∈ W and v ku W converges to 0. Lemma 5.9. The space (5.10) W 0 = u ∈ W ; T u = 0 , equipped with the scalar product u, v W := ´Ω ∇u(z) • ∇v(z) w(z)dz (and the norm . W ) is a Hilbert space. Moreover, for any ball B centered on Γ, the set (5.11) W 0,B = u ∈ W ; T u = 0 H d -almost everywhere on Γ ∩ B , equipped with the scalar product ., . W , is also a Hilbert space. Proof. 
Observe that W 0 and W 0,B are no longer spaces of functions defined modulo an additive constant. That is, if f ∈ W 0 (or W 0,B ) is a constant c, then c = 0 because (3.14) says that T u = c almost everywhere on Γ. Thus . W is really a norm on W 0 and W 0,B , and we only need to prove that these spaces are complete. We first prove this for W 0,B ; the case of W 0 will be easy deal with afterwards. Let B be a ball centered on Γ, and consider W 0,B . By translation and dilation invariance of the result, we can assume that B = B(0, 1). Let (v k ) k∈N be a Cauchy sequence of functions in W 0,B . We want first to show that v k has a limit in L 1 loc (R n ) and W . We use Lemma 5.1 and so there exists ū ∈ W and c k ∈ R such that (5.12) v k -ū W → 0 and (5.13) v k -c k → ū in L 1 loc (R n ) . By looking at the proof of Lemma 5.1, we can take c k = ffl B v k . Let us prove that (c k ) is a Cauchy sequence in R. We have for any k, l ≥ 0 (5.14) |c k -c l | ≤ B |v k -v l | ≤ Cm(B) -1 ˆB |v k (z) -v l (z)|w(z)dz with (2.17). Since T (v kv l ) = 0 on B, Lemma 4.13 entails (5.15) |c k -c l | ≤ C v k -v l W . Since (v k ) k∈N is a Cauchy sequence in W, (c k ) k∈N is a Cauchy sequence in R and thus converges to some value c ∈ R. Set u = ūc. We deduce from (5.13) that (5.16) v k → u in L 1 loc (R n ), and since u and ū differ only from a constant, (5.12) can be rewritten as (5.17) v k -u W → 0. We still need to show that u ∈ W 0,B , i.e., that T u = 0 a.e. on B. We will actually prove something a bit stronger. We claim that if u, v k ∈ W , then the convergence of v k to u in both W and L 1 loc (R n ) implies the convergence of the traces T v k → T u in L 1 loc (Γ, σ). That is, (5.18) v k → u in W and in L 1 loc (R n ) =⇒ T v k → T u in L 1 loc (Γ, σ). Recall that by (3.30), T f ∈ L 1 loc (Γ, σ) whenever f ∈ W . Our result, that is T u = 0 a.e. on B, follows easily from the claim: we already established that v k → u in W and in L 1 loc (R n ) and thus (5.18) gives that T u L 1 (B,σ) = lim k→∞ T v k L 1 (B,σ) = 0, i.e., that T u = 0 σ-a.e. in B. We turn to the proof of (5.18). Since T is linear, we may subtract u, and assume that v k tends to 0 and u = 0. Let us use the notation of Theorem 3.13, and set g k = T v k and g k r (x) = ffl B(x,r) v k . Since v k W tends to 0, we may assume without loss of generality that v k W ≤ 1 for k ∈ N. We want to prove that for every ball B ⊂ R n centered on Γ and every ǫ > 0, we can find k 0 such that (5.19) g k L 1 ( B,σ) ≤ ǫ for k ≥ k 0 . We may also assume that the radius of B is larger than 1 (as it makes (5.19) harder to prove). Fix B and ε as above, and α ∈ (0, 1/2), and observe that for r ∈ (0, 1), (5.20) where for the last line we used (3.28), Fubini, and the condition (1.1) on Γ. Recall that v k W ≤ 1; we choose r so small that C( B, α)r 2α u W ≤ ǫ/2, and since by assumption v k tends to 0 in L 1 loc , we can find k 0 such that Cr d-n ´2 B |v k (y)|dy ≤ ε/2 for k ≥ k 0 , as needed for (5.19). ˆ B g k dσ ≤ ˆ B |g k -g k r |dσ + ˆ B |g k r |dσ ≤ C( B) g k -g k r L 2 (σ) + ˆx∈ B y∈B(x,r) |v k (y)|dydσ(x) ≤ C( B, α)r 2α v k W + Cr d-n ˆ2 B |v k (y)|dy, This completes the proof of (5.18), and we have seen that the completeness of W 0,B follows. Since W 0 is merely an intersection of spaces W 0,B , it is complete as well, and Lemma 5.9 follows. Lemma 5.21. Choose a non-negative function ρ ∈ C ∞ 0 (R n ) such that ´ρ = 1 and ρ is supported in B(0, 1). Furthermore let ρ be radial and nonincreasing, i.e. ρ(x) = ρ(y) ≥ ρ(z) if |x| = |y| ≤ |z|. 
Define ρ ǫ , for ǫ > 0, by ρ ǫ (x) = ǫ -n ρ(ǫ -1 x). For every u ∈ W , we have: (i) ρ ǫ * u ∈ C ∞ (R n ) for every ǫ > 0; (ii) If x ∈ R n is a Lebesgue point of u, then ρ ǫ * u(x) → u(x) as ǫ → 0; in particular, ρ ǫ * u → u a.e. in R n ; (iii) ∇(ρ ǫ * u) = ρ ǫ * ∇u for ε > 0; (iv) lim ǫ→0 ρ ǫ * u -u W = 0; (v) ρ ǫ * u → u in L 1 loc (R n ). Proof. Recall that W ⊂ L 1 loc (R n ) (see Lemma 3.3). Thus conclusions (i) and (ii) are classical and can be found as Theorem 1.12 in [MZ]. Let u ∈ W and write u ǫ for ρ ǫ * u. We have seen that u ǫ ∈ C ∞ (R n ), so ∇u ǫ is defined on R n . One would like to say that ∇u ǫ = ρ ǫ * ∇u, i.e. point (iii). Here ∇u ǫ is the classical gradient of u ǫ on R n , thus a fortiori also the distributional gradient on R n of u ǫ . That is, for any ϕ ∈ C ∞ 0 (R n , R n ), there holds ˆRn ∇u ǫ • ϕ = - ˆRn u ǫ (x) div ϕ(x)dx = - ˆRn ˆRn ρ ǫ (y)u(x -y) div ϕ(x)dy dx = ˆB(0,ǫ) ρ ǫ (y) - ˆRn u(z) div ϕ(z + y)dz dy. (5.22) The function ϕ lies in C ∞ 0 (R n , R n ), and so does, for any y ∈ R n , the function z → ϕ(z + y). Recall that ∇u is the distributional derivative of u on Ω but yet also the distributional derivative of u on R n (see Lemma 3.3). Therefore (5.23) which gives (iii). ˆRn ∇u ǫ • ϕ = ˆB(0,ǫ) ρ ǫ (y) ˆRn ∇u(z) • ϕ(z + y)dz dy = ˆRn ρ ǫ (y) ˆRn ∇u(x -y) • ϕ(x)dx dy = ˆRn (ρ ǫ * ∇u) • ϕ, From there, our point (iv), that is the convergence of ρ ǫ * u to u in W , can be deduced with, for instance, [START_REF] Kilpeläinen | Weighted Sobolev spaces and capacity[END_REF]Lemma 1.5]. The latter states that, under our assumptions on ρ, the convergence ρ ǫ * g → g holds in L 2 (R n , w) whenever g ∈ L 2 (R n , w) and w is in the Muckenhoupt class A 2 (we already proved this fact, see Lemma 2.20). Note that Kilpelai's result is basically a consequence of a result from Muckenhoupt about the boundedness of the (unweighted) Hardy-Littlewood maximal function in weighted L p . Finally we need to prove (v). Just notice that u ∈ L 1 loc (R n ), and apply the standard proof of the fact that ρ ǫ * u → u in L 1 for f ∈ L 1 . The lemma follows Lemma 5.24. Let u ∈ W and ϕ ∈ C ∞ 0 (R n ). Then uϕ ∈ W and for any point x ∈ Γ satisfying (3.15) (5.25) T (uϕ)(x) = ϕ(x)T u(x). Proof. The function u lies in L 1 loc (R n ) and thus defines a distribution on R n (see Lemma 3.3). Multiplication by smooth functions and (distributional) derivatives are always defined for distributions and, in the sense of distribution, ∇(uϕ ) = ϕ∇u + u∇ϕ. Let B ⊂ R n be a big ball such that supp ϕ ⊂ B. Then uϕ W ≤ ϕ ∞ ∇u L 2 (Ω,w) + ∇ϕ ∞ u - B u L 2 (B,w) + ∇ϕ ∞ B u L 2 (B,w) ≤ ϕ ∞ ∇u L 2 (Ω,w) + C B ∇ϕ ∞ u W + C B ∇ϕ ∞ u L 1 (B) < +∞ (5.26) by the Poincaré inequality (4.14). We deduce uϕ ∈ W . Let take a Lebesgue point x satisfying (3.15). We have B(x,r) |u(z)ϕ(z) -ϕ(x)T u(x)| ≤ B(x,r) |u(z) -T u(x)||ϕ(z)| + |T u(x)| B(x,r) |ϕ(z) -ϕ(x)| ≤ ϕ ∞ B(x,r) |u(z) -T u(x)| + |T u(x)| B(x,r) |ϕ(z) -ϕ(x)|. (5.27) The first term of the right-hand side converges to 0 because x is a Lebesgue point. The second term in the right-hand side converges to 0 because ϕ is continuous. The equality (5.25) follows. Let F be a closed set in R n and E = R n \ F . In the sequel, we let (5.28) C ∞ c (E) = f ∈ C ∞ (E), ∃ǫ > 0 such that f (x) = 0 whenever dist(x, F ) ≤ ε denote the set of functions in C ∞ (E) that equal 0 in a neighborhood of F . Furthermore, we use the notation C ∞ 0 (E) for the set of functions that are compactly supported in E, that is (5.29) C ∞ 0 (E) = {f ∈ C ∞ c (E), ∃R > 0 : suppf ⊂ B(0, R)}. Lemma 5.30. 
The completion of C ∞ 0 (Ω) for the norm . W is the set (5.31) W 0 = u ∈ W ; T u = 0 of (5.10). Moreover, if u ∈ W 0 is supported in a compact subset of the open ball B ⊂ R n , then u can be approximated in the W -norm by functions of C ∞ 0 (B \ Γ). Proof. The proof of this result will use two main steps, where (i) we use cut-off functions ϕ r to approach any function u ∈ W 0 by functions in W that equal 0 on a neighborhood of Γ; (ii) we use cut-off functions φ R to approach any function u ∈ W 0 by functions in W that are compactly supported in R n . Part (i): For r > 0 small, we choose a smooth function ϕ r such that ϕ(x) = 0 when δ(x) ≤ r, ϕ(x) = 1 when δ(x) ≥ 2r, 0 ≤ ϕ ≤ 1 everywhere, and |∇ϕ(x)| ≤ 10r -1 everywhere. Let u ∈ W 0 be given. We want to show that for r small, ϕ r u lies in W and (5.32) lim r→0 u -ϕ r u 2 W = 0. Notice that ϕ r u ∈ L 1 loc (Ω), just like u, and its distribution gradient on Ω is locally in L 2 and given by (5.33) ∇(ϕ r u)(x) = ϕ r (x)∇u(x) + u(x)∇ϕ r (x). So we just need to show that (5.34) lim r→0 ˆ|∇(ϕ r u)(x) -∇u(x)| 2 w(x)dx = lim r→0 ˆ|u(x)∇ϕ r (x) + (1 -ϕ r (x))∇u(x)| 2 w(x)dx = 0. Now ´|∇u(x)| 2 w(x)dx = u 2 W < +∞, so ´|(1 -ϕ r )∇u(x)| 2 w(x) dx tends to 0, by the dominated convergence theorem, and it is enough to show that (5.35) lim r→0 ˆ|u(x)∇ϕ r (x)| 2 w(x)dx = 0. Cover Γ with balls B j , j ∈ J, of radius r, centered on Γ, and such that the 3B j have bounded overlap, and notice that the region where ∇ϕ r = 0 is contained in ∪ j∈J 3B j . In addition, if x ∈ 3B j is such that ∇ϕ r = 0, then |∇ϕ r (x)| ≤ 10r -1 , so that (5.36) ˆ3B j |u(x)∇ϕ r (x)| 2 w(x)dx ≤ 100r -2 ˆ3B j |u(x)| 2 w(x)dx ≤ C ˆ3B j |∇u(x)| 2 w(x)dx, where the last part comes from (4.15), applied with p = 2 and justified by the fact that T u = 0 on the whole Γ. We may now sum over j. Denote by A r the union of the 3B j ; then ˆΩ |u(x)∇ϕ r (x)| 2 w(x)dx ≤ j∈J ˆ3B j |u(x)∇ϕ r (x)| 2 w(x)dx ≤ C j∈J ˆ3B j |∇u(x)| 2 w(x)dx ≤ C ˆAr |∇u(x)| 2 w(x)dx (5.37) because the 3B j have bounded overlap. The right-hand side of (5.37) tends to 0, because ´Ω |∇u(x)| 2 w(x)dx = u 2 W < +∞ and by the dominated convergence theorem. The claim (5.35) follows, and so does (5.32). This completes Part (i). Part (ii). By translation invariance, we may assume that 0 ∈ Γ. Let R be a big radius; we want to define a cut-off function φ R . If we used the classical cut-off function built as φR = φ x R with φ supported in B(0, 1), the convergence would work with the help of Poincaré's inequality on annuli. But since we we did not prove this inequality, we will proceed differently and use the 'better' cut-off functions defined as follows. Set φ R (x) = φ ln |x| ln R , where φ is a smooth function defined on [0, +∞), supported in [0, 1] and such that φ ≡ 1 on [0, 1/2]. In particular, one can see that ∇φ R (x) ≤ C ln R 1 |x| and that ∇φ R is supported in {x ∈ R n , √ R ≤ |x| ≤ R}. We take u := φ R u and we want to show that u ∈ W and uu W is small. Notice that u ∈ W 0 , by Lemma 5.24, and in addition u is supported in B(0, R). We want to show that (5.38) lim R→+∞ u -u 2 W = 0. But u ∈ L 1 loc (Ω), just like u, and its distribution gradient on Ω is locally in L 1 and given by (5.39) ∇ u(x) = φ R (x)∇u(x) + u(x)∇φ R (x). Hence u -u 2 W = ˆ|∇ u(x) -∇u(x)| 2 w(x)dx = ˆ|u(x)∇φ R (x) + (1 -φ R (x))∇u(x)| 2 w(x)dx. (5.40) Now ´|∇u(x)| 2 w(x)dx = u 2 W < +∞, so ´|(1 -φ R )∇u(x)| 2 w(x) dx tends to 0, by the dominated convergence theorem, and it is enough to show that (5.41) lim R→+∞ ˆ|u(x)∇φ R (x)| 2 w(x)dx = 0. 
Let C j be the annulus {x ∈ R n , 2 j < |x| ≤ 2 j+1 }. The bounds on ∇φ R yield (5.42) ˆ|u(x)∇φ R (x)| 2 w(x)dx ≤ C (ln R) 2 1+log 2 R j=0 2 -2j ˆCj |u(x)| 2 w(x)dx. The integral on the annulus C j is smaller than the integral in the ball B(0, 2 j+1 ). Since u ∈ W 0 and 0 ∈ Γ, (4.15) yields ˆ|u(x)∇φ R (x)| 2 w(x)dx ≤ C (ln R) 2 1+log 2 R j=0 ˆB(0,2 j+1 ) |∇u(x)| 2 w(x)dx ≤ C (ln R) 2 u 2 W 1+log 2 R j=0 1 ≤ C | ln R| u W . (5.43) Thus ´|u(x)∇φ R (x)| 2 w(x)dx converges to 0 as R goes to +∞, which proves (5.41) and ends Part (ii). We are now ready to prove the lemma. If u ∈ W 0 and ε > 0 is given, we can find R such that φ R uu 2 W ≤ ε (by (5.38)). Notice that φ R u ∈ W 0 , by Lemma 5.24, and now we can find r such that ϕ r φ R uφ R u 2 W ≤ ε (by (5.32)). In turn ϕ r φ R u is compactly supported away from Γ, and we may now use Lemma 5.21 to approximate it with smooth functions with compact support in Ω. It follows that W 0 is included in the completion of C ∞ 0 (Ω). Since W 0 is complete (see Lemma 5.9), the reverse inclusion is immediate. For the second part of the lemma, we are given u ∈ W 0 with a compact support inside B, we can use Part (i) to approximate it by some ϕ r u with a compact support inside B. A convolution as in Lemma 5.21 then makes it smooth without destroying the support property; Lemma 5.30 follows. Remark 5.44. We don't know how to prove exactly the same result for the spaces W 0,B of (5.11). However, we have the following weaker result. Let B ⊂ R n be a ball and B 1 2 denotes the ball with same center as B but half its radius. For any function u ∈ W 0,B , there exists a sequence (u k ) k∈N of functions in C ∞ c (R n \ B 1 2 ∩ Γ) such that u k -u W converges to 0. Indeed, take η ∈ C ∞ 0 (B) such that η = 1 on B 3 4 . Write u = ηu + (1 -η)u; it is enough to prove that both ηu and (1-η)u can be approximated by functions in C ∞ c (R n \B 1 2 ∩ Γ) . Notice first that ηu ∈ W 0 and thus can be approximated by functions in C ∞ 0 (Ω) ⊂ C ∞ c (R n \B 1 2 ∩ Γ), according to Lemma 5.30. Besides, (1-η)u is supported outside of B 3 4 and thus, if ǫ is smaller than a quarter of the radius of B, then the functions ρ ǫ * [(1 -η)u] are in C ∞ c (R n \ B 1 2 ) ⊂ C ∞ c (R n \ B 1 2 ∩ Γ). Lemma 5.21 gives then that the family ρ ǫ * [(1 -η)u] approaches (1 -η)u as ǫ goes to 0. Next we worry about the completion of C ∞ 0 (R n ) for the norm . W . We start with the case when d > 1; when 0 < d ≤ 1, things are a little different and they will be discussed in Lemma 5.64. Lemma 5.45. Let d > 1. Choose x 0 ∈ Γ and write B j for B(x 0 , 2 j ). Then for any u ∈ W (5.46) Lemmata 5.30 and5.45 imply that W 0 ⊂ W 0 . In particular, we get that (5.49) lim u 0 := lim j→+∞ B j u exists and is finite. The completion of C ∞ 0 (R n ) for the norm . W can be identified to a subspace of L 1 loc (R n ), which is (5.47) W 0 = {u ∈ W, u 0 = 0}. Remark 5.48. Since C ∞ 0 (Ω) ⊂ C ∞ 0 (R n ), j→+∞ B j u = 0 for u ∈ W 0 . Remark 5.50. Since the completion of C ∞ 0 (R n ) doesn't depend on our choice of x 0 , the value u 0 doesn't depend on x 0 either. Similarly, with a small modification in the proof, we could replace (2 j ) with any other sequence that tends to +∞. Remark 5.51. The lemma immediately implies the following result: for any u ∈ W , uu 0 ∈ W 0 and thus can be approximated in L 1 loc (R n ) and in the W -norm by function in C ∞ 0 (R n ). Proof. Let d > 1 and choose u ∈ W . Let us first prove that u 0 is well defined. By translation invariance, we can choose x 0 = 0, that is B j = B(0, 2 j ). 
For j ∈ N, set u j = ffl B j f and V j = ´Bj w(z)dz. The bounds (2.5) give that V j is equivalent to 2 j(1+d) and (2.18) gives that for any z ∈ B j , V j |B j | ≤ Cw(z). Then by Lemma 4.13 |u j+1 -u j | ≤ C B j+1 |u -u j+1 | ≤ CV -1 j+1 ˆBj+1 |u(z) -u j+1 |w(z)dz ≤ C2 j(1-d+1 2 ) ˆBj+1 |∇u(z)| 2 w(z)dz 1 2 ≤ C2 j 1-d 2 u W . (5.52) Since d > 1, (u j ) j∈N is a Cauchy sequence and converges to some value (5.53) u 0 = lim j→+∞ u j . Moreover (5.52) also entails (5.54) |u j -u 0 | ≤ C2 j 1-d 2 u W . Let us prove additional properties on u 0 . Set v = |u|. Notice that (5.55) |u j | ≤ v j := B j |u| ≤ |u j | + B j |u -u j | ≤ |u j | + C2 j 1-d 2 u W , where the last inequality follows from (5.52) (with j -1). As a consequence, for any j ≥ 1, |v j -|u j || ≤ C2 j 1-d 2 u W and by taking the limit as j → +∞, (5.56) |u 0 | = lim j→+∞ B j |u|. In addition, B j |u| -|u 0 | ≤ |v j -|u j || + ||u j | -|u 0 || ≤ |v j -|u j || + |u j -u 0 | ≤ C2 j 1-d 2 u W . (5.57) Let us show that . W is a norm for W 0 . Let u ∈ W 0 be such that u W = 0, then since W 0 ⊂ W , u ≡ c is a constant function. Yet, observe that in this case, u 0 = c. The assumption u ∈ W 0 forces u ≡ c ≡ 0, that is . W is a norm on W 0 . We now prove that (W 0 , . W ) is complete. Let (v k ) k∈N be a Cauchy sequence in W 0 . Since (v kv l ) 0 = 0, we deduce from (5.57) that for j ≥ 1 and k, l ∈ N, (5.58) B j |v k -v l | ≤ C2 j 1-d 2 v k -v l W . Consequently, (v k ) k∈N is a Cauchy sequence in L 1 loc and thus there exists u ∈ L 1 loc (R n ) such that v k → u in L 1 loc (R n ). Since (∇v k ) k∈N is also a Cauchy sequence in L 2 (Ω, w), there exists V ∈ L 2 (Ω, w) such that ∇v k → V in ∈ L 2 (Ω, w). It follows that v k and ∇v k converge in the sense of distribution to respectively u and V , thus u has a distributional derivative in Ω and ∇u equals V ∈ L 2 (Ω, w). In particular u ∈ W . It remains to check that u 0 = 0. Yet, notice that (5.59) |u 0 | ≤ u 0 - ˆBj u + ˆBj (u -v k ) + ˆBj v k . The first term and the third term in the right-hand side are bounded by C2 j 1-d 2 u W and C2 j 1-d 2 u k W respectively (thanks to (5.54)), the second by C2 j 1-d 2 uu k W (because of (5.58)). By taking k and j big enough, we can make the right-hand side of (5.59) as small as we want. It follows that u 0 = |u 0 | = 0 and u ∈ W 0 . The completeness of W 0 follows. It remains to check that the completion of C ∞ 0 (R n ) is W 0 . However, it is easy to see that any function u in C ∞ 0 (R n ) satisfies u 0 = 0 and thus lies in W 0 . Together with the fact that W 0 is complete, we deduce that the completion of C ∞ 0 (R n ) with the norm . W is included in W 0 . The converse inclusion will hold once we establish that any function in W 0 can be approached in the W -norm by functions in C ∞ 0 (R n ). Besides, thanks to Lemma 5.21, it is enough to prove that u ∈ W 0 can be approximated by functions in W that are compactly supported in R n . Fix φ ∈ C ∞ ((-∞, +∞)) such that φ ≡ 1 on (-∞, 1/2], φ ≡ 0 on [1, +∞). For R > 0 define φ R by φ R (x) = φ(ln |x|/ ln R). Observe that that φ R (x) ≡ 1 if |x| ≤ √ R, φ R (x) ≡ 0 if |x| ≥ R and, for any x ∈ R n , (5.60) |∇φ R (x)| ≤ C ln R 1 |x| . The approximating functions will be the φ R u, which are compactly supported in R n . Now uφ R -u 2 W = u(1 -φ R ) 2 W ≤ ˆΩ(1 -φ R (z)) 2 |∇u(z)| 2 w(z)dz + ˆΩ |u(z)| 2 |∇φ R (z)| 2 w(z)dz ≤ ˆ|z|≥ √ R |∇u(z)| 2 w(z)dz + ˆΩ |u(z)| 2 |∇φ R (z)| 2 w(z)dz. (5.61) By the dominated convergence theorem, the first term of the right-hand side above converges to 0 as R goes to +∞. 
It remains to check that the second term also tends to 0. Set C j = B j \ B j-1 . We have if R > 1, ˆΩ |u(z)| 2 |∇φ R (z)| 2 w(z)dz ≤ C | ln R| 2 ˆ√R<|z|<R |u(z)| 2 |z| 2 w(z)dz ≤ C | ln R| 2 log 2 R+1 j=0 2 -2j ˆCj |u(z)| 2 w(z)dz ≤ C | ln R| 2 log 2 R+1 j=0 2 -2j ˆBj |u(z)| 2 w(z)dz ≤ C | ln R| 2 log 2 R+1 j=0 2 -2j ˆBj |u(z) -u j | 2 w(z)dz + V j |u j | 2 . (5.62) Lemma 4.13 gives that ´Bj |u(z) - 1+d) because of (2.5) and we get that |u j | 2 ≤ 2 j(1-d) u W , by (5.54). Hence (5.63) which converges to 0 as R goes to +∞. This concludes the proof of Lemma 5.45. u j | 2 w(z)dz is bounded, up to a harmless constant, by 2 2j ´Bj |∇u(z)| 2 w(z)dz ≤ 2 2j u 2 W . In addition, V j = m(B j ) is bounded by C2 j( ˆΩ |u(z)| 2 |∇φ R (z)| 2 w(z)dz ≤ C | ln R| 2 u 2 W log 2 R+1 j=0 2 -2j 2 2j + 2 j(d+1) 2 j(1-d) ≤ C | ln R| 2 u 2 W log 2 R+1 j=0 1 ≤ C | ln R| u 2 W , As we shall see now, the situation in low dimensions is different, essentially because when d ≤ 1, the constant function 1 can be approximated by functions of C ∞ 0 (R n ). Lemma 5.64. Let d ≤ 1. For any function u in W , we can find a sequence of functions (u k ) k∈N in C ∞ 0 (R n ) such that u k converges, in L 1 loc (R n ) and and for the semi-norm . W , to u. Remark 5.65. The fact that the function 1 can be approached with the norm . W by functions in C ∞ 0 means that the completion of C ∞ 0 with the norm . W is not a space of distributions. We can legitimately say that the completion of C ∞ 0 is embedded into the space of distri- butions D ′ = (C ∞ 0 ) ′ ⊃ L 1 loc if the convergence u k ∈ C ∞ 0 ⊂ L 1 loc to u ∈ W ⊂ L 1 loc in the norm . W implies, for ϕ ∈ C ∞ 0 , that ´uk ϕ tends to ´uϕ. Take u k ∈ C ∞ 0 (R n ) such that u k tends to 1 in L 1 loc (R n ) and W . Then since . W doesn't see the constants, u k tends to 0 in W ; but the convergence of u k to 1 in L 1 loc (R n ) implies that ´uk ϕ tends to ´ϕ = 0 for some function ϕ ∈ C ∞ 0 (R n ). Proof. As before, we may assume that 0 ∈ Γ. Let us first prove that for d ≤ 1, the constant function 1 (and thus any constant function) is the limit in W and L 1 loc (R n ) of test functions. Choose φ ∈ C ∞ ([0, +∞)) such that φ ≡ 1 on [0, 1/2] and φ ≡ 0 on [1, +∞). For R > 1, define ψ R as ψ R (x) = φ(ln ln |x|/ ln ln R) if |x| > 1 and ψ R (x) = 1 if |x| ≤ 1. This cut-off function is famous for being used by Sobolev, and is useful to handle the critical case (that is, for us, d = 1). It can be avoided if d < 1 but we didn't want to separate the cases d < 1 and d = 1. Let us return to the proof of the lemma. We have: ψ R (x) ≡ 1 if |x| ≤ exp( √ ln R), ψ R (x) ≡ 0 if |x| ≥ R and for any x ∈ R n satisfying |x| > 1, (5.66) |∇ψ R (x)| ≤ C ln ln R 1 |x| ln |x| . It is easy to see that ψ R converges to 1 in L 1 loc (R n ) as R goes to +∞. We claim that (5.67) ψ R W converges to 0 as R goes to +∞. Let us prove (5.67). As in Lemma 5.45, we write B j for B(0, 2 j ) and C j for B j \ B j-1 . Then for R large, ψ R 2 W ≤ C | ln ln R| 2 ˆ2<|z|≤R 1 |z| 2 | ln |z|| 2 w(z)dz ≤ C | ln ln R| 2 +∞ j=1 2 -2j | ln 2 j | -2 ˆCj w(z)dz ≤ C | ln ln R| 2 +∞ j=1 1 j 2 2 -2j 2 j(d+1) ≤ C | ln ln R| 2 . (5.68) Our claim follows, and it implies that 1ψ R W tends to 0. We will prove now that any function in W can be approached by functions in C ∞ 0 (R n ). Let u ∈ W be given. Let u 0 = ffl B 0 u denote the average of u on the unit ball. We have just seen how to approximate u 0 by test functions, so it will be enough to show that uu 0 can be approached by test functions. For this we shall proceed as in Lemma 5.45. 
We shall use the product ψ R (uu 0 ), where ψ R is the same cut-off function as above, and prove that ψ R (uu 0 ) lies in W and (5.69) lim R→+∞ (u -u 0 )ψ R W = 0. Notice that ψ R (u-u 0 ) is compactly supported, and converges (pointwise and in L 1 loc ) to u-u 0 . Thus, as soon as we prove (5.69), Lemma 5.21 will allow us to approximate (uu 0 )ψ R by smooth, compactly supported functions, and the desired approximation result will follow. As for the proof of (5.69), of course we shall use Poincaré's inequality, and the the key point will be to get proper bounds on differences of averages of u. These will not be as good as before, because now d ≤ 1, and instead of working directly on the balls B j we shall use strings of balls D j that do not contain the origin, so that their overlap is smaller. Fix any unit vector ξ ∈ ∂B(0, 1), and consider the balls (5.70) D = D ξ = B(ξ, 9/10) and, for j ∈ N, D j = D ξ j = B(2 j ξ, 9 10 2 j ). We will later use the D ξ j to cover our usual annuli C j , but in the mean time we fix ξ and want estimates on the numbers m j = ffl D j u j . The Poincaré inequality (4.14), applied with with p = 1, yields (5.71) m(D j ) -1 ˆDj |u -m j |w(z)dz ≤ C2 j m(D j ) -1 ˆDj |∇u(z)| 2 w(z)dz 1 2 . Of course we have a similar estimate on D j+1 ; observe also that D j ∩ D j+1 contains a ball D ′ j of radius 2 j-2 (we may even take it centered at 2 j ξ); then |m j -m j+1 | = m(D ′ j ) -1 ˆD′ j |m j -m j+1 |w(z)dz ≤ m(D ′ j ) -1 ˆD′ j (|u -m j | + |u -m j+1 |)w(z)dz ≤ Cm(D j ) -1 ˆDj |u -m j |w(z)dz + Cm(D j+1 ) -1 ˆDj+1 |u -m j |w(z)dz (5.72) ≤ C2 j m(D j ) -1 ˆDj ∪D j+1 |∇u(z)| 2 w(z)dz 1 2 because m(D j ) ≤ Cm(D ′ j ) (and similarly for m(D j+1 )), since w(z)dz is doubling by (2.12). By (2.5), m(D j ) ≥ C -1 2 j(d+1) , so (5.72) yields (5.73) |m j -m j+1 | ≤ C2 -j(d-1)/2 ˆDj ∪D j+1 |∇u(z)| 2 w(z)dz 1 2 The same estimate, run with B 0 = B(0, 1) and D 0 whose intersection also contains a large ball, yields (5.74) |u 0 -m 0 | ≤ C ˆB0 ∪D 0 |∇u(z)| 2 w(z)dz 1 2 ≤ C u W . With ξ fixed, the various D j ∪ D j+1 have bounded overlap; thus by (5.73) and Cauchy-Schwarz, |m j+1 -m 0 | 2 ≤ C j i=0 2 -i(d-1)/2 ∇u L 2 (D j ∪D j+1 ,w) 2 ≤ C(j + 1) j i=0 2 i(1-d) ∇u 2 L 2 (D j ∪D j+1 ,w) ≤ C(j + 1)2 j(1-d) u 2 W . (5.75) Here we used our assumption that d ≤ 1, and we are happy about our trick with the bounded overlap because a more brutal estimate would lead to a factor (j + 1) 2 that would hurt us soon. Anyway, we add (5.74) and get that for j ≥ 0, (5.76) m j -u 0 2 ≤ C(j + 1)2 j(1-d) u 2 W . We are now ready to prove (5.69). Since the first part of the proof gives that u 0 ψ R W tends to 0, we shall assume that u 0 = 0 to simplify the estimates. By Lemma 5.24, (uu 0 )ψ R = uψ R lies in W and its gradient is u∇ψ R + ψ R ∇u. So we just need to show that when R tends to +∞, (5.77) uψ R -u W ≤ (1 -ψ R )∇u L 2 (Ω,w) + u∇ψ R L 2 (Ω,w) tends to 0. The first term of the right-hand side converges to 0 as R goes to +∞, thanks to the dominated convergence theorem, and for the second term we use (5.66) and the fact that ∇ψ R is supported in the region Z R where exp( √ ln R) ≤ |x| ≤ R. Thus (5.78) u∇ψ R 2 L 2 (Ω,w) = ˆRn |u(z)| 2 |∇ψ R (z)| 2 w(z)dz ≤ C | ln ln R| 2 ˆZR |u(z)| 2 |z| 2 (ln |z|) 2 w(z)dz As usual, we cut Z R into annular subregions C j , and then further into balls like the D j . We start with the C j = B j \ B j-1 . For R large, if C j meets Z R , then 10 ≤ j ≤ 1 + log 2 R and (5.79) ˆCj |u(z)| 2 |z| 2 (ln |z|) 2 w(z)dz ≤ j -2 2 -2j ˆCj |u(z)| 2 w(z)dz. 
We further cut C j into balls, because we want to apply Poincaré's inequality. Let the D ξ j be as in the definition (5.70). We can find a finite set Ξ ⊂ ∂B(0, 1) such that the balls D ξ , ξ ∈ Ξ, cover B(0, 1) \ B(0, 1/2). Then for j ≥ 1 the D ξ , ξ ∈ Ξ, cover C j and, by (5.78) and (5.79), (5.80) u∇ψ R 2 L 2 (Ω,w) ≤ C | ln ln R| 2 1+log 2 R j=10 j -2 2 -2j ξ∈Ξ ˆDξ j |u(z)| 2 w(z)dz. Then by the Poincaré inequality (4.14) (with p = 2), (5.81) ˆDξ j |u(z) -m ξ j | 2 w(z)dz ≤ C2 2j ˆDξ j |∇u(z)| 2 w(z)dz, where m ξ j = ffl D ξ j as in the estimates above. Thus (5.82) ˆDξ j |u(z)| 2 w(z)dz ≤ C2 2j ˆDξ j |∇u(z)| 2 w(z)dz + Cm(D ξ j )(j + 1)2 j(1-d) u 2 W by (5.76) and because u 0 = 0. But m(D ξ j ) ≤ C2 (d+1)j by (2.5), so (5.83) ˆDξ j |u(z)| 2 w(z)dz ≤ C(j + 1)2 2j u 2 W . We return to (5.80) and get that u∇ψ R 2 L 2 (Ω,w) ≤ C | ln ln R| 2 1+log 2 R j=10 j -2 2 -2j ξ∈Ξ (j + 1)2 2j u 2 W ≤ C | ln ln R| 2 1+log 2 R j=10 j -1 u 2 W ≤ C | ln ln R| u 2 W (5.84) because Ξ is finite, and where we see that j -1 is really useful. We already took care of the other part of (5.77); thus uψ Ru W tends to 0. This proves (5.69) (recall that u 0 = 0), and completes our proof of Lemma 5.64. The chain rule and applications We record here some basic (and not shocking) properties concerning the derivative of f • u when u ∈ W , and the fact that uv ∈ W ∩ L ∞ when u, v ∈ W ∩ L ∞ . Lemma 6.1. The following properties hold: (a) Let f ∈ C 1 (R) be such that f ′ is bounded and let u ∈ W . Then f • u ∈ W and (6.2) ∇(f • u) = f ′ (u)∇u. Moreover, T (f • u) = f • (T u) a.e. in Γ. (b) Let u, v ∈ W . Then max{u, v} and min{u, v} belong to W and, for almost every x ∈ R n , (6.3) ∇ max{u, v}(x) = ∇u(x) if u(x) ≥ v(x) ∇v(x) if v(x) ≥ u(x) and (6.4) ∇ min{u, v}(x) = ∇u(x) if u(x) ≤ v(x) ∇v(x) if v(x) ≤ u(x). In particular, for any λ ∈ R, ∇u = 0 a.e. on {x ∈ R n , u(x) = λ}. In addition, T max{u, v} = max{T u, T v} and T min{u, v} = min{T u, T v} σ-a.e. on Γ. Thus max{u, v} and min{u, v} lie in W 0 as soon as u, v ∈ W 0 . Remark 6.5. A consequence of Lemma 6.1 (b) is that, for example, |u| ∈ W (resp. |u| ∈ W 0 ) whenever u ∈ W (resp. u ∈ W 0 ). Proof. A big part of this proof follows the results from 1.18 to 1.23 in [HKM]. Let us start with (a). More precisely, we aim for (6.2). Let f ∈ C 1 (R) ∩ Lip(R) and let u ∈ W . The idea of the proof is the following: we approximate u by smooth functions ϕ k , for which the result is immediate. Then we observe that both ∇(f • u) and f ′ (u)∇u are the limit (in the sense of distributions) of the gradient of f • ϕ k . According to Lemma 5.21, there exists a sequence (ϕ k ) k∈N of functions in C ∞ (R n ) ∩ W such that ϕ k → u in W and in L 1 loc (R n ). The classical (thus distributional) derivative of f • ϕ k is (6.6) ∇[f • ϕ k ] = f ′ (ϕ k )∇ϕ k . In particular, since ϕ k ∈ W and f ′ is bounded, f • ϕ k ∈ W and f • ϕ k W ≤ ϕ k W sup |f ′ |. Notice that |f (s) -f (t)| ≤ |s -t| sup |f ′ |. Therefore, since ϕ k → u in L 1 loc (R n ), for any ball B ⊂ R n (6.7) ˆB |f • ϕ k -f • u| ≤ sup |f ′ | ˆB |ϕ k -u| -→ 0. That is f • ϕ k → f • u in L 1 loc (R n ), hence also in the sense of distributions. Besides, ˆΩ |f ′ (ϕ k )∇ϕ k -f ′ (u)∇u| 2 w dz 1 2 ≤ ˆΩ |f ′ (ϕ k )[∇ϕ k -∇u]| 2 w dz 1 2 + ˆΩ |∇u[f ′ (ϕ k ) -f ′ (u)]| 2 w dz 1 2 ≤ sup |f ′ | ˆΩ |∇ϕ k -∇u| 2 w dz 1 2 + ˆΩ |∇u| 2 |f ′ (ϕ k ) -f ′ (u)| 2 w dz 1 2 . (6.8) The first term in the right-hand side is converges to 0 since ϕ k → u in W . Besides, ϕ k → u a.e. in Ω and f ′ is continuous, so f ′ (ϕ k ) → f ′ (u) a.e. in Ω. 
Therefore, the second term also converges to 0 thanks to the dominated convergence theorem. It follows that w), and hence also in the sense of distributions. We proved that f ∇[f • ϕ k ] → f ′ (u)∇u in L 2 (Ω, • ϕ k → f • u and ∇[f • ϕ k ] → f ′ (u)∇u ∈ L 2 (Ω, w) in the sense of distributions, and so the distributional derivative of f • u lies in L 2 (Ω, w) and is equal to f ′ (u)∇u. In particular, f • u ∈ W . Note that we also proved that f • ϕ k → f • u in W . In order to finish the proof of (a), we need to prove that T (f • u) = f (T u) σ-a.e. in Γ. If v ∈ W is also a continuous function on R n , then it is easy to check from the definition of the trace that T v(x) = v(x) for every x ∈ Γ. Since f • ϕ k and ϕ k are both continuous functions, we get that (6.9) f • ϕ k (x) = T (f • ϕ k )(x) = f (T ϕ k (x)) for x ∈ Γ and k ∈ N. Hence for every ball B centered on Γ and every k ≥ 0, ˆB |T (f • u) -f (T u)|dσ ≤ ˆB |T (f • u) -T (f • ϕ k )|dσ + ˆB |f (T ϕ k ) -f (T u)|dσ ≤ ˆB |T (f • u) -T (f • ϕ k )|dσ + sup |f ′ | ˆB |T ϕ k -T u|dσ. (6.10) Recall that each convergence ϕ k → u and f • ϕ k → f • u holds in both W and L 1 loc (R n ). The assertion (5.18) then gives that both convergences T ϕ k → T u and T (f • ϕ k ) → T (f • u) hold in L 1 loc (Γ, σ). Thus the right-hand side of (6.10) converges to 0 as k goes to +∞. We obtain that for every ball B centered on Γ, (6.11) ˆB |T (f • u) -f (T u)|dσ = 0; in particular, T (f • u) = f (T u) σ-a.e. in Γ. Let us turn to the proof of (b). Set u + = max{u, 0}. Then max{u, v} = (uv) + + v and min{u, v} = u -(uv) + . Thus is it enough to show that for any u ∈ W , u + lies in W and satisfies (6.12) ∇u + (x) = ∇u(x) if u(x) > 0 0 if u(x) ≤ 0 for almost every x ∈ R n and (6.13) T (u + ) = (T u) + σ-almost everywhere on Γ. Note that in particular (6.12) implies that ∇u = 0 a.e. in {u = λ}. Indeed, since u = λ + (uλ) + -(λu) + , (6.12) implies that for almost every x ∈ Ω, (6.14) ∇u(x) = ∇u(x) if u(x) = λ 0 if u(x) = λ. Let us prove the claim (6.12). Define f and g = 1 (0,+∞) by f (t) = max{0, t} and g(t) = 0 when t ≤ 0 and g(t) = 1 when t > 0. Our aim is to approximate f by an increasing sequence of C 1 -functions and then to conclude by using (a) and the monotone convergence theorem. Define for any integer j ≥ 1 the function f j by (6.15) f j (t) =      0 if t ≤ 0 j j+1 t j+1 j if 0 ≤ t ≤ 1 t -1 j+1 if t ≥ 1. Notice that f j is non-negative and (f j ) is a nondecreasing sequence that converges pointwise to f . Consequently, f j • u ≥ 0 and (f j • u) is a nondecreasing sequence that converges pointwise to f • u = u + ∈ L 1 loc (R n ). The monotone convergence theorem implies that f j • u → u + in L 1 loc (R n ). Moreover, f j lies in C 1 (R) and its derivative is (6.16) f ′ j (t) =    0 if t ≤ 0 t 1 j if 0 ≤ t ≤ 1 1 if t ≥ 1. Thus f ′ j is bounded and part (a) of the lemma implies f j • u ∈ W and ∇(f j • u) = f ′ j (u) ∇u almost everywhere on R n . In addition, f ′ j converges to g pointwise everywhere, so (6.17) ∇(f j • u) = f ′ j (u)∇u → v := (g • u)∇u = ∇u if u > 0 0 if u ≤ 0 almost everywhere (i.e., wherever ∇(f j • u) = f ′ j (u)∇u). The convergence also occurs in L 2 (Ω, w) and in L 1 loc (R n ), because |∇(f j • u)| ≤ |∇u| and by the dominated convergence theorem, and therefore also in the sense of distribution. Since f j •u converges to u + pointwise almost everywhere and hence (by the dominated convergence theorem again) in L 1 loc and in the sense of distributions, we get that v = (g • u)∇u is the distribution derivative of u + . 
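In other words (our reformulation, using the indicator function of $\{u > 0\}$), the computation above shows that

\[ \nabla u^{+} \;=\; \mathbf{1}_{\{u>0\}}\, \nabla u \quad \text{a.e. in } \mathbb{R}^n, \qquad \text{hence } |\nabla u^{+}| \le |\nabla u| \ \text{ and } \ u^{+} \in W. \]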
This completes the proof of (6.12). Finally, let us establish (6.13). The plan is to prove that we can find smooth functions ϕ k such that ϕ + k converges (in L 1 loc (Γ, σ)) to both T u + and (T u) + . We claim that for u ∈ W and any sequence (u k ) in W , the following implication holds true: (6.18) u k → u pointwise a.e. and in W =⇒ u + k → u + pointwise a.e. and in W. First we assume the claim and prove (6.13). With the help of Lemma 5.21, take (ϕ k ) k∈N be a sequence of functions in C ∞ (R n ) such that ϕ k → u in W , and in L 1 loc (R n ). We may also replace (ϕ k ) by a subsequence, and get that ϕ k → u pointwise a.e. The claim (6.18) implies that ϕ + k → u + in W . In addition, ϕ + k → u + in L 1 loc (R n ), for instance because ϕ k tends to u in L 1 loc (R n ) and by the estimate |ϕ + k -u + | ≤ |ϕ k -u|. Thus we may apply (5.18), and we get that T ϕ + k tends to T u + in L 1 loc (Γ). Since ϕ + k is continuous, T ϕ + k = ϕ + k and (6.19) ϕ + k tends to T u + in L 1 loc (Γ). We also need to check that ϕ + k converges to (T u) + . Notice that (5.18) also implies that ϕ k → T u in L 1 loc (Γ, σ). Together with the easy fact that |a + -b + | ≤ |a -b| for a, b ∈ R, this proves that ϕ + k → (T u) + in L 1 loc (Γ, σ). We just proved that ϕ + k converges to both T (u + ) and (T u) + in L 1 loc (Γ, σ). By uniqueness of the limit, T (u + ) = (T u) + σ-a.e. in Γ, as needed for (6.13). Thus the proof of the lemma will be complete as soon as we establish the claim (6.18). First notice that |u + ju + | ≤ |u j -u| and thus the a.e. pointwise convergence of u j → u yields the a.e. pointwise convergence u + j → u + . Let g denote the characteristic function of (0, +∞); then by (6.12) ˆΩ |∇u + j -∇u + | 2 w dz 1 2 = ˆΩ |g(u j )∇u j -g(u)∇u| 2 w dz 1 2 ≤ ˆΩ |g(u j )[∇u j -∇u]| 2 w dz 1 2 + ˆΩ |∇u[g(u j ) -g(u)]| 2 w dz 1 2 ≤ ˆΩ |∇u j -∇u| 2 w dz 1 2 + ˆΩ |∇u| 2 |g(u j ) -g(u)| 2 w dz 1 2 . (6.20) The first term in the right-hand side converges to 0 since u j → u in W . Call I the second term; I is finite, since u ∈ W and |g(u j )g(u)| ≤ 1. Moreover, thanks to (6.14), ∇u = 0 a.e. on {u = 0}. So the square of I can be written I 2 = ˆ |u|>0 |∇u| 2 |g(u j ) -g(u)| 2 w dz. (6.21) Let x ∈ |u| > 0 be such that u j (x) converges to u(x) = 0; then there exists j 0 ≥ 0 such that for j ≥ j 0 the sign of u j (x) is the same as the sign of u(x). That is, g(u j )(x) converges to g(u)(x). Since u j → u a.e. in Ω, the previous argument implies that g(u j ) → g(u) a.e. in {|u| > 0}. Then I 2 converges to 0, by the dominated convergence theorem. Going back to (6.20), we obtain that u + j → u + in W , which concludes our proof of (6.18); Lemma 6.1 follows. Lemma 6.22. Let u, v ∈ W ∩ L ∞ (Ω). Then uv ∈ W ∩ L ∞ (Ω), ∇(uv) = v∇u + u∇v, and T (uv) = T u • T v. Proof. Without loss of generality, we can assume that u ∞ , v ∞ ≤ 1. The fact that uv ∈ L ∞ (Ω) is immediate. Let us now prove that uv ∈ W . According to Lemma 5.21, there exists two sequences (u j ) j∈N and (v j ) j∈N of functions in C ∞ (R n ) ∩ W such that u j → u and v j → v in W , in L 1 loc (R n ), and pointwise. Besides, the construction of u j , v j given by Lemma 5.21 allows us to assume that u j ∞ ≤ u ∞ ≤ 1 and v j ∞ ≤ v ∞ ≤ 1. The distributional derivative of u j v j equals the classical derivative, which is (6.23) ∇(u j v j ) = v j ∇u j + u j ∇v j . Since u j and v j lie in W , (6.23) says that u j v j ∈ W . 
The bound ˆB |u j v j -uv| ≤ ˆB |u j ||v j -v| + ˆB |v||u j -u| ≤ v j -v L 1 (B) + u j -u L 1 (B) , (6.24) which holds for any ball B ⊂ R n , shows that u j v j → uv in L 1 loc (R n ). Moreover, ˆB |(u j ∇v j + v j ∇u j ) -(u∇v + v∇u)| 2 w dz 1 2 ≤ ˆB |u j ∇v j -u∇v| 2 w dz 1 2 + ˆB |v j ∇u j -v∇u| 2 w dz 1 2 ≤ ˆB |u j | 2 |∇v j -∇v| 2 w dz 1 2 + ˆB |u j -u| 2 |∇v| 2 w dz 1 2 + ˆB |v j | 2 |∇u j -∇u| 2 w dz 1 2 + ˆB |v j -v| 2 |∇u| 2 w dz 1 2 . (6.25) The first and third terms in the right-hand side converge to 0 as j goes to +∞, because |u j |, |v j | ≤ 1 and since u j → u and v j → v in W . The second and forth terms also converge to 0 thanks to the dominated convergence theorem (and the fact that u j → u and v j → v pointwise a.e.). We deduce that ∇(u j v j ) = u j ∇v j +v j ∇u j → u∇v+v∇u in L 2 (Ω, w). By the uniqueness of the distributional derivative, ∇(uv) = u∇v + v∇u ∈ L 2 (Ω, w). In particular, uv ∈ W . Note that we also proved that u j v j → uv in W . It remains to prove that T (uv) = T u • T v. Since u j v j is continuous and u j v j → uv in W and L 1 loc (R n ), then by (5.18), u j v j = T (u j v j ) → T (uv) in L 1 loc (Γ, σ). Moreover, for any ball B centered on Γ, ˆB |u j v j -T u • T v|dσ ≤ ˆB |u j ||v j -T v|dσ + ˆB |u j -T u||T v|dσ ≤ ˆB |v j -T v|dσ + ˆB |u j -T u|dσ (6.26) where the last line holds because |u j | ≤ 1 and |T v| ≤ sup |v| ≤ 1, where the later bound either follows from Lemma 6.1 or is easily deduced from the definition of the trace. By construction, u j → u and v j → v in W and L 1 loc (R n ). Then by (5.18) the right-hand side terms in (6.26) converge to 0. We proved that u j v j converges in L 1 loc (Γ, σ) to both T (uv) and T u • T v. By uniqueness of the limit, T (uv) = T u • T v σ-a.e. in Γ. Lemma 6.22 follows. The extension operator The main point of this section is the construction of our extension operator E : H → W , which will be done naturally, with the Whitney extension scheme that uses dyadic cubes. Our main object will be a function g on Γ, that typically lies in H or in L 1 loc (Γ, σ). We start with the Lebesgue density result for g ∈ L 1 loc (Γ, σ) that was announced in the introduction. Lemma 7.1. For any g ∈ L 1 loc (Γ, σ) and σ-almost all x ∈ Γ, (7.2) lim r→0 Γ∩B(x,r) |g(y) -g(x)|dσ(y) = 0. Proof. Since (Γ, σ) satisfies (1.1), the space (Γ, σ) equipped with the metric induced by R n is a doubling space. Indeed, let B be a ball centered on Γ. According to (1.1), (7.3) σ(2B) ≤ 2 d C 0 r d ≤ 2 d C 2 0 r d σ(B). From there, the lemma is only a consequence of the Lebesgue differentiation theorem in doubling spaces (see for example [START_REF] Federer | Geometric measure theory[END_REF].9]). Remark 7.4. We claim that H ⊂ L 1 loc (Γ, σ), and hence (7.2) holds for g ∈ H and σ-almost every x ∈ Γ. Indeed, let B be a ball centered on Γ, then a brutal estimate yields (7.5) ˆB ˆB |g(x) -g(y)|dσ(x)dσ(y) ≤ C B ˆB ˆB |g(x) -g(y)| 2 dσ(x)dσ(y) 1 2 ≤ C B g H < +∞. Hence for σ-almost every x ∈ B ∩ Γ, ´B |g(x)g(y)|dσ(y) < +∞. In particular, since σ(B) > 0, there exists x ∈ B ∩ Γ such that ´B |g(x)g(y)|dσ(y) < +∞. We get that g ∈ L 1 (B, σ), and our claim follows. Let us now start the construction of the extension operator E : H → W . We proceed as for the Whitney extension theorem, with only a minor modification because averages will be easier to manipulate than specific values of g. We shall use the family W of dyadic Whitney cubes constructed as in the first pages of [Ste] and the partition of unity {ϕ Q }, Q ∈ W, that is usually associated to W. 
Recall that W is the family of maximal dyadic cubes Q (for the inclusion) such that 20Q ⊂ Ω, say, and the ϕ Q are smooth functions such that ϕ Q is supported in 2Q, 0 ≤ ϕ Q ≤ 1, |∇ϕ Q | ≤ Cdiam(Q) -1 , and Q ϕ Q = 1 Ω . Let us record a few of the simple properties of W. These are simple, but yet we refer to [Ste, Chapter VI] for details. It will be convenient to denote by r(Q) the side length of the dyadic cube Q. Also set δ(Q) = dist(Q, Γ). For Q ∈ W, we select a point ξ Q ∈ Γ such that dist(ξ Q , Q) ≤ 2δ(Q), and set (7.6) B Q = B(ξ Q , δ(Q)). If Q, R ∈ W are such that 2Q meets 2R, then r(R) ∈ { 1 2 r(Q), r(Q), 2r(Q)}; then we can easily check that R ⊂ 8Q. Thus R is a dyadic cube in 8Q whose side length is bigger than 1 2 r(Q); there exist at most 2 • 16 n dyadic cubes like this. This proves that there is a constant C = C(n) such that for Q ∈ W, (7.7) the number of cubes R ∈ W such that 2R ∩ 2Q = ∅ is at most C. The operator E is defined on functions in L 1 loc (Γ, σ) by (7.8) Eg(x) = Q∈W ϕ Q (x)y Q , where we set (7.9) y Q = B Q g(z)dσ(z), with B Q as in (7.6). For the extension of Lipschitz functions, for instance, one would take y Q = g(ξ Q ), but here we will use the extra regularity of the averages. Notice that Eg is continuous on Ω, because the sum in (7.8) is locally finite. Indeed, if x ∈ Ω and Q ∈ W contains x, (7.7) says that there are at most C cubes R ∈ W such that ϕ R does not vanish on 2Q; then the restriction of Eg to 2Q is a finite sum of continuous functions. Moreover, if g is continuous on Γ, then Eg is continuous on the whole R n . We refer to [START_REF] Stein | Singular integrals and differentiability properties of functions[END_REF]Proposition VI.2.2] for the easy proof. Theorem 7.10. For any g ∈ L 1 loc (Γ, σ) (and by Remark 7.4, this applies to g ∈ H), (7.11) T (Eg) = g σ-a.e. in Γ. Moreover, there exists C > 0 such that for any g ∈ H, (7.12) Eg W ≤ C g H . Proof. Let g ∈ L 1 loc be given, and set u = Eg. We start the proof with the verification of (7.11). Recall that by definition of the trace, (7.13) T (E(g))(x) = lim r→0 B(x,r) u for σ-almost every x ∈ Γ; we want to prove that this limit is g(x) for almost every x ∈ Γ, and we can restrict to the case when x is a Lebesgue point for g (as in (7.2)). Fix such an x ∈ Γ and r > 0. Set B = B(x, r), then (7.14) B(x,r) u -g(x) ≤ B |u(z) -g(x)|dz ≤ Cr -n R∈W(B) ˆR |u(z) -g(x)|dz, where we denote by W(B) the set of cubes R ∈ W that meet B. Let R ∈ W and z ∈ R be given. Recall from (7.8) that u(z) = Q∈W ϕ Q (z)y Q ; the sum has less than C terms, corresponding to cubes Q ∈ W such that z ∈ 2Q. If Q is such a cube, we have seen that 1 2 r(R) ≤ r(Q) ≤ 2r(R), and since δ(R) ≥ 10r(Q) because 20Q ⊂ Ω, a small computation with (7.6) yields that B Q ⊂ 100B R . Hence (7.15) |y Q -g(x)| = B Q gdσ -g(x) ≤ B Q |g -g(x)|dσ ≤ C 100B R |g -g(x)|dσ. Since u(z) is an average of such y Q , we also get that |u(z) -g(x)| ≤ C ffl 100B R |g -g(x) |dσ, and (7.14) yields (7.16) B(x,r) u -g(x) ≤ Cr -n R∈W(B) |R| 100B R |g -g(x)|dσ. Notice that δ(R) = dist(R, Γ) ≤ dist(R, x) ≤ r because R meets B = B(x, r) and x ∈ Γ, so, by definition of W, the sidelength of R is such that r(R) ≤ Cr. Let W k (B) be the collection of R ∈ W(B) such that r(R) = 2 k . For each k, the balls 100B R , R ∈ W k (B) have bounded overlap (because the cubes R are essentially disjoint and they have the same sidelength), and they are contained in B ′ = B(x, Cr). Thus R∈W k (B) |R| 100B R |g -g(x)|dσ ≤ C2 nk 2 -dk R∈W k (B) ˆ100B R |g -g(x)|dσ ≤ C2 (n-d)k ˆB′ |g -g(x)|dσ. 
(7.17) We may sum over k (because 2 k = r(R) ≤ Cr when R ∈ W k (B), and the exponent nd is positive). We get that (7.18) B(x,r) u -g(x) ≤ Cr -n k 2 (n-d)k ˆB′ |g -g(x)|dσ ≤ Cr -d ˆB′ |g -g(x)|dσ. If x is a Lebesgue point for g, (7.2) says that both sides of (7.18) tend to 0 when r tends to 0. Recall from (7.13) that for almost every x ∈ Γ, T (E(g))(x) is the limit of ffl B(x,r) u; if in addition x is a Lebesgue point, we get that T (E(g))(x) = g(x). This completes our proof of (7.11). Now we show that for g ∈ H, u ∈ W and even u W ≤ C g H . The fact that u is locally integrable in Ω is obvious (u is continuous there because the cubes 2Q have bounded overlap), and similarly the distribution derivative is locally integrable, and given by (7.19) ∇u(x) = Q∈W y Q ∇ϕ Q (x) = Q∈W [y Q -y R ]∇ϕ Q (x), where in the second part (which will be used later) we can pick for R any given cube (that may depend on x), for instance, one to the cubes of W that contains x, and the identity holds because Q ∇ϕ Q = ∇( Q ϕ Q ) = 0. Thus the question is merely the computation of u 2 W = ˆΩ |∇u(x)| 2 w(x)dx = R∈W ˆR |∇u(x)| 2 w(x)dx ≤ C R∈W δ(R) d+1-n ˆR |∇u(x)| 2 dx (7.20) (because w(x) = δ(x) d+1-n ≤ δ(R) d+1-n when x ∈ R). Fix R ∈ W, denote by W(R) the set of cubes Q ∈ W such that 2Q meets R, and observe that for x ∈ R, (7.21) |∇u(x)| ≤ Q∈W(R) [y Q -y R ]∇ϕ Q (x) ≤ Cδ(R) -1 Q∈W(R) y Q -y R because |∇ϕ Q (x)| ≤ Cδ(Q) -1 ≤ Cδ(R) -1 by definitions and the standard geometry of Whitney cubes. In turn, y Q -y R ≤ Γ∩B Q Γ∩B R |g(x) -g(y)|dσ(x)dσ(y) ≤ Γ∩B Q Γ∩B R |g(x) -g(y)| 2 dσ(x)dσ(y) 1/2 ≤ Cδ(R) -d ˆΓ∩B R ˆΓ∩100B R |g(x) -g(y)| 2 dσ(x)dσ(y) 1/2 (7.22) by (1.1) and because B Q ⊂ 100B R . Thus by (7.21) ˆR |∇u(x)| 2 dx ≤ C|R|δ(R) -2 δ(R) -2d ˆΓ∩B R ˆΓ∩100B R |g(x) -g(y)| 2 dσ(x)dσ(y) ≤ Cδ(R) n-2d-2 ˆΓ∩B R ˆΓ∩100B R |g(x) -g(y)| 2 dσ(x)dσ(y) (7.23) because W(R) has at most C elements. We multiply by δ(R) d+1-n , sum over R, and get that u 2 W ≤ C R∈W δ(R) -d-1 ˆΓ∩B R ˆΓ∩100B R |g(x) -g(y)| 2 dσ(x)dσ(y) ≤ C ˆΓ ˆΓ |g(x) -g(y)| 2 h(x, y)dσ(x)dσ(y), (7.24) where we set (7.25) h(x, y) = R δ(R) -d-1 , and we sum over R ∈ W such that x ∈ B R and y ∈ 100B R . Notice that |x -y| ≤ 101δ(R), so we only sum over R such that δ(R) ≥ |x -y|/101. Let us fix x and y, and evaluate h(x, r). For each scale (each value of diam(R)), there are less than C cubes R ∈ W that are possible, because x ∈ B R implies that dist(x, R) ≤ 3δ(R). So the contribution of the cubes for which diam(R) is of the order r is less than Cr -d-1 . We sum over the scales (larger than C -1 |x -y|) and get less than C|x -y| -d-1 . That is, h(x, y) ≤ C|x -y| -d-1 and (7.26) u 2 W ≤ C ˆΓ ˆΓ |g(x) -g(y)| 2 |x -y| d+1 dσ(x)dσ(y) = C g 2 H , as needed for (7.12). Theorem 7.10 follows. We end the section with the density in H of (traces of) smooth functions. Lemma 7.27. For every g ∈ H, we can find a sequence (v k ) k∈N in C ∞ (R n ) such that T v k converges to g in H in L 1 loc (Γ, σ) , and σ-a.e. pointwise. Notice that since v k is continuous across Γ, T v k is the restriction of v k to Γ, and we get the density in H of continuous functions on Γ, for the same three convergences. Proof. The quickest way to prove this will be to use Theorem 3.13, Theorem 7.10 and the results in Section 5. Let g ∈ H be given. Let ρ ǫ be defined as in Lemma 5.21, and set v ε = ρ ǫ * Eg and g ε = T v ε . Theorem 7.10 says that Eg ∈ W ; then by Lemma 5.21, v ε = ρ ǫ * Eg lies in C ∞ (R n ) ∩ W . We still need to check that g ε tends to g for the three types of convergence. 
By Lemma 5.21, v ε = ρ ǫ * Eg converges to Eg in L 1 loc (R n ) and in W , and then (5.18) implies that g ǫ = T v ε tends to g = T (Eg) in L 1 loc (Γ, σ). The convergence in H is the consequence of the bounds (7.28) g -g ǫ H ≤ T (Eg -v ǫ ) H ≤ C Eg -v ǫ W that come from Theorem 7.10, plus the fact that the right-hand side converges to 0 thanks to Lemma 5.21. For the a.e. pointwise convergence, let us cheat slightly: we know that the g ε converge to g in L 1 loc (Γ, σ); we can then use the diagonal process to extract a sequence of g ε that converges pointwise a.e. to g, which is enough for the lemma. Definition of solutions The aim of the following sections is to define the harmonic measure on Γ. We follow the presentation of Kenig [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Sections 1.1 and 1.2]. In addition to W , we introduce a local version of W . Let E ⊂ R n be an open set. The set of function W r (E) is defined as (8.1) W r (E) = {f ∈ L 1 loc (E), ϕf ∈ W for all ϕ ∈ C ∞ 0 (E)} where the function ϕf is seen as a function on R n (since ϕf is compactly supported in E, it can be extended by 0 outside E). The inclusion W ⊂ W r (E) is given by Lemma 5.24. Let us discuss a bit more about our newly defined spaces. First, we claim that (8.2) W r (E) ⊂ {f ∈ L 1 loc (E), ∇f ∈ L 2 loc (E, w)} , where here ∇f denotes the distributional derivative of f in E. To see this, let f ∈ W r (E) be given; we just need to see that ∇f ∈ L 2 (K, w) for any relatively compact open subset K of E. Pick ϕ ∈ C ∞ 0 (E) such that ϕ ≡ 1 on K, and observe that ϕf ∈ W by (8.1), so Lemma 3.3 says that ϕf has a distribution derivative (on R n ) that lies in L 2 (R n , w). Of course the two distributions ∇f and ∇(ϕf ) coincide near K, so ∇f ∈ L 2 (K, w) and our claim follows. The reverse inclusion W r (E) ⊃ {f ∈ L 1 loc (E), ∇f ∈ L 2 loc (E, w)} surely holds, but we will not use it. Note that thanks to Lemma 3.3, we do not need to worry, even locally as here, about the difference between having a derivative in Ω ∩ E that lies in L 2 loc (E, w) and the apparently stronger condition of having a derivative in E that lies in L 2 loc (E, w). Also note that W r (R n ) = W ; the difference is that W demands some decay of ∇u at infinity, while W r (R n ) doesn't. Lemma 8.3. Let E ⊂ R n be an open set. For every function u ∈ W r (E), we can define the trace of u on Γ ∩ E by (8.4) T u(x) = lim r→0 B(x,r) u for σ-almost every x ∈ Γ ∩ E, and T u ∈ L 1 loc (Γ ∩ E, σ). Moreover, for every choice of f ∈ W r (E) and ϕ ∈ C ∞ 0 (E), (8.5) T (ϕu)(x) = ϕ(x)T u(x) for σ-almost every x ∈ Γ ∩ E. Proof. The existence of lim r→0 ffl B(x,r) u is easy. If B is any relatively compact ball in E, we can pick ϕ ∈ C ∞ 0 (E) such that ϕ ≡ 1 near B. Then ϕu ∈ W , and the analogue of (8.4) for ϕu comes with the construction of the trace. This implies the existence of the same limit for f , almost everywhere in Γ ∩ B. Next we check that T u ∈ L 1 loc (Γ ∩ E, σ). Let K be a compact set in E; we want to show that T u ∈ L 1 (K ∩ Γ, σ). Take ϕ ∈ C ∞ 0 (E) such that ϕ ≡ 1 on K. Then ϕu ∈ W by definition of W r (E) and thus (8.6) T u L 1 (K∩Γ,σ) ≤ T [ϕu] L 1 (K∩Γ,σ) ≤ C K ϕu W < +∞ by (3.30). Let us turn to the proof of (8.5). Take ϕ ∈ C ∞ 0 (E) and then choose φ ∈ C ∞ 0 (E) such that φ ≡ 1 on supp ϕ. According to Lemma 5.24, T (ϕφu)(x) = ϕ(x)T (φu)(x) for almost every x ∈ Γ. The result then holds by noticing that ϕφu = ϕu (i.e. T (ϕφu)(x) = T (ϕu)(x)) and φu = u on supp ϕ (i.e. 
ϕ(x)T (φu)(x) = ϕ(x)T (u)(x)). Let us remind the reader that we will be working with the differential operator L = div A∇, where A : Ω → M n (R) satisfies, for some constant C 1 ≥ 1, • the boundedness condition (8.7) |A(x)ξ • ν| ≤ C 1 w(x)|ξ| • |ν| ∀x ∈ Ω, ξ, ν ∈ R n ; • the ellipticity condition (8.8) A(x)ξ • ξ ≥ C -1 1 w(x)|ξ| 2 ∀x ∈ Ω, ξ ∈ R n . We denote the matrix w -1 A by A, so that ´Ω A∇u • ∇v = ´Ω A∇u • ∇v dm. The matrix A satisfies the unweighted elliptic and boundedness conditions, that is (8.9) |A(x)ξ • ν| ≤ C 1 |ξ| • |ν| ∀x ∈ Ω, ξ, ν ∈ R n , and (8.10) A(x)ξ • ξ ≥ C -1 1 |ξ| 2 ∀x ∈ Ω, ξ ∈ R n . Let us introduce now the bilinear form a defined by (8.11) a(u, v) = ˆΩ A∇u • ∇v = ˆΩ A∇u • ∇v dm. From (8.9) and (8.10), we deduce that a is a bounded on W × W and coercive on W (hence also on W 0 ). That is, (8.12) a(u, u) = ˆΩ A∇u • ∇u dm ≥ C -1 1 ˆΩ |∇u| 2 dm = C -1 1 u 2 W for u ∈ W , by (8.10). Definition 8.13. Let E ⊂ Ω be an open set. We say that u ∈ W r (E) is a solution of Lu = 0 in E if for any ϕ ∈ C ∞ 0 (E), (8.14) a(u, ϕ) = ˆΩ A∇u • ∇ϕ = ˆΩ A∇u • ∇ϕ dm = 0. We say that u ∈ W r (E) is a subsolution (resp. supersolution) in E if for any ϕ ∈ C ∞ 0 (E) such that ϕ ≥ 0, (8.15) a(u, ϕ) = ˆΩ A∇u • ∇ϕ = ˆΩ A∇u • ∇ϕ dm ≤ 0 (resp. ≥ 0). In particular, subsolutions and supersolutions are always associated to the equation Lu = 0. In the same way, each time we say that u is a solution in E, it means that u is in W r (E) and is a solution of Lu = 0 in E. We start with the following important result, that extends the possible test functions in the definition of solutions. Lemma 8.16. Let E ⊂ Ω be an open set and let u ∈ W r (E) be a solution of Lu = 0 in E. Also denote by E Γ is the interior of E ∪ Γ. The identity (8.14) holds: • when ϕ ∈ W 0 is compactly supported in E; • when ϕ ∈ W 0 is compactly supported in E Γ and u ∈ W r (E Γ ); • when E = Ω, ϕ ∈ W 0 , and u ∈ W . In addition, (8.15) holds when u is a subsolution (resp. supersolution) in E and ϕ is a non-negative test function satisfying one of the above conditions. Remark 8.17. The second statement of the Lemma will be used in the following context. Let B ⊂ R n be a ball centered on Γ and let u ∈ W r (B) be a solution of Lu = 0 in B \ Γ. Then we have (8.18) a(u, ϕ) = ˆΩ A∇u • ∇ϕ dm = 0 for any ϕ ∈ W 0 compactly supported in B. Similar statements can be written for subsolutions and supersolutions. Proof. Let u ∈ W r (E) be a solution of Lu = 0 on E and let ϕ ∈ W 0 be compactly supported in E. We want to prove that a(u, ϕ) = 0. Let E be an open set such that supp ϕ compact in E and E is relatively compact in E. By Lemma 5.21, there exists a sequence (ϕ k ) k≥1 of functions in C ∞ 0 ( E) such that ϕ k → ϕ in W . Observe that the map (8.19) φ → a E (u, φ) = ˆ E A∇u • ∇φ dm is bounded on W thanks to (8.9) and the fact that ∇u ∈ L 2 ( E, w) (see (8.2)). Then, since ϕ and the ϕ k are supported in E, (8.20) a(u, ϕ) = a E (u, ϕ) = lim k→+∞ a E (u, ϕ k ) = lim k→+∞ a(u, ϕ k ) = 0 by (8.14). Now let u ∈ W r (E Γ ) be a solution of Lu = 0 on E and let ϕ ∈ W 0 be compactly supported in E Γ . We want to prove that a(u, ϕ) = 0. Let E Γ be an open set such that supp ϕ is compact in E Γ and E Γ is relatively compact in E Γ . If we look at the proof of Lemma 5.30 (that uses cut-off functions and the smoothing process given by Lemma 5.21), we can see that our ϕ ∈ W 0 can be approached in W by functions ϕ k ∈ C ∞ 0 ( E Γ \ Γ). 
In addition, the map (8.21) φ → a E Γ (u, φ) = ˆ E Γ A∇u • ∇φ dm is bounded on W thanks to (8.9) and the fact that ∇u ∈ L 2 ( E Γ , w) (that holds because u ∈ W r (E Γ )). Then, as before, (8.22) a(u, ϕ) = a E Γ (u, ϕ) = lim k→+∞ a E Γ (u, ϕ k ) = lim k→+∞ a(u, ϕ k ) = 0. The proof of the last point, that is a(u, ϕ) = 0 if u ∈ W and ϕ ∈ W 0 , works the same way as before. This time, we use the facts that Lemma 5.21 gives an approximation of ϕ by functions in C ∞ 0 (Ω) and that φ → a(u, φ) is bounded on W . Finally, the cases where u is a subsolution or a supersolution have a similar proof. We just need to observe that the smoothing provided by Lemma 5.21 conserves the non-negativity of a test function. The first property that we need to know about sub/supersolution is the following stability property. Lemma 8.23. Let E ⊂ Ω be an open set. • If u, v ∈ W r (E) are subsolutions in E, then t = max{u, v} is also a subsolution in E. • If u, v ∈ W r (E) are supersolutions in E, then t = min{u, v} is also a supersolution in E. In particular if k ∈ R, then (u -k) + := max{u -k, 0} is a subsolution in E whenever u ∈ W r (E) is a subsolution in E and min{u, k} is a supersolution in E whenever u ∈ W r (E) is a supersolution in E. Proof. It will be enough to prove the the first statement of the lemma, i.e., the fact that t = max{u, v} is a subsolution when u and v are subsolutions. Indeed, the statement about supersolutions will follow at once, because it is easy to see that u ∈ W r (E) is a supersolution if and only -u is a subsolution. The remaining assertions are then straightforward consequences of the first ones (because constant functions are solutions). So we need to prove the first part, and fortunately it will be easy to reduce to the classical situation, where the desired result is proved in [START_REF] Stampacchia | Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus[END_REF]Theorem 3.5]. We need an adaptation, because Stampacchia's proof corresponds to the case where the subsolutions u, v lie in W , and also we want to localize to a place where w is bounded from above and below. Let F ⊂ E be any open set with a smooth boundary and a finite number of connected components, and whose closure is compact in E. We define a set of functions W F as (8.24) W F = {f ∈ L 1 loc (F ), ∇f ∈ L 2 (F, w)}. Let us record a few properties of W F . Since F is relatively compact in E ⊂ Ω, the weight w is bounded from above and below by a positive constant. Hence W F is the collection of functions in L 1 loc (F ) whose distributional derivative lies in L 2 (F ). Since F is bounded and has a smooth boundary, these functions lie in L 2 (F ) (see [START_REF] Maz | Sobolev spaces with applications to elliptic partial differential equations[END_REF]Corollary 1.1.11]). Of course Mazya states this when F is connected, but we here F has a finite number of components, and we can apply the result to each one. So W F is the 'classical' (where the weight is plain) Sobolev space on F . That is, (8.25) W F = {f ∈ L 2 (F ), ∇f ∈ L 2 (F )}. Notice that u and v lie in W F , so they are "classical" subsolutions of L in F , where (since F is relatively compact in E ⊂ Ω) w is bounded and bounded below, and hence L is a classical elliptic operator. Then, by [START_REF] Stampacchia | Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus[END_REF]Theorem 3.5], t = max{u, v} is also a classical subsolution in F . 
This means that a(t, ϕ) ≤ 0 for ϕ ∈ C ∞ 0 (F ). Now we wanted to prove this for every ϕ ∈ C ∞ 0 (E), and it is enough to observe that if ϕ ∈ C ∞ 0 (E) is given, then we can find an open set F ⊂⊂ E that contains the support of ϕ, and with the regularity properties above. Hence t is a subsolution in E, and the lemma follows. It was fortunate for this argument that the notion of subsolution does not come with precise estimates that would depend on w. In the sequel, the notation sup and inf are used for the essential supremum and essential infimum, since they are the only definitions that makes sense for the functions in W or in W r (E), E ⊂ R n open. Also, when we talk about solutions or subsolutions and don't specify, this will always refer to our fixed operator L. We now state some classical regularity results inside the domain. In particular, if B is a ball of radius r such that 2B ⊂ Ω and u ∈ W r (2B) is a non-negative subsolution in 2B, then (8.28) ˆB |∇u| 2 dm ≤ Cr -2 ˆ2B u 2 dm. Proof. Let α ∈ C ∞ 0 (E). We set ϕ = α 2 u. Since u ∈ W r (E), u ≤ C 1 m(2B) ˆ2B u p dm 1 p , where C depends on n, d, C 1 and p. Proof. For this lemma and the next ones, we shall use the fact that since 2B is far from Γ, our weight w is under control there, and we can easily reduce to the classical case. Let x and r denote the center and the radius of B. Since 3B ⊂ Ω, δ(x) ≥ 3r. For any z ∈ 2B, δ(x) -2r ≤ δ(z) ≤ δ(x + 2r), hence (8.36) 1 3 ≤ 1 - 2r δ(x) ≤ δ(z) δ(x) ≤ 1 + 2r δ(x) ≤ 5 3 and consequently (8.37) C -1 n,d w(x) ≤ w(z) ≤ C n,d w(x). Let u ∈ W r (3B) be a non-negative subsolution in 2B. Thanks to (8.2) and (8.37), the gradient ∇u lies in L 2 (2B). By the Poincaré's inequality, u ∈ L 2 (2B) and thus u lies in the classical (with no weight) Sobolev space W 2B of (8.25). Consider the differential operator L =div A∇ with A(z) = A(z) w(z) w(x) . Thanks to (8.37), (8.9) and (8.10), A(z) satisfies the elliptic condition and the boundedness condition (8.9) and (8.10), in the domain 2B, and with the constant C n,d C 1 . The condition satisfied by a subsolution (of Lu = 0) on 2B can be rewritten (8.38) ˆ2B A∇u • ∇ϕ ≤ 0, and so we are back in the situation of the classical elliptic case. By [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Lemma 1.1.8 u ≤ C r R α 1 m(B(x, R)) ˆB(x,R) u 2 dm 1 2 , where α and C depend only on n, d, and C 1 . Hence u is (possibly after modifying it on a set of measure 0) locally Hölder continuous with exponent α. Proof. This lemma and the next one follow from the classical results (see for instance [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Section 1.1], or [START_REF] Gilbarg | Elliptic partial differential equations of second order[END_REF]Sections 8.6,8.8 and 8.9]), by the same trick as for Lemma 8.34: we observe that L is a constant times a classical elliptic operator on 2B. Lemma 8.42 (Harnack). Let B be a ball such that 3B ⊂ Ω, and let u ∈ W r (3B) be a non-negative solution in 3B. Then (8.43) sup B u ≤ C inf B u, where C depends only on n, d and C 1 . For the next lemma, we shall need the Harnack tubes from Lemma 2.1. Lemma 8.44. Let K be a compact set of Ω and let u ∈ W r (Ω) be a non-negative solution in Ω. Then (8.45) sup K u ≤ C K inf K u, where C K depends only on n, d, C 0 , C 1 , dist(K, Γ) and diam K. Proof. Let K be a compact set in Ω. We can find r > 0 and k ≥ 1 such that dist(K, Γ) ≥ r and diam K ≤ kr. Now let x, y ∈ K be given. 
Notice that δ(x) ≥ r, δ(y) ≥ r and |x -y| ≤ kr, so Lemma 2.1 implies the existence of a path of length at most by (k + 1)r that joins x to y and stays at a distance larger than some ǫ (that depends on C 0 , d, n, r and k) of Γ. That is, we can find a finite collection of balls B 1 , . . . , B n (n bounded uniformly on x, y ∈ K) such that 3B i ⊂ Ω, B 1 is centered on x, B n is centered on y, and B i ∩ B i+1 = ∅. It remains to use n times Lemma 8.42 to get that (8.46) u(x) ≤ C n u(y) ≤ C K u(y). Lemma 8.44 follows. We also need analogues at the boundary of the previous results. For these we cannot immediately reduce to the classical case, but we will be able to copy the proofs. Of course we shall use our trace operator to define boundary conditions, say, in a ball B, and this is the reason why we want to use the space is W r (B) defined by (8.1). We cannot use W r (B \ Γ) instead, because we need some control on u near Γ to define T (u). In the sequel, we will use the expression 'T u = 0 a.e. on B', for a function u ∈ W r (B), to mean that T u, which is defined on Γ ∩ B and lies in L 1 loc (B ∩ Γ, σ) thanks to Lemma 8.3, is equal to 0 σ-almost everywhere on Γ ∩ B. The expression 'T u ≥ 0 a.e. on B' is defined similarly. We start with the Caccioppoli inequality on the boundary. Lemma 8.47 (Caccioppoli inequality on the boundary). Let B ⊂ R n be a ball of radius r centered on Γ, and let u ∈ W r (2B) be a non-negative subsolution in 2B \ Γ such that T (u) = 0 a.e. on 2B. Then for any α ∈ C ∞ 0 (2B), (8.48) ˆ2B α 2 |∇u| 2 dm ≤ C ˆ2B |∇α| 2 u 2 dm, where C depends only on the dimensions n and d and the constant C 1 . In particular, we can take α ≡ 1 on B and |∇α| ≤ 2 r , which gives (8.49) ˆB |∇u| 2 dm ≤ Cr -2 ˆ2B u 2 dm. Proof. We can proceed exactly as for Lemma 8.26, except that the initial estimate (8.29) needs to be justified differently. Here we choose to apply the second item of Lemma 8.16, as explained in Remark 8.17. That is, E = 2B \ Γ and E Γ = 2B. So we check the assumptions. We set, as before, ϕ = α 2 u. First observe that ϕ ∈ W because u ∈ W r (2B) and α ∈ C ∞ 0 (2B). Moreover, ϕ ∈ W 0 because, if we let φ ∈ C ∞ 0 ( 2B) be such that φ ≡ 1 on a neighborhood of supp α, Lemma 5.24 says that T (ϕ) = T (α 2 φu) = α 2 T (φu) = 0 a.e. on Γ. In addition, ϕ is compactly supported in 2B because α is, and u ∈ W r (2B) by assumption. Thus ϕ is a valid test function, Lemma 8.16 applies, (8.29) holds, and the rest of the proof is the same as for Lemma 8.26. Lemma 8.50 (Moser estimates on the boundary). Let B be a ball centered on Γ. Let u ∈ W r (2B) be a non-negative subsolution in 2B \ Γ such that T u = 0 a.e. on 2B. Then (8.51) sup B u ≤ C m(2B) -1 ˆ2B u 2 dm 1 2 , where C depends only on the dimensions d and n and the constants C 0 and C 1 . Proof. This proof will be a little longer, but we will follow the ideas used by Stampacchia in [START_REF] Stampacchia | Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus[END_REF]Section 5]. The aim is to use the so-called Moser iterations. We start with some consequences of Lemma 8.47. Pick 2 * ∈ (2, +∞) in the range of p satisfying the Sobolev-Poincaré inequality (4.34); for instance take 2 * = 2n n-1 . Let u be as in the statement and let B = B(x, r) be a ball centered on Γ. We claim that for any α ∈ C ∞ 0 (2B), (8.52) ˆ2B (αu) 2 dm ≤ Cr 2 m(supp αu) 1-2 2 * m(2B) 2 2 * -1 ˆ2B |∇α| 2 u 2 dm where in fact we abuse notation and set supp αu = {αu > 0}. 
Indeed, by Hölder's inequality and the Sobolev-Poincaré inequality (4.34), ˆRn (αu) 2 dm ≤ Cm(supp αu) 1-2 2 * ˆ2B (αu) 2 * dm 2 2 * ≤ Cr 2 m(supp αu) 1-2 2 * m(2B) 2 2 * -1 ˆ2B |∇[αu]| 2 dm. (8.53) The last integral can be estimated, using Caccioppoli's inequality (Lemma 8.47), by ˆ2B |∇(αu)| 2 dm ≤ 2 ˆ2B |∇α| 2 u 2 dm + 2 ˆ2B |∇u| 2 α 2 dm ≤ C ˆ2B |∇α| 2 u 2 dm. (8.54) Our claim claim (8.52) follows. Recall that B = B(x, r), with x ∈ Γ. Since u is a subsolution in 2B \ Γ, Lemma 8.23 says that (uk) + := max{uk, 0} is a non-negative subsolution in 2B \ Γ. For any 0 < s < t ≤ 2r, we choose a smooth function α supported in B(x, t), such that 0 ≤ α ≤ 1, α ≡ 1 on B(x, s), and |∇α| ≤ 2 t-s . By (8.52) (applied to (uk) + and this function α), (8.55) ˆA(k,s) |u -k| 2 dm ≤ C r 2 (t -s) 2 m(A(k, t)) 1-2 2 * m(2B) 2 2 * -1 ˆA(k,t) |u -k| 2 dm where A(k, s) = {y ∈ B(x, s), u(y) > k}. If h > k, we have also, (8.56) (h -k) 2 m(A(h, s)) ≤ ˆA(h,s) |u -k| 2 dm ≤ ˆA(k,        u(k, s) ≤ Cr 2 m(2B) 2 2 * -1 (t -s) 2 u(k, t)[a(k, t)] 1-2 2 * a(h, s) ≤ 1 (h -k) 2 u(k, t) or, if we set κ = 1 -2 2 * > 0, (8.60)        u(k, s) ≤ Cr 2 m(2B) -κ (t -s) 2 u(k, t)[a(k, t)] κ a(h, s) ≤ 1 (h -k) 2 u(k, t). Notice also that u(h, s) ≤ u(k, s) because A(h, s) ⊂ A(k, s) and |u-h| 2 ≤ |u-k| 2 on A(h, s). Let ǫ > 0 be given, to be chosen later. The estimates (8.60) yield (8.61) u(h, s) ǫ a(h, s) ≤ u(k, s) ǫ a(h, s) ≤ Cr 2ǫ m(2B) -ǫκ (t -s) 2ǫ (h -k) 2 u(k, t) ǫ+1 a(k, t) ǫκ . Following [Sta], we define a function of two variables ϕ by (8.62) ϕ(h, s) = u(h, s) ǫ a(h, s) for h > 0 and 0 < s < 2r. Notice that ϕ(h, s) ≥ 0. When s is fixed, ϕ(h, s) is non increasing in h, and when h is fixed, ϕ(h, s) is non decreasing in s. We want to show that (8.63) ϕ(h, s) ≤ K (h -k) α (t -s) γ [ϕ(k, t)] β for some choice of positive constants K, α and γ, and some β > 1, because if we do so we shall be able to use Lemma 5.1 in [Sta] directly. It is a good idea to choose ǫ so that (8.64) βǫ = ǫ + 1, β = ǫκ. for some β > 1. Choose β = 1 2 + 1 4 + κ > 1 and ǫ = β κ > 0. An easy computation proves that (ǫ, β) satisfies (8.64). With this choice, (8.61) becomes (8.65) ϕ(h, s) ≤ Cr 2ǫ m(2B) -ǫκ (t -s) 2ǫ (h -k) 2 ϕ(k, t) β , which is exactly (8.63) with K = Cr 2ǫ m(2B) -ǫκ , α = 2 and γ = 2ǫ. So we can apply Lemma 5.1 in [Sta], which says that (8.66) ϕ(d, r) = 0, where d is given by (8.67) d α = 2 β α+β β-1 K[ϕ(0, 2r)] β-1 r γ . We replace and get that we can take (8.68) d 2 = Cr 2ǫ m(2B) -ǫκ ϕ(0, 2r) β-1 r γ = Cm(2B) -ǫκ ϕ(0, 2r) β-1 . Notice that ϕ(d, r) = 0 implies that a(d, r) = 0, which in turn implies that u ≤ d a.e. on B = B(x, r). Moreover, by definition of a, we have a(0, 2r) ≤ m(2B). Thus sup B(x,r) u ≤ d ≤ Cm(2B) -ǫκ/2 u(0, 2r) (β-1)ε/2 a(0, 2r) (β-1)/2 ≤ Cu(0, 2r) ǫ(β-1)/2 m(2B) (β-1-ǫκ)/2 . (8.69) The first line in (8.64) yields ǫ(β -1) = 1 and the second line in (8.64) yields β -1-ǫκ = -1. Besides, u(0, 2r) = ´2B u 2 dm because u is nonnegative. Hence (8.70) sup B u ≤ C m(2B) -1 ˆ2B u 2 dm 1 2 , which is the desired conclusion. Proof. Lemma 8.71 can be deduced from Lemma 8.50 by a simple iterative argument. The proof is fairly similar to the very end of the proof of [HL, Chapter IV, Theorem 1.1]. Nevertheless, because the proof in [HL] doesn't hold at the boundary (and for the sake of completeness), we give a proof here. First, let us prove that we can improve (8.51) into the following: if B is a ball centered on Γ and u ∈ W r (B) is a non-negative subsolution on B ∩ Ω such that T u = 0 a.e. 
on B, then for any θ ∈ (0, 1) (in practice, close to 1), (8.73) sup θB u ≤ C(1 -θ) -n 2 m(B) -1 ˆB u 2 dm 1 2 , where C > 0 depends only on n, d, C 0 and C 1 . Let B be a ball centered on Γ, with radius r, and let θ ∈ (0, 1). Choose x ∈ θB. Two cases may happen: either δ(x) ≥ 1-θ 6 r or δ(x) < 1-θ 6 r. In the first case, if δ(x) ≥ 1-θ 6 r, we apply Lemma 8.34 to the ball B(x, 1-θ 20 r) (notice that B(x, 1-θ 10 r) ⊂ B ∩ Ω). We get that u(x) ≤ C 1 m(B(x, 1-θ 10 r)) ˆB(x, 1-θ 10 r) u 2 dm 1 2 ≤ C m(B(x, 2r)) m(B(x, 1-θ 10 r) 1 2 1 m(B) ˆB u 2 dm 1 2 ≤ C(1 -θ) -n 2 m(B) -1 ˆB u 2 dm 1 2 (8.74) by (2.12). In the second case, when δ(x) ≤ 1-θ 6 r, we take y ∈ Γ such that |x -y| = δ(x). Remark that y ∈ 1+θ 2 B and then B(y, 1-θ 2 r) ⊂ B. We apply then Lemma 8.50 to the ball B(y, 1-θ 6 r) in order to get u(x) ≤ sup B(y, 1-θ 6 r) u ≤ C 1 m(B(x, 1-θ 3 r)) ˆB(x, 1-θ 3 r) u 2 dm 1 2 ≤ C m(B(x, 2r)) m(B(x, 1-θ 3 r) 1 2 1 m(B) ˆB u 2 dm 1 2 ≤ C(1 -θ) -n 2 m(B) -1 ˆB u 2 dm 1 2 (8.75) with (2.12). The claim (8.73) follows. Let us prove now (8.72). Without loss of generality, we can restrict to the case p < 2, since the case p ≥ 2 can be deduced from Lemma 8.50 and Hölder's inequality. Let B = B(x, r) be a ball and let u ∈ W r (2B) be a non-negative subsolution on 2B \ Γ such that T u = 0 on 2B. Set for i ∈ N, r i := r i j=0 3 -j = 3 2 r(1 -3 -i-1 ) < 3 2 r. Note that r i r i -r i-1 = 3 i+1 -1 2 ≤ 3 i+1 . As a consequence, for any i ∈ N * , (8.73) yields sup B(x,r i-1 ) u ≤ C3 in 2 1 m(B(x, r i )) ˆB(x,r i ) |u| 2 dm 1 2 ≤ C3 in 2 sup B(x,r i ) u 1-p 2 1 m(B(x, r i )) ˆB(x,r i ) |u| p dm 1 2 ≤ C3 in 2 sup B(x,r i ) u 1-p 2 1 m(2B) ˆB(x,r i ) |u| p dm 1 2 . (8.76) Set α = 1-p 2 . By taking the power α i-1 of the inequality (8.76), where i is a positive integer, we obtain (8.77) sup B(x,r i-1 ) u α i-1 ≤ C α i-1 (3 in 2 ) α i-1 sup B(x,r i ) u α i m(2B) -1 ˆB(x,r i ) |u| p dm 1 2 α i , where C is independent of i (and also p, x, r and u). An immediate induction gives, for any i ≥ 1, (8.78) sup B(x,r) u ≤ C i-1 j=0 α j i j=1 3 jn 2 α j-1 sup B(x,r i ) u α i m(2B) -1 ˆB(x,r i ) |u| p dm 1 2 i-1 j=0 α j , and if we apply Corollary 8.51 once more, we get that (8.79) sup B u ≤ C i j=0 α j i+1 j=1 3 jn 2 α j-1 m(2B) -1 ˆ2B |u| p dm 1 2 i-1 j=0 α j m(2B) -1 ˆ3 2 B |u| 2 dm α i 2 . We want to to take the limit when i goes to +∞. Since u ∈ W r (2B), the quantity ´3 2 B |u| 2 dm is finite and thus (8.80) lim i→+∞ m(2B) -1 ˆ3 2 B |u| 2 dm α i 2 = 1 because we took p such that α = 1 -p 2 < 1. Note also that (8.81) lim i→+∞ i-1 j=0 α j = 2 p and lim i→+∞ 1 2 i-1 j=0 α j = 1 p . Furthermore, i+1 j=1 3 jn 2 α j-1 has a limit (that depends on p and n) when j < +∞ because, (8.82) ∞ j=1 jn 2 α j-1 = n 2 +∞ j=1 jα j-1 < +∞. These three facts prove that the limit when i → +∞ of the right-hand side of (8.79) exists and (8.83) sup B u ≤ C p m(2B) -1 ˆ2B |u| p dm 1 p , which is the desired result. Next comes the Hölder continuity of the solutions at the boundary. We start with a boundary version of the density property. Lemma 8.84. Let B be a ball centered on Γ and u ∈ W r (4B) be a non-negative supersolution in 4B \ Γ such that T u = 1 a.e. on 4B. Then (8.85) inf B u ≥ C -1 , where C > 0 depends only on the dimensions d, n and the constants C 0 , C 1 . Proof. The ideas of the proof are taken from the Density Theorem (Section 4.3, Theorem 4.9) in [HL]. 
The result in [HL] states, roughly speaking, that (8.85) holds whenever u is a supersolution in 4B ⊂ Ω such that u ≥ 1 on a large piece of B; and its proof relies on a Poincaré inequality on balls for functions that equal 0 on a big piece of the considered ball. We will adapt this argument to the case where B is centered on Γ and we will rely on the Poincaré inequality given by Lemma 4.1. Let B and u be as in the statement. Let δ ∈ (0, 1) be small (it will be used to avoid some functions to take the value 0) and set u δ = min{1, u + δ} and v δ := -Φ δ (u δ ), where Φ is a smooth Lipschitz function defined on R such that Φ δ (s) =ln(s) when s ∈ [δ, 1]. The plan of the proof is: first we prove that v δ is a subsolution, and then we use the Moser estimate and the Poincaré inequality given Lemma 8.50 and 4.1 respectively. It will give that the supremum of v δ on B is bounded by the L 2 -norm of the gradient of v δ . Then, we will test the supersolution u δ against an appropriate test function, which will give that the L 2 (2B) bound on ∇v δ -and thus the supremum of v δ on B -can be bounded by a constant independent of δ. This will yield a lower bound on u δ (x) which is uniform in δ and x ∈ B. So we start by proving that (8.86) v δ ∈ W r (4B) is a subsolution in 4B \ Γ such that T v δ = 0 a.e. on 4B. Let ϕ ∈ C ∞ 0 (Ω ∩ 4B). Choose φ ∈ C ∞ 0 (Ω ∩ 4B ) such that φ ≡ 1 on supp ϕ. Then for y ∈ Ω, (8.87) v δ (y)ϕ(y) = Φ δ (min{1, (u(y) + δ)φ(y)})ϕ(y). Since u ∈ W r (4B), it follows that uφ ∈ W and thus (u + δ)φ ∈ W . Consequently, we obtain min{1, (u + δ)φ} ∈ W by Lemma 6.1 (b), then Φ δ (min{1, (u + δ)φ}) ∈ W by Lemma 6.1 (a) and finally v δ ϕ ∈ W thanks to Lemma 5.24. Hence v δ ∈ W r (4B). Using the fact that the trace is local and Lemmata 6.1 and 8.3, it is clear that (8.88) T v δ =ln(min{1, T (uφ) + δ}) = 0 a.e. on 4B. The claim (8.86) will be proven if we can show that v δ is a subsolution in 4B \ Γ. Let ϕ ∈ C ∞ 0 (4B \ Γ) be a non-negative function. We have ˆΩ A∇v δ • ∇ϕ dm = - ˆΩ A∇u δ u δ • ∇ϕ dm = -ˆΩ A∇u δ • ∇ ϕ u δ dm - ˆ4B A∇u δ • ∇u δ u 2 δ ϕ dm. (8.89) The second term in the right-hand side is non-positive by the ellipticity condition (8.10). So v δ is a subsolution if we can establish that (8.90) ˆΩ A∇u δ • ∇ ϕ u δ dm ≥ 0. Yet u δ is a supersolution according to Lemma 8.23. Moreover ϕ/u δ is compactly supported in 4B \ Γ and, since u δ ≥ δ > 0, we deduce from Lemma 6.1 that ϕ u δ ∈ W . So (8.90) is just a consequence of Lemma 8.16. The claim (8.86) follows. The function v δ satisfies now all the assumptions of Lemma 8.50 and thus (8.91) sup B v δ ≤ C m(2B) -1 ˆ2B |v δ | 2 dm 1 2 . Since T v δ = 0 a.e. on 2B, the right-hand side can be bounded with the help of (4.15), which gives (8.92) sup B v δ ≤ Cr m(2B) -1 ˆ2B |∇v δ | 2 dm 1 2 . We will prove that the right-hand side of (8.92) is bounded uniformly in δ. Use the test function ϕ = α 2 1 u δ -1 with α ∈ C ∞ 0 (4B), 0 ≤ α ≤ 1, α ≡ 1 on 2B and ∇α ≤ 1 r . Note that ϕ is a non-negative function compactly supported in 4B and, by Lemma 6.1, ϕ is in W and has zero trace, that is ϕ ∈ W 0 . Since u is a supersolution, u is also a supersolution. 
We test u δ against ϕ (this is allowed, thanks to Lemma 8.16) and we get 0 ≤ ˆRn A∇u δ • ∇ α 2 1 u δ -1 dm = - ˆRn α 2 A∇u δ • ∇u δ u 2 δ dm + 2 ˆRn α (1 -u δ ) A∇u δ • ∇α u δ dm, (8.93) hence, by the ellipticity and the boundedness of A (see (8.9) and (8.10)), ˆRn α 2 |∇u δ | 2 u 2 δ dm ≤ C ˆRn α 2 A∇u δ • ∇u δ u 2 δ dm ≤ C ˆRn α (1 -u δ ) A∇u δ • ∇α u δ dm ≤ C ˆRn α (1 -u δ ) |∇u δ ||∇α| u δ dm ≤ C ˆRn α 2 |∇u δ | 2 u 2 δ dm 1 2 ˆRn (1 -u δ ) 2 |∇α| 2 dm 1 2 (8.94) by Cauchy-Schwarz' inequality. Therefore, (8.95) ˆRn α 2 |∇ ln u δ | 2 dm = ˆRn α 2 |∇u δ | 2 u 2 δ dm ≤ C ˆRn (1 -u δ ) 2 |∇α| 2 dm ≤ C ˆRn |∇α| 2 dm because 0 ≤ u δ ≤ 1, and then with our particular choice of α, (8.96) m -1 (2B) ˆ2B |∇v δ | 2 dm = m -1 (2B) ˆ2B |∇ ln u δ | 2 dm ≤ C r 2 . Lemma 8.106. Let B = B(x, r) be a ball centered on Γ and u ∈ W r (B) be a solution in B such that T u is continuous and bounded on B. There exists α > 0 such that for 0 < s < r, (8.107) osc B(x,s) u ≤ C s r α osc B(x,r) u + C osc B(x, √ sr)∩Γ T u where the constants α, C depend only on the dimensions n and d and the constants C 0 and C 1 . In particular, u is continuous on B. If, in addition, T u ≡ 0 on B, then for any 0 < s < r/2 (8.108) osc B(x,s) u ≤ C s r α m(B) -1 ˆB |u| 2 dm 1 2 . Proof. The first part of the Lemma, i.e. the estimate (8.107), is a straightforward consequence of Lemma 8.98 and [START_REF] Gilbarg | Elliptic partial differential equations of second order[END_REF]Lemma 8.23]. Basically, [START_REF] Gilbarg | Elliptic partial differential equations of second order[END_REF]Lemma 8.23] is a result on functions stating that the functional inequality (8.99) can be turned, via iterations, into (8.107). The second part of the Lemma is simply a consequence of the first part and of the Moser inequality given in Lemma 8.50. Harmonic measure We want to solve the Dirichlet problem (9.1) Lu = f in Ω u = g on Γ, with a notation that we explain now. Here we require u to lie in W , and by the second line we actually mean that T u = g σ-almost everywhere on Γ, where T is our trace operator. Logically, we are only interested in functions g ∈ H, because we know that T (u) ∈ H for u ∈ W . The condition Lu = f in Ω is taken in the weak sense, i.e. we say that u ∈ W satisfies Lu = f , where f ∈ W -1 = (W 0 ) * , if for any v ∈ W 0 , (9.2) a(u, v) = ˆΩ A∇u • ∇v = f, v W -1 ,W 0 . Notice that when f ≡ 0, a function u ∈ W that satisfies (9.2) is a solution in Ω. Now, we made sense of (9.1) for at least f ∈ W -1 and g ∈ H. The next result gives a good solution to the Dirichlet problem. Lemma 9.3. For any f ∈ W -1 and any g ∈ H, there exists a unique u ∈ W such that (9.4) Lu = f in Ω T u = g a.e. on Γ. Moreover, there exists C > 0 independent of f and g such that (9.5) u W ≤ C( g H + f W -1 ), where (9.6) f W -1 = sup ϕ∈W 0 ϕ W =1 f, ϕ W -1 ,W 0 . Proof. Since g ∈ H, Theorem 7.10 implies that there exists G ∈ W such that T (G) = g and (9.7) G W ≤ C g H . The quantity LG is an element of W -1 defined by (9.8) LG, ϕ W -1 ,W 0 := ˆΩ A∇G • ∇ϕ = ˆΩ A∇G • ∇ϕ dm, and notice that (9.9) LG W -1 ≤ C G W ≤ C g H by (8.9) and (9.7). Observe that the conditions (8.9) and (8.10) imply that the bilinear form a is bounded and coercive on W 0 . It follows from the Lax-Milgram theorem that there exists a (unique) v ∈ W 0 such that Lv = -LGf . Set u = Gv. It is now easy to see that T u = g a.e. on Γ and Lu = f in Γ. The existence of a solution of (9.4) follows. It remains to check the uniqueness of the solution and the bounds (9.5). 
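Let us first record the algebra behind the existence step just performed (this is simply what the Lax–Milgram step means): v ∈ W_0 was chosen precisely so that a(v, φ) = a(G, φ) − ⟨f, φ⟩_{W^{-1},W_0} for every φ ∈ W_0, and u = G − v. Hence, for every φ ∈ W_0,
\[
a(u,\varphi) \;=\; a(G,\varphi) - a(v,\varphi) \;=\; a(G,\varphi) - \big( a(G,\varphi) - \langle f,\varphi\rangle_{W^{-1},W_0} \big) \;=\; \langle f,\varphi\rangle_{W^{-1},W_0},
\]
which is (9.2), while T u = T G − T v = g − 0 = g a.e. on Γ because v ∈ W_0.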
For uniqueness, take u_1, u_2 ∈ W two solutions of (9.4). Then T(u_1 − u_2) = g − g = 0, hence u_1 − u_2 ∈ W_0. Moreover, L(u_1 − u_2) = 0. Since a is bounded and coercive, the uniqueness part of the Lax–Milgram theorem yields u_1 − u_2 = 0. Therefore (9.4) has a unique solution. Finally, let us prove the bounds (9.5); keep the notation G and v from the existence part of the proof. From the coercivity of a, we get that (9.10) ‖v‖_W^2 ≤ C a(v, v) ≤ C(‖LG‖_{W^{-1}} + ‖f‖_{W^{-1}}) ‖v‖_W, i.e., with (9.9), (9.11) ‖v‖_W ≤ C(‖LG‖_{W^{-1}} + ‖f‖_{W^{-1}}) ≤ C(‖g‖_H + ‖f‖_{W^{-1}}). We conclude the proof of (9.5) with (9.12) ‖u‖_W = ‖G − v‖_W ≤ C(‖g‖_H + ‖f‖_{W^{-1}}) by (9.7). The next step in the construction of a harmonic measure associated to L is to prove a maximum principle. Lemma 9.13. Let u ∈ W be a supersolution in Ω such that T u ≥ 0 a.e. on Γ. Then u ≥ 0 a.e. in Ω. Proof. Set v := min{u, 0}. By Lemma 6.1, v ∈ W, with (9.14) ∇v = 1_{u<0} ∇u a.e. in Ω, and (9.15) T v = min{T u, 0} = 0 a.e. on Γ. In particular, (9.15) implies that v ∈ W_0. The third case of Lemma 8.16 allows us to test v against the supersolution u ∈ W (more precisely, we use the non-negative test function −v ∈ W_0, so that a(u, −v) ≥ 0); this gives (9.16) ∫_Ω A∇u • ∇v dm ≤ 0, that is, with (9.14), (9.17) ∫_Ω A∇v • ∇v dm = ∫_{u<0} A∇u • ∇u dm = ∫_Ω A∇u • ∇v dm ≤ 0. Together with the ellipticity condition (8.10), we obtain ‖v‖_W ≤ 0. Recall from Lemma 5.9 that ‖·‖_W is a norm on W_0 ∋ v, hence v = 0 a.e. in Ω. We conclude from the definition of v that u ≥ 0 a.e. in Ω. Here is a corollary of Lemma 9.13. Lemma 9.18 (Maximum principle). Let u ∈ W be a solution of Lu = 0 in Ω. Then (9.19) sup_Ω u ≤ sup_Γ T u and (9.20) inf_Ω u ≥ inf_Γ T u, where we recall that sup and inf actually denote the essential supremum and infimum. In particular, if T u is essentially bounded, (9.21) sup_Ω |u| ≤ sup_Γ |T u|. Proof. Let us prove (9.19). Write M for the essential supremum of T u on Γ; we may assume that M < +∞, because otherwise (9.19) is trivial. Then M − u ∈ W and T(M − u) ≥ 0 a.e. on Γ. Lemma 9.13 yields M − u ≥ 0 a.e. in Ω, that is, (9.22) sup_Ω u ≤ sup_Γ T u. The lower bound (9.20) is similar, and (9.21) follows. We want to define the harmonic measure via the Riesz representation theorem (for measures), which requires a linear form on the space of compactly supported continuous functions on Γ. We denote this space by C^0_0(Γ); that is, g ∈ C^0_0(Γ) if g is defined and continuous on Γ, and there exists a ball B ⊂ R^n centered on Γ such that supp g ⊂ B ∩ Γ. Lemma 9.23. There exists a bounded linear operator (9.24) U : C^0_0(Γ) → C^0(R^n) such that, for every g ∈ C^0_0(Γ), (i) the restriction of Ug to Γ is g; (ii) sup_{R^n} Ug = sup_Γ g and inf_{R^n} Ug = inf_Γ g; (iii) Ug ∈ W_r(Ω) and is a solution of L in Ω; (iv) if B is a ball centered on Γ and g ≡ 0 on B, then Ug lies in W_r(B); (v) if g ∈ C^0_0(Γ) ∩ H, then Ug ∈ W, and it is the solution of (9.4), with f = 0, provided by Lemma 9.3. Proof. This is essentially an argument of extension from a dense class by uniform continuity. We first define U on C^0_0(Γ) ∩ H, by saying that u = Ug is the solution of (9.4), with f = 0, provided by Lemma 9.3. Thus u ∈ W; and since its trace T u = g is continuous, it follows from Lemmata 8.40 and 8.106 (the Hölder continuity inside and at the boundary) that u is continuous on R^n. Next we check that U is linear and bounded on C^0_0(Γ) ∩ H ⊂ C^0_0(Γ) (where we use the sup norm). The linearity comes from the uniqueness in Lemma 9.3, and the boundedness from the maximum principle: for g, h ∈ C^0_0(Γ) ∩ H, we can apply (9.22) to u = Ug − Uh, and we get that sup_{R^n} |u| = sup_Ω |u| ≤ sup_Γ |T u| = ‖g − h‖_∞ because u is continuous. It is clear that C^0_0(Γ) ∩ H is dense in C^0_0(Γ), because (restrictions to Γ of) compactly supported smooth functions on R^n (or even Lipschitz functions, for that matter) lie in H: compute their norm in (1.5) directly.
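Here is a sketch of that computation; we only use the upper Ahlfors-regularity bound σ(Γ ∩ B(x, r)) ≤ C_0 r^d from (1.1), and we bound the double integral appearing in (7.26). Let g be L-Lipschitz with supp g ⊂ B_0 ∩ Γ for some ball B_0 of radius R centered on Γ. If |x − y| ≤ R and g(x) ≠ g(y), then one of the two points lies in B_0 ∩ Γ, so both lie in 2B_0 ∩ Γ; decomposing dyadically in |x − y|,
\[
\iint_{|x-y|\le R} \frac{|g(x)-g(y)|^2}{|x-y|^{d+1}}\,d\sigma(x)\,d\sigma(y)
\le C L^2 \int_{2B_0\cap\Gamma} \sum_{j\ge 0} (2^{-j}R)^{1-d}\,\sigma\big(\Gamma\cap B(x,2^{-j}R)\big)\,d\sigma(x)
\le C L^2 R^{\,d+1}.
\]
When |x − y| > R, we simply use |g(x) − g(y)| ≤ 2‖g‖_∞ and the fact that one of the points must lie in B_0 ∩ Γ; the same dyadic decomposition gives
\[
\iint_{|x-y|> R} \frac{|g(x)-g(y)|^2}{|x-y|^{d+1}}\,d\sigma(x)\,d\sigma(y)
\le C \|g\|_\infty^2 \int_{B_0\cap\Gamma} \sum_{j\ge 0} (2^{j}R)^{-d-1}\,\sigma\big(\Gamma\cap B(x,2^{j+1}R)\big)\,d\sigma(x)
\le C \|g\|_\infty^2 R^{\,d-1}.
\]
Hence ‖g‖_H^2 ≤ C(L^2 R^2 + ‖g‖_∞^2) R^{d-1} < +∞, as announced.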
Thus U has a unique extension by continuity to C 0 0 (Γ). We could even define U, with the same properties, on its closure (continuous functions that tend to 0 at infinity), but we decided not to bother. We are now ready to check the various properties of U. Given g ∈ C 0 0 (Γ), select a sequence (g k ) of compactly supported smooth functions that converges to g in the sup norm. Then u k = Ug k converges uniformly in R n to u = Ug, and in particular u is continuous and its restriction to Γ is g, as in (i). In addition, (ii) holds because sup R n u = lim k→+∞ sup R n u k ≤ lim k→+∞ sup Γ g k = sup Γ g, and similarly for the infimum. For (iii) we first need to check that u ∈ W r (Ω). Observe that we know these facts for the u k , so we'll only need to take limits. Let φ ∈ C ∞ 0 (Ω) be given. Lemma 8.26 (Caccioppoli's inequality) says that, since u k is a solution, (9.25) ˆΩ |∇(φu k )| 2 dm ≤ C ˆΩ φ 2 |∇u k | 2 dm + C ˆΩ |∇φ| 2 |u k | 2 dm ≤ C ˆΩ |∇φ| 2 |u k | 2 dm. The right-hand side of (9.25) converges to C ´Ω |∇φ| 2 |u| 2 dm, since |∇φ| 2 is bounded and compactly supported. So ´B |∇(φu k )| 2 dm is bounded uniformly in k. Since the φu k vanish outside of the support of φ (which lies far from Γ) and converge uniformly to φu, we get that the φu k converge to φu in L 1 and, since the |∇(φu k )| are uniformly bounded in L 2 (Ω, w), we can find a subsequence for which they converge weakly to a limit V ∈ L 2 (Ω, w). We easily check on test functions that ∇(φu) = V , hence φu ∈ W for any φ ∈ C ∞ 0 (Ω), and u ∈ W r (Ω). Next we check that u is a solution in Ω, i.e., that for ϕ ∈ C ∞ 0 (Ω), (9.26) ˆΩ A∇u • ∇ϕ dm = 0. Let ϕ ∈ C ∞ 0 (Ω) be given, and choose φ ∈ C ∞ 0 (Ω) such that φ ≡ 1 on supp ϕ. We just proved that for some subsequence, ∇(φu k ) converges weakly to ∇(φu) in L 2 (Ω, w). For (iv), suppose in addition that g ≡ 0 on a ball B centered on Γ; we want similar results in B (that is, across Γ). Notice that it is easy to approximate it (in the supremum norm) by smooth, compactly supported functions g k that also vanish on Γ ∩ B. Let use such a sequence (g k ) to define Ug = lim k→+∞ Ug k . Let ϕ ∈ C ∞ 0 (B) be given, and let us check that ϕu ∈ W . Set K = supp ϕ, suppose K = ∅, and set δ = dist(K, ∂B) > 0. Cover K ∩ Γ by a finite number of balls balls B i of radius 10 -1 δ centered on K ∩ Γ, and then cover K ′ = K \ ∪ i B i by a finite number of balls B j of radius 10 -2 δ centered on that set K ′ . We can use a partition of unity composed of smooth functions supported in the 2B i and the 2B j to reduce to the case when ϕ is supported on a 2B i or a 2B j . Suppose for instance that ϕ is supported in 2B i . We can apply Lemma 8.47 (Caccioppoli's inequality at the boundary) to u k = Ug k on the ball 2B i , because its trace g k vanishes on 4B i . We get that (9.28) ˆ2B i |∇(ϕu k )| 2 dm ≤ C ˆ2B i (|ϕ∇u k )| 2 + |u k ∇ϕ| 2 )dm ≤ ˆ4B i |∇ϕ| 2 |u k | 2 dm. With this estimate, we can proceed as with (9.25) above to prove that ϕu ∈ W and its derivative is the weak limit of the ∇(ϕu k ). When instead ϕ is supported in a 2B j , we use the interior Caccioppoli inequality (Lemma 8.26 and proceed as above). Thus u = Ug lies in W r (B), and this proves (iv). We started the proof with (v), so this completes our proof of Lemma 9.23. Our next step is the construction of the harmonic measure. Let X ∈ Ω. By Lemma 9.23, the linear form (9.29) g ∈ C 0 0 (Γ) → Ug(X) is bounded and positive (because u = Ug is nonnegative when g ≥ 0). 
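To be precise about the last claim, both properties are immediate from Lemma 9.23 (ii):
\[
\inf_{\mathbb{R}^n} Ug = \inf_\Gamma g \ge 0 \quad\text{when } g \ge 0,
\qquad
|Ug(X)| \le \max\Big(\sup_\Gamma g,\; -\inf_\Gamma g\Big) = \|g\|_\infty ,
\]
so the linear form (9.29) is positive and has norm at most 1.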
The following statement is thus a direct consequence of the Riesz representation theorem (see for instance [START_REF] Rudin | Real and complex analysis[END_REF]Theorem 2.14]). Lemma 9.30. There exists a unique positive regular Borel measure ω X on Γ such that (9.31) Ug(X) = ˆΓ g(y)dω X (y) for any g ∈ C 0 0 (Γ). Besides, for any Borel set E ⊂ Γ, (9.32) ω X (E) = sup{ω X (K) : E ⊃ K, K compact} = inf{ω X (V ) : E ⊂ V, V open}. The harmonic measure is a probability measure, as proven in the following result. Lemma 9.33. For any X ∈ Ω, ω X (Γ) = 1. Proof. Let X ∈ Ω be given. Choose x ∈ Γ such that δ(X) = |X -x|. Set then B j = B(x, 2 j δ(X)). According to (9.32), (9.34) ω X (Γ) = lim j→+∞ ω X (B j ). Choose, for j ≥ 1, ḡj ∈ C ∞ 0 (B j+1 ) such that 0 ≤ ḡj ≤ 1 and ḡj ≡ 1 on B j and then define g j = T (ḡ j ). Since the harmonic measure is positive, we have (9.35) ω X (B j ) ≤ ˆΓ g j (y)dω X (y) ≤ ω X (B j+1 ). Together with (9.34), (9.36) ω X (Γ) = lim j→+∞ ˆΓ g j (y)dω X (y) = lim j→+∞ u j (X), where u j is the image by the map (9.24) of the function g j . Since g j is the trace of a smooth and compactly supported function, g j ∈ H and so u j ∈ W is the solution of (9.4) with data g j . Moreover, 0 ≤ u j ≤ 1 by Lemma 9.23 (ii). We want to show that u j (X) → 1 when j → +∞. The function v j := 1u j ∈ W is a solution in B j satisfying T v j ≡ 0 on B j . So Lemma 8.106 says that (9.37) 0 ≤ 1 -u j (X) = v j (X) ≤ osc B 1 v j ≤ C2 -jα osc B j v j ≤ C2 -jα , where C > 0 and α > 0 are independent of j. It follows that v j (X) tends to 0, and u j (X) tends to 1 when j goes to +∞. The lemma follows from this and (9.36), the lemma follows. Lemma 9.38. Let E ⊂ Γ be a Borel set and define the function u E on Ω by u E (X) = ω X (E). Then (i) if there exists X ∈ Ω such that u E (X) = 0, then u E ≡ 0; (ii) the function u E lies in W r (Ω) and is a solution in Ω; (iii) if B ⊂ R n is a ball such that E ∩ B = ∅, then u E ∈ W r (B) and T u E = 0 on B. Proof. First of all, 0 ≤ u E ≤ 1 because ω X is a positive probability measure for any X ∈ Ω. Let us prove (i). Thanks to (9.32), it suffices to prove the result when E = K is compact. Let X ∈ Ω be such that u K (X) = 0. Let Y ∈ Ω and ǫ > 0 be given. By (9.32) again, we can find an open U such that U ⊃ K and ω X (U) < ǫ. Urysohn's lemma (see for instance Lemma 2.12 in [Rud]) gives the existence of g ∈ C 0 0 (Γ) such that 0 ≤ g ≤ 1 and g ≡ 1 on K. Set u = Ug, where U is as in (9.24). Thanks to the positivity of the harmonic measure, u K ≤ u. Let Y ∈ Ω be given, and apply the Harnack inequality (8.45) to u (notice that u lies in W r (Ω) and is a solution in Ω thanks to Lemma 9.23). We get that (9.39) 0 ≤ u K (Y ) ≤ u(Y ) ≤ C X,Y u(X) ≤ C X,Y ǫ. Since (9.39) holds for any positive ǫ, we have u K (Y ) = 0. Part (i) of the lemma follows. We turn to the proof of (ii), which we first do when E = V is open. We first check that (9.40) u V is a continuous function on Ω. Fix X ∈ Ω, and build an increasing sequence of compact sets K j ⊂ V such that ω X (V ) < ω X (K j )+ 1 j . With Urysohn's lemma again, we construct g j ∈ C 0 0 (V ) such that 1 K j ≤ g j ≤ 1 V and, without loss of generality we can choose g j ≤ g i whenever j ≤ i. Set u j = Ug j ∈ C 0 (R n ), as in (9.24), and notice that u j (X) = ´Γ g j dω X by (9.31). Then for j ≥ 1, (9.41) u K j (X) = ω X (K j ) ≤ u j (X) ≤ ω X (V ) = u V (X) ≤ ω X (K j ) + 1 j by definition of u E , because the harmonic measure is nondecreasing, and since 1 K j ≤ g j ≤ 1 V . 
Similarly, (u j ) is a nondecreasing sequence of functions, i.e., (9.42) u i ≥ u j on Ω for i ≥ j ≥ 1, by the maximum principle in Lemma 9.23 and because g i ≥ g j , so that in particular (9.43) by (9.41). Now u iu j is a nonnegative solution (by Lemma 9.23), and Lemma 8.44 implies that for every compact set J ⊂ Ω, there exists C J > 0 such that u j (X) ≤ u i (X) ≤ u j (X) + 1 j for i ≥ j ≥ 1, (9.44) 0 ≤ sup J (u i -u j ) ≤ C J (u i -u j )(X) ≤ C J j for i ≥ j ≥ 1. We deduce from this that (u j ) j converges uniformly on compact sets of Ω to a function u ∞ , which is therefore continuous on Ω. Thus (9.40) will follow as soon as we prove that u ∞ = u V . Set K = j K j ; then u K j ≤ u K ≤ u V by monotonicity of the harmonic measure, and (9.41) implies that u K (X) = u V (X). Now u V -u K = u V \K , so u V \K (X) = 0. By Point (i) of the present lemma, u V \K (Y ) = 0 for every Y ∈ Ω. But u V (Y ) = ω Y (V ), and ω Y is a measure, so u V \K (Y ) = lim j→+∞ u V \K j (Y ) = u V (Y ) -lim j→+∞ u K j (Y ). Since u K j (Y ) ≤ u j (Y ) ≤ u V (Y ) by the proof of (9.41), we get that u j (Y ) tends to u V (Y ). In other words, u ∞ (Y ) = u V (Y ), and (9.40) follows as announced. We proved that u V is continuous on Ω and that it is the limit, uniformly on compact subsets of Ω, of a sequence of functions u j ∈ C 0 (R n ) ∩ W r (Ω), which are also solutions of L in Ω. We now want to prove that u V ∈ W r (Ω), and we proceed as we did near (9.25). Let φ ∈ C ∞ 0 (Ω) be given. In the distributional sense, we have ∇(φu j ) = u j ∇φ + φ∇u j . So the Caccioppoli inequality given by Lemma 8.26 yields (9.45) ˆΩ |∇(φu j )| 2 dm ≤ C ˆΩ(|∇φ| 2 |u j | 2 + φ 2 |∇u j | 2 )dm ≤ C ˆΩ |∇φ| 2 |u j | 2 dm. Since the u j converge to u uniformly on supp φ, the right-hand side of (9.45) converges to C ´Ω |∇φ| 2 |u| 2 dm. Consequently, the left-hand side of (9.45) is uniformly bounded in j and hence there exists v ∈ L 2 (Ω, w) such that ∇(φu j ) converges weakly to v in L 2 (Ω, w). By uniqueness of the limit, the distributional derivative ∇(φu V ) equals v ∈ L 2 (Ω, w), so by definition of W , φu V ∈ W . Since the result holds for any φ ∈ C ∞ 0 (Ω), we just established u V ∈ W r (Ω) as desired. In addition, we also checked that (for a subsequence) ∇(φu j ) converges weakly in L 2 (Ω, w) to ∇(φu V ). We now establish that u V is a solution. Let ϕ ∈ C ∞ 0 (Ω) be given. Choose φ ∈ C ∞ 0 (Ω) such that φ ≡ 1 on supp ϕ. Thanks to the weak convergence of ∇(φu j ) to ∇(φu j ) ˆΩ A∇u V • ∇ϕ dm = ˆΩ A∇(φu V ) • ∇ϕ dm = lim j→+∞ ˆΩ A∇(φu j ) • ∇ϕ dm = lim j→+∞ ˆΩ A∇u j • ∇ϕ dm = 0 (9.46) because each u j is a solution. Hence u V is a solution. This completes our proof of (ii) when E = V is open. The proof of (ii) for general Borel sets E works similarly, but we now approximate E from above by open sets. Fix X ∈ Ω. Thanks to the regularity property (9.32), there exists a decreasing sequence (V j ) of open sets that contain E, and for which u V j (X) tends to u E (X). From our previous work, we know that each u V j is continuous on Ω, lies in W r (Ω), and is a solution in Ω. Using the same process as before, we can show first that the u V j converge, uniformly on compact sets of Ω, to u E , which is then continuous on Ω. Then we prove that, for any φ ∈ C ∞ 0 (Ω), ∇(φu V j ) converges weakly in L 2 (Ω, w) to ∇(φu E ), from which we deduce u E ∈ W r (Ω) and then that u E is a solution. Part (iii) of the lemma remains to be proven. Let B ⊂ R n be a ball such that B ∩ E = ∅. 
Since u E lies in W r (Ω) and is a solution, Lemma 8.40 says that u E is continuous in Ω. We first prove that if we set u = 0 on B ∩ Γ, we get a continuous extension of u, (with then has a vanishing trace, or restriction, on B ∩ Γ). Let x ∈ B ∩ Γ be given. Choose r > 0 such that B(x, 2r) ⊂ B and then construct a function ḡ ∈ C ∞ 0 (B(x, 2r)) such that ḡ ≡ 1 in B(x, r). Since ḡ is smooth and compactly supported, g := T (ḡ) lies in H ∩ C 0 0 (Γ) and then u = Ug, the image of g by the map of (9.24), lies in in W ∩ C 0 (R n ). From the positivity of the harmonic measure, we deduce that 0 ≤ u E ≤ 1u. Since 0 and 1u are both continuous functions that are equal 0 at x, the squeeze theorem says that u E is continuous (or can be extended by continuity) at x, and u E (x) = 0. To complete the proof of the lemma, we show that u E actually lies in W r (B). As for the proof of (ii), we first assume that E = V is open. We take a nondecreasing sequence of compact sets K j ⊂ V that converges to V , and then we build g j ∈ C 0 0 (V ), such that 1 K j ≤ g j ≤ 1 V and the sequence (g j ) is non-decreasing. We then take u j = Ug j (with the map from (9.24)), and in particular the sequence (u j ) is non-decreasing on Ω. From the proof of (ii), we know that u j converges to u V on compact sets of Ω, then in particular u j converges pointwise to u V in Ω. Let ϕ ∈ C ∞ 0 (B); we want to prove that ϕu V ∈ W . From Lemma 8.47, we have (9.47) ˆB |∇(ϕu j )| 2 dm ≤ C ˆB(|∇ϕ| 2 |u j | 2 + ϕ 2 |∇u j | 2 )dm ≤ C ˆB |∇ϕ| 2 |u j | 2 dm. Since u is continuous on B, u V ∈ L 2 (supp ϕ, w) and the right-hand side converges to C ´B |∇ϕ| 2 |u V | 2 dm by the dominated convergence theorem. The left-hand side is thus uniformly bounded in j and ∇(ϕu j ) converges weakly, maybe after extracting a subsequence, to some v in L 2 (B, w). By uniqueness of the limit, v = ∇(ϕu V ) ∈ L 2 (B, w). Since the result holds for all ϕ ∈ C ∞ 0 (B), we get u V ∈ W r (B). In the general case where E is a Borel set, fix X ∈ Ω and take a decreasing sequence of open sets V j ⊃ X such that u V j (X) → u E (X). We can prove using part (i) of this lemma that u V j converges to u E pointwise in Ω. Then we use Lemma 8.47 to show that for ϕ ∈ C ∞ 0 (B), (9.48) ˆB |∇(ϕu V j )| 2 dm ≤ C ˆB |∇ϕ| 2 |u V j | 2 dm when j is so large that V j is far from the support of ϕ. The right-hand side has a limit, thanks to the dominated convergence theorem, thus the left-hand side is uniformly bounded in j. So there exists a subsequence of ∇(ϕu V j ) that converges weakly in L 2 (B, w), and by uniqueness to the limit, the limit has to be ∇(ϕu E ), which thus lies in L 2 (B, w). We deduce that ϕu E ∈ W and then u ∈ W r (B). Green functions The aim of this section is to define a Green function, that is, formally, a function g defined on Ω × Ω and such that for y ∈ Ω, (10.1) Lg(., y) = δ y in Ω T g(., y) = 0 on Γ. where δ y denotes the Dirac distribution. Our proof of existence and uniqueness, and the estimates below, are adapted from arguments of [GW] (see also [HoK] and [DK]) for the classical case of codimension 1. Lemma 10.2. There exists a non-negative function g : Ω × Ω → R ∪ {+∞} with the following properties. (i) For any y ∈ Ω and any function α ∈ C ∞ 0 (R n ) such that α ≡ 1 in a neighborhood of y (10.3) (1α)g(., y) ∈ W 0 . In particular, g(., y) ∈ W r (R n \ {y}) and T [g(., y)] = 0. (ii) For every choice of y ∈ Ω, R > 0, and q ∈ [1, n n-1 ), (10.4) g(., y) ∈ W 1,q (B(y, R)) := {u ∈ L q (B(y, R)), ∇u ∈ L q (B(y, R))}. 
(iii) For y ∈ Ω and ϕ ∈ C ∞ 0 (Ω), (10.5) ˆΩ A∇ x g(x, y) • ∇ϕ(x)dx = ϕ(y). In particular, g(., y) is a solution of Lu = 0 in Ω \ {y}. In addition, the following bounds hold. (iv) For r > 0, y ∈ Ω and ǫ > 0, (10.6) ˆΩ\B(y,r) |∇ x g(x, y)| 2 dm(x) ≤      Cr 1-d if 4r ≥ δ(y) Cr 2-n w(y) if 2r ≤ δ(y), n ≥ 3 Cǫ w(y) δ(y) r ǫ if 2r ≤ δ(y), n = 2, where C > 0 depends on d, n, C 0 , C 1 and C ǫ > 0 depends on d, C 0 , C 1 , and ǫ. (v) For x, y ∈ Ω such that x = y and ǫ > 0, (10.7) 0 ≤ g(x, y) ≤      C|x -y| 1-d if 4|x -y| ≥ δ(y) C|x-y| 2-n w(y) if 2|x -y| ≤ δ(y), n ≥ 3 Cǫ w(y) δ(y) |x-y| ǫ if 2|x -y| ≤ δ(y), n = 2, where again C > 0 depends on d, n, C 0 , C 1 and C ǫ > 0 depends on d, C 0 , C 1 , ǫ. (vi) For q ∈ [1, n n-1 ) and R ≥ δ(y), (10.8) ˆB(y,R) |∇ x g(x, y)| q dm(x) ≤ C q R d(1-q)+1 , where C q > 0 depends on d, n, C 0 , C 1 , and q. (vii) For y ∈ Ω, R ≥ δ(y), t > 0 and p ∈ [1, 2n n-2 ] (if n ≥ 3) or p ∈ [1, +∞) (if n = 2), (10.9) m({x ∈ B(y, R), g(x, y) > t}) m(B(y, R)) ≤ C p R 1-d t p 2 , where C p > 0 depends on d, n, C 0 , C 1 and p. (viii) For y ∈ Ω, t > 0 and η ∈ (0, 2), (10.10) m({x ∈ Ω, |∇ x g(x, y)| > t}) ≤    Ct -d+1 d if t ≤ δ(y) -d Cw(y) -1 n-1 t -n n-1 if t ≥ δ(y) -d , n ≥ 3 C η w(y) -1 δ(y) dη t η-2 if t ≥ δ(y) -d , n = 2, where C > 0 depends on d, n, C 0 , C 1 and C η > 0 depends on d, C 0 , C 1 , η. Remark 10.11. When d < 1 and |x -y| ≥ 1 2 δ(y), the bound g(x, y) ≤ C|x -y| 1-d given in (10.7) can be improved into (10.12) g(x, y) ≤ C min{δ(x), δ(y)} 1-d . This fact is proven in Lemma 11.39 below. Remark 10.13. The authors believe that the bounds given in (10.6) and (10.7) when n = 2 and 2r (or 2|x -y|) is smaller than δ(y) are not optimal. One should be able to replace for instance the bound Cǫ w(y) δ(y) r ǫ by C w(y) ln δ(y) r in (10.6) by adapting the arguments of [DK] (see also [START_REF] Fabes | The Wiener test for degenerate elliptic equations[END_REF]Theorem 3.3]). However, the estimates given above are sufficient for our purposes and we didn't want to make this article even longer. Remark 10.14. Note that when n ≥ 3, thanks to Lemma 2.3, the bound (10.7) can be gathered into a single estimate (10.15) g(x, y) ≤ C |x -y| 2 m(B(y, |x -y|)) whenever x, y ∈ Ω, x = y. In the same way, also for n ≥ 3, the bound (10.6) can be gathered into a single estimate (10.16) ˆΩ\B(y,r) |∇ x g(x, y)| 2 dm(x) ≤ C r 2 m(B(y, r)) whenever y ∈ Ω and r > 0. Proof. This proof will adapt the arguments of [START_REF] Grüter | The Green function for uniformly elliptic equations[END_REF]Theorem 1.1]. Let y ∈ Ω be fixed. Consider again the bilinear form a on W 0 × W 0 defined as (10.17) a(u, v) = ˆΩ A∇u • ∇v = ˆΩ A∇u • ∇v dm. The bilinear form a is bounded and coercive on W 0 , thanks to (8.9) and (8.10). Let ρ > 0 be small. Take, for instance, ρ such that 100ρ < δ(y). Write B ρ for B(y, ρ). The linear form We like g ρ , and will actually spend some time studying it, because g(•, y) will later be obtained as a limit of the g ρ . By (10.20), (10.21) g ρ ∈ W 0 is a solution of Lg ρ = 0 in Ω \ B ρ . This fact will be useful later on. For now, let us prove that g ρ ≥ 0 a.e. on Ω. Since g ρ ∈ W 0 , Lemma 6.1 yields |g ρ | ∈ W 0 , ∇|g ρ | = ∇g ρ a.e. on {g ρ > 0}, ∇|g ρ | = -∇g ρ a.e. on {g ρ < 0} and ∇|g ρ | = 0 a.e. on {g ρ = 0}. 
φ ↦ ⨍_{B_ρ} φ(x) dx is bounded on W_0 (recall that 100ρ < δ(y), so B_ρ is a compact subset of Ω on which w is comparable to w(y)), so the Lax–Milgram theorem provides a unique g^ρ = g^ρ(·, y) ∈ W_0 such that (10.20) a(g^ρ, φ) = ⨍_{B_ρ} φ(x) dx for every φ ∈ W_0.
Consequently (10.22) ˆΩ A∇|g ρ | • ∇|g ρ | dm = ˆ{g ρ >0} A∇g ρ • ∇g ρ dm + ˆ{g ρ <0} A∇g ρ • ∇g ρ dm = ˆΩ A∇g ρ • ∇g ρ dm and (10.23) ˆΩ A∇|g ρ | • ∇g ρ dm = ˆ{g ρ >0} A∇g ρ • ∇g ρ dm - ˆ{g ρ <0} A∇g ρ • ∇g ρ dm = ˆΩ A∇g ρ • ∇|g ρ | dm, which can be rewritten a(|g ρ |, |g ρ |) = a(g ρ , g ρ ) and a(|g ρ |, g ρ ) = a(g ρ , |g ρ |). Moreover, if we use g ρ ∈ W 0 and |g ρ | ∈ W 0 as test functions in (10.20), we obtain (10.24) a(|g ρ |, |g ρ |) = a(g ρ , g ρ ) = ˆBr g ρ ≤ ˆBr |g ρ | = a(g ρ , |g ρ |) = a(|g ρ |, g ρ ). Hence a(|g ρ |g ρ , |g ρ |g ρ ) ≤ 0 and, by the coercivity of a, g ρ = |g ρ | ≥ 0 a.e. on Ω. Let R ≥ δ(y) > 100ρ > 0. We write again B R for B(y, R). Let p in the range given by Lemma 4.13, that is p ∈ [1, 2n/(n -2)] if n ≥ 3 and p ∈ [1, +∞) if n = 2. We aim to prove that for all t > 0, (10.25) m({x ∈ B R , g ρ (x) > t}) m(B R ) ≤ Ct -p 2 R p 2 (1-d) with a constant C independent of ρ, t and R. We use (10.20) with the test function (10.26) ϕ(x) := 2 t - 1 g ρ (x) + = max 0, 2 t - 1 g ρ (x) (and ϕ(x) = 0 if g ρ (x) = 0), which lies in W 0 by Lemma 6.1. So if Ω s := {x ∈ Ω, g ρ (x) > s}, we have (10.27) a(g ρ , ϕ) = ˆΩt/2 A∇g ρ • ∇g ρ (g ρ ) 2 dm = Br ϕ ≤ 2 t . Therefore, with the ellipticity condition (8.10), (10.28) ˆΩt/2 |∇g ρ | 2 (g ρ ) 2 dm ≤ C t . Pick y 0 ∈ Γ such that |yy 0 | = δ(y). Set B R for B(y 0 , 2R) ⊃ B R . Also define v by v(x) := (ln(g ρ (x)ln t + ln 2)) + , which lies in W 0 too, thanks to Lemma 6.1. The Sobolev-Poincaré inequality (4.15) implies that (10.29) ˆΩt/2 ∩ B R |v| p dm 1 p ≤ CR m( B R ) 1 p -1 2 ˆΩt/2 ∩ B R |∇v| 2 dm 1 2 ≤ CR m( B R ) 1 p -1 2 t -1 2 by (10.28). Since m( B R ) ≈ R d+1 thanks to Lemma 2.3, one has (10.30) ˆΩt/2 ∩B R ln 2g ρ t p dm ≤ CR p+(d+1)(1-p 2 ) t -p 2 . But the latter implies that (10.31) (ln 2) p m(Ω t ∩ B R ) ≤ CR p+(d+1)(1-p 2 ) t -p 2 = Ct -p 2 R p 2 (1-d)+(d+1) . The claim (10.25) follows once we notice that, due to Lemma 2.3, we have m(B R ) ≈ R d+1 . Now we give a pointwise estimate on g ρ when x is far from y. We claim that (10.32) g ρ (x) ≤ C|x -y| 1-d if 4|x -y| ≥ δ(y) > 100ρ, where again C > 0 is independent of ρ. Set R = 4|x -y| > δ(y). Recall (10.21), i.e., that g ρ lies in W 0 and is a solution in Ω \ B ρ . So we can use the Moser estimates to get that (10.33) g ρ (x) ≤ C 1 m(B(x, R/2)) ˆB(x,R/2) g ρ dm. Indeed, (10.33) is obtained with Lemma 8.34 when δ(x) ≥ R/30 (apply Moser inequality in the ball B(x, R/90)) and with Lemma 8.71 when δ(x) ≤ R/30 (apply Moser inequality in the ball B(x 0 , R/15) where x 0 is such that |xx 0 | = δ(x)). We can use now the fact that B(x, R/2) ⊂ B R and [START_REF] Duoandikoetxea | Fourier analysis[END_REF]p. 28,Proposition 2.3] to get (10.34) g ρ (x) ≤ C ˆ+∞ 0 m(Ω t ∩ B R ) m(B R ) dt Take s > 0, to be chosen later. By (10.25), applied with any valid p > 2 (for instance p = 2n n-1 ), g ρ (x) ≤ C ˆs 0 m(Ω t ∩ B R ) m(B R ) dt + C ˆ+∞ s m(Ω t ∩ B R ) m(B R ) dt ≤ Cs + CR p 2 (1-d) ˆ+∞ s t -p 2 dt ≤ Cs + CR p 2 (1-d) s 1-p 2 . (10.35) We minimize the right-hand side in s. We find s ≈ R 1-d and then g ρ (x) ≤ CR 1-d . The claim (10.32) follows. Let us now prove some pointwise estimates on g ρ when x is close to y. When n ≥ 3, we want to show that (10.36) g ρ (x) ≤ C |x -y| 2-n w(y) if δ(y) ≥ 2|x -y| > 4ρ and δ(y) > 100ρ, where C > 0 is independent of ρ, x and y. When n = 2, we claim that for any ǫ > 0, (10.37) g ρ (x) ≤ C ǫ 1 w(y) δ(y) r ǫ if δ(y) ≥ 2|x -y| > 4ρ and δ(y) > 100ρ, where C ǫ > 0 is also independent of ρ, x and y. 
The proof works a little like when x is far from y, but we need to be a bit more careful about the Poincaré-Sobolev inequality that we use. Set again r = 2|x -y|. Lemma 8.34 applied to the ball B(x, r/20) yields (10.38) g ρ (x) ≤ C m(B(x, r/2)) ˆB(x,r/2) g ρ dm ≤ C m(B r ) ˆBr g ρ dm and then for s > 0 and R > r to be chosen soon, (10.39) g ρ (x) ≤ C ˆs 0 m(Ω t ∩ B r ) m(B r ) dt + C m(B R ) m(B r ) ˆ+∞ s m(Ω t ∩ B R ) m(B R ) dt. Take R = δ(y). The doubling property (2.12) allows us to estimate m(B R ) m(Br ) by δ(y) r n . Let p lie in the range given by Lemma 4.13, and apply (10.25) with R := δ(y) to estimate m(Ω t ∩B R ); we get that (10.40) m(Ω t ∩ B R ) m(B R ) ≤ Ct -p/2 R p 2 (1-d) ≤ C p t -p 2 δ(y) p 2 (1-d) . The bound (10.39) becomes now (10.41) We minimize then the right hand side of (10.41) in s. We take s ≈ δ(y) 1-d δ(y) r 2n p and get that (10.42) g ρ (x) ≤ Cs + C p δ(y) r n δ(y) p 2 (1-d) ˆ+∞ s t -p 2 dt ≤ Cs + C p δ(y) p 2 (1-d)+n r -n s 1-p 2 . g ρ (x) ≤ C p δ(y) 1-d δ(y) r 2n p . The assertion (10.36) follows from (10.42) by taking p = 2n n-2 (which is possible since n ≥ 3) and by recalling that w(y) = δ(y) d+1-n . When n = 2, we have δ(y) 1-d = δ(y) n-d-1 = w(y) -1 and so (10.37) is obtained from (10.42) by taking p = 2n ǫ < +∞. Next we give a bound on the L q -norm of the gradient of g ρ for some q > 1. As before, we want the bound to be independent of ρ so that we can later let our Green function be a weak limit of a subsequence of g ρ . We want to prove first the following Caccioppoli-like inequality: for any r > 4ρ, (10.43) ˆΩ\Br |∇g ρ | 2 dm ≤ Cr -2 ˆBr\Br/2 (g ρ ) 2 dm, where C > 0 is a constant that depends only upon d, n, C 0 and C 1 . Keep r > 4ρ, and let α ∈ C ∞ (R n ) be such that α ≡ 1 on R n \ B r , α ≡ 0 on B r/2 and |∇η| ≤ 4 r . By construction, g ρ lies in W 0 , and thus the function ϕ := α 2 g ρ is supported in Ω \ B r/4 and lies in W 0 thanks to Lemma 5.24. Since we like function with compact support, let us further multiply ϕ by a smooth, compactly supported function ψ R such that ψ R ≡ 1 on a large ball B R . Then ψ R ϕ is compactly supported in Ω \ B ρ , and still lies in W 0 like ϕ. Also, (10.21) says that g ρ lies in W 0 and is a solution of Lg ρ = 0 in Ω \ B ρ ⊃ Ω \ B r/4 . So we may apply the second item of Lemma 8.16, with E = Ω \ B r/4 , and we get that (10.44) ˆΩ A∇g ρ • ∇(ψ R ϕ) dm = 0, but we would prefer to know that (10.45) ˆΩ A∇g ρ • ∇ϕ dm = 0. Fortunately, we proved in (ii) of Lemma 5.30 that with correctly chosen functions ψ R , the product ψ R ϕ tends to ϕ in W ; see (5.38) in particular. Then ˆΩ A∇g ρ • [∇ϕ -∇(ψ R ϕ)] dm ≤ C||∇g ρ || L 2 (dm) ||∇ϕ -∇(ψ R ϕ)|| L 2 (dm) ≤ C||g ρ || W ||ϕ -(ψ R ϕ)|| W (10.46) by the boundedness property (8.9) of A. The right-hand side tends to 0, so (10.45) follows from (10.44). Since ϕ = α 2 g ρ , (10.45) yields (10.47) ˆΩ α 2 [A∇g ρ • ∇g ρ ] dm = -2 ˆΩ αg ρ [A∇g ρ • ∇α] dm. Together with the elliptic and boundedness conditions on A (see (8.10) and (8.9)) and the Cauchy-Schwarz inequality, (10.47) becomes ˆΩ α 2 |∇g ρ | 2 dm ≤ C ˆΩ αg ρ |∇g ρ ||∇α| dm ≤ C ˆΩ α 2 |∇g ρ | 2 dm 1 2 ˆΩ(g ρ ) 2 |∇α| 2 dm 1 2 , (10.48) which can be rewritten (10.49) ˆΩ α 2 |∇g ρ | 2 dm ≤ C ˆΩ(g ρ ) 2 |∇α| 2 dm. The bound (10.43) is then a straightforward consequence of our choice of α. Set Ωt = {x ∈ Ω, |∇g ρ | > t}. As before, there will be two different behaviors. We first check that (10.50) m( Ωt ) ≤ Ct -d+1 d when t ≤ δ(y) -d . Let r ≥ δ(y) be given, to be chosen later. 
The Caccioppoli-like inequality (10.43) and the pointwise bound (10.32) give (10.51) ˆΩ\Br |∇g ρ | 2 dm ≤ Cr -2 ˆBr\Br/2 (g ρ ) 2 dm ≤ Cr -2d m(B r ) ≤ Cr 1-d by (2.5), and hence (10.52) m( Ωt \ B r ) ≤ Ct -2 r 1-d . This yields (10.53) m( Ωt ) ≤ Ct -2 r 1-d + m(B r ) = Ct -2 r 1-d + Cr 1+d because r ≥ δ(y). Take r = t -1 d in (10.53) (and notice that r ≥ δ(y) when t ≤ δ(y) -d ). The claim (10.50) follows. We also want a version of (10.50) when t is big. We aim to prove that (10.54) m( Ωt ) ≤ Cw(y) -1 n-1 t -n n-1 when t ≥ δ(y) -d and n ≥ 3 and for any η ∈ (0, 2), (10.55) m( Ωt ) ≤ C η w(y) -1 δ(y) dη t η-2 when t ≥ δ(y) -d and n = 2. The proof of (10.54) is similar to (10.50) but has an additional difficulty: we cannot use the Caccioppoli-like argument (10.43) when r is smaller than 4ρ. So we will use another way. By (10.20) for the test function φ = g ρ and the elliptic condition (8.10), (10.56) ˆΩ |∇g ρ | 2 dm ≤ C ˆΩ A∇g ρ • ∇g ρ dm = C Bρ g ρ ≤ C m(B ρ ) ˆBρ g ρ dm by (2.17). Let y 0 be such that |yy 0 | = δ(y). We use Hölder's inequality, and then the Sobolev-Poincaré inequality (4.15), with p in the range given by Lemma 4.13, to get that ˆΩ |∇g ρ | 2 dm ≤ C p m(B ρ ) -1 m(B ρ ) 1-1 p ˆBρ (g ρ ) p dm 1 p ≤ C p m(B ρ ) -1 p ˆB(y 0 ,2δ(y)) (g ρ ) p dm 1 p ≤ C p m(B ρ ) -1 p δ(y)m(B 3δ(y) ) 1 p -1 2 ˆΩ |∇g ρ | 2 dm 1 2 , (10.57) that is, (10.58) ˆΩ |∇g ρ | 2 dm ≤ C p m(B ρ ) -2 p δ(y) 2 m(B δ(y) ) 2 p -1 . We use the fact that 100ρ < δ(y) and Lemma 2.3 to get that m(B ρ ) ≈ ρ n w(y) = ρ n δ(y) d+1-n . Besides, notice that m(B 3δ(y) ) ≈ δ(y) d+1 . We end up with (10.59) ˆΩ |∇g ρ | 2 dm ≤ C p ρ -2n p w(y) -2 p δ(y) 2+(d+1)( 2 p -1) = C p δ(y) ρ 2n p δ(y) 1-d once we recall that w(y) = δ(y) d+1-n . Observe that the right-hand side of (10.59) is similar to the one of (10.42). In the same way as below (10.42) we take p = 2n n-2 when n ≥ 3 and p = 4 ǫ when n = 2, and obtain that (10.60) ˆΩ |∇g ρ | 2 dm ≤ Cw(y) -1 ρ 2-n if n ≥ 3 C ǫ w(y) -1 δ(y) ρ ǫ for any ǫ > 0 if n = 2. Let r ≤ δ(y), to be chosen soon. Now we show that (10.61) ˆΩ\Br |∇g ρ | 2 dm ≤ Cw(y) -1 r 2-n if n ≥ 3 C ǫ w(y) -1 δ(y) r ǫ for any ǫ > 0 if n = 2. When r ≤ 4ρ, this is a consequence of (10.60), and when 4ρ < r ≤ δ(y), this can be proven as we proved (10.51), by using Caccioppoli-like inequality (10.43) and the pointwise bounds (10.36) or (10.37). That is, we say that (10.62) ˆΩ\Br |∇g ρ | 2 dm ≤ Cr -2 ˆBr\Br/2 (g ρ ) 2 dm ≤ Cr -2 m(B r ) 1 w(y) 2 r 2(2-n) δ(y) r 2ǫ and we observe that m(B r ) ≈ w(y)r n . Let n ≥ 3. We deduce from (10.61) that m( Ωt \ B r ) ≤ Ct -2 r 2-n w(y) -1 and then, since m(B r ) ≤ Cr n w(y) and thanks to Lemma 2.3, (10.63) m( Ωt ) ≤ Cw(y) -1 t -2 r 2-n + m(B r ) ≤ Ct -2 w(y) -1 r 2-n + Cr n w(y). Choose r = [tw(y)] -1 n-1 (which is smaller than δ(y) if t ≥ δ(y) -d ) in (10.63). This yields (10.54). Let n = 2 and let η ∈ (0, 2) be given. Set ǫ := 2η 2-η > 0. In this case, (10.61) gives (10.64) m( Ωt \ B r ) ≤ Ct -2 w(y) -1 δ(y) r ǫ and then since m(B r ) ≤ Cr 2 w(y) by Lemma 2.3, (10.65) m( Ωt ) ≤ Ct -2 w(y) -1 δ(y) r ǫ + Cr 2 w(y). We want to minimize the above quantity in r. We take r = δ(y) 2(1-d)+ǫ 2+ǫ t -2 2+ǫ , which is smaller than δ(y) when t ≥ δ(y) -d and we find that (10.66) m( Ωt ) ≤ Ct -4 2+ǫ δ(y) 2(1-d)+ǫ(d+1) 2+ǫ = Ct η-2 δ(y) 1-d+ηd , with our choice of ǫ. Since w(y) -1 = δ(y) 1-d when n = 2, the claim (10.55) follows. We plan to show now that ∇g ρ ∈ L q (B R , w) for 1 ≤ q < n/(n -1), and the L q (B R , w)norm of ∇g ρ can be bounded uniformly in ρ. 
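As a quick check of the exponents in (10.66): with ǫ = 2η/(2 − η) we have 2 + ǫ = 4/(2 − η), hence
\[
\frac{4}{2+\epsilon} = 2-\eta,
\qquad
\frac{2(1-d)+\epsilon(d+1)}{2+\epsilon} = \frac{(2-\eta)(1-d)+\eta(d+1)}{2} = 1-d+\eta d,
\]
which is the form used to conclude (10.55). We now turn to the L^q bounds on ∇g^ρ announced above.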
More precisely, we claim that for R ≥ δ(y) and 1 ≤ q < n/(n -1), (10.67) ˆBR |∇g ρ | q dm ≤ C q R d(1-q)+1 , where C q is independent of ρ and R. Let s ∈ (0, δ(y) -d ] be given, to be chosen soon. Then (10.68) ˆBR |∇g ρ | q dm ≤ C ˆs 0 t q-1 m(B R )dt+C ˆδ(y) -d s t q-1 m( Ωt ∩B R )dt+C ˆ+∞ δ(y) -d t q-1 m( Ωt ∩B R )dt. Let us call I 1 , I 2 and I 3 the three integrals in the right hand side of (10.68). By Lemma 2.3, I 1 ≤ Cs q m(B R ) ≤ Cs q R d+1 . The second integral I 2 is bounded with the help of (10.50), which gives (10.69) I 2 ≤ C ˆδ(y) -d s t q-1-d+1 d dt ≤ C s q-d+1 d -δ(y) d(1-q)+1 . When n ≥ 3, the last integral I 3 is bounded with the help of (10.54) and we obtain, when q < n n-1 , (10.70) I 3 ≤ Cw(y) -1 n-1 ˆ+∞ δ(y) -d t q-1-n n-1 dt ≤ Cw(y) -1 n-1 δ(y) -qd+ nd n-1 = Cδ(y) 1+d(1-q) where the last equality is obtained by using the fact that w(y) = δ(y) d+1-n . Note also that the same bound (10.70) can be obtained when n = 2 by using (10.55) with η = 2-q 2 . The left-hand side of (10.68) can be now bounded for every n ≥ 2 by (10.71) ˆBR |∇g ρ | q dm ≤ Cs q R d+1 + C s q-d+1 d -δ(y) d(1-q)+1 + Cδ(y) 1+d(1-q) = Cs q R d+1 + Cs q-d+1 d , where the third term in the middle is dominated by s q-d+1 d because I 2 ≥ 0. We take s = R -d ≤ δ(y) -d in the right hand side of (10.71) to get the claim (10.67). As we said, we want to define the Green function as a weak limit of functions g ρ , 0 < ρ ≤ δ(y)/100. We want to prove that for q ∈ (1, n n-1 ) and R > 0, (10.72) g ρ W 1,q (B R ) ≤ C q,R , where C q,R is independent of ρ (but depends, among others things, on y, q and R). First, it is enough to prove the result for R ≥ 2δ(y). Thanks to (10.67), the quantity ∇g ρ L q (B R ,w) is bounded uniformly in ρ ∈ (0, δ(y)/100). Due to (2.17), the quantity ∇g ρ L q (B R ) is bounded uniformly in ρ. Now, due to [START_REF] Maz | Sobolev spaces with applications to elliptic partial differential equations[END_REF]Corollary 1.1.11], we deduce that g ρη ∈ W 1,q (B R ) and hence with the classical Poincaré inequality on balls that (10.73) ˆBR g ρ - B R g ρ q ≤ C q,R ∇g ρ q L q (B R ) ≤ C q,R , where C q,R > 0 is independent of ρ. Choose y 0 ∈ Γ such that |y -y 0 | = δ(y 0 ). Note that B(y 0 , δ(y)/2) ⊂ B R because R ≥ 2δ(y) , so (10.73) implies that (10.74) B(y 0 ,δ(y)/2) g ρ - B R g ρ q ≤ ˆBR g ρ - B R g ρ q ≤ C q,R and hence also, by the triangle inequality, (10.75) ˆBR g ρ - B(y 0 ,δ(y)/2) g ρ q ≤ C q,R ˆBR g ρ - B R g ρ q . Together with (10.73), we obtain (10.76) ˆBR |g ρ | q ≤ C q,R 1 + B(y 0 ,δ(y)/2) |g ρ | q and since (10.32) gives that ffl B(y 0 ,δ(y)/2) |g ρ | ≤ Cδ(y) 1-d , the claim (10.72) follows. Fix q 0 ∈ (1, n n-1 ), for instance, take q 0 = 2n+1 2n-1 . Due to (10.72), for all R > 0, the functions (g ρ ) 0<100ρ<δ(y) are uniformly bounded in W 1,q 0 (B R ). So a diagonal process allows us to find a sequence (ρ η ) η≥1 converging to 0 and a function g ∈ L 1 loc (R n ) such that (10.77) g ρη ⇀ g = g(., y) in W 1,q 0 (B R ), for all R > 0. Let q ∈ (1, n n-1 ) and R > 0. The functions g ρη are uniformly bounded in W 1,q (B R ) thanks to (10.72). So we can find a subsequence g ρ η ′ of g ρη such that g ρ η ′ converges weakly to some function g (q,R) ∈ W 1,q (B R ). Yet, by uniqueness of the limit, g equals g (q,R) almost everywhere in B R . As a consequence, up to a subsequence (that depends on q and R), (10.78) g ρη ⇀ g = g(., y) in W 1,q (B R ). The assertion (10.4) follows. 
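For the reader's convenience, here is a quick verification of the exponent bookkeeping used above; it is not part of the argument, only a check carried out with the notation already in place. With s = R^{-d} (which is ≤ δ(y)^{-d} since R ≥ δ(y)), the two terms on the right-hand side of (10.71) coincide:
\[
s^{q}R^{d+1}=R^{-dq+d+1}=R^{d(1-q)+1},
\qquad
s^{\,q-\frac{d+1}{d}}=R^{-dq+d+1}=R^{d(1-q)+1},
\]
which is the right-hand side of (10.67). Moreover, since w(y)=\delta(y)^{d+1-n},
\[
w(y)^{-\frac{1}{n-1}}\,\delta(y)^{-qd+\frac{nd}{n-1}}
=\delta(y)^{\frac{n-d-1}{n-1}+\frac{nd}{n-1}-qd}
=\delta(y)^{1+d-qd}
=\delta(y)^{1+d(1-q)},
\]
which is the exponent appearing in (10.70); the integral there converges precisely because q-1-\frac{n}{n-1}<-1, that is, q<\frac{n}{n-1}. Finally, the choice q_0=\frac{2n+1}{2n-1} is admissible since (2n+1)(n-1)=2n^2-n-1<2n^2-n=n(2n-1), i.e. q_0<\frac{n}{n-1}.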
We aim now to prove (10.3), that is (10.79) (1 -α)g ∈ W 0 whenever α ∈ C ∞ 0 (R n ) satisfies α ≡ 1 on B r for some r > 0. So we choose α ∈ C ∞ 0 (R n ) and r > 0 such that α ≡ 1 on B r . Since α is compactly supported, we can find R > 0 such that supp α ⊂ B R . For any η ∈ N such that 4ρ η ≤ r and 100ρ η < δ(y), (1 -α)g ρη W ≤ g ρη ∇α L 2 (B R \Br,w) + (1 -α)∇g ρη L 2 (Ω\Br,w) ≤ C α sup B R \Br g ρη + C α ∇g ρη L 2 (Ω\Br,w) . (10.80) Thanks to (10.32), (10.36) and (10.37), the term sup B R \Br g ρη can be bounded by a constant that doesn't depend on η, provided that ρ η ≤ min(r/4, δ(y)/100). In the same way, (10.61) proves that ∇g ρη L 2 (Ω\Br ,w) can be also bounded by a constant independent of η. As a consequence, for any η satisfying 4ρ η ≤ r, (10.81) (1α)g ρη W ≤ C α where C α is independent of η. Note also that for η large enough, (1α)g ρη belongs to W 0 because g ρη ∈ W 0 by construction, and by Lemma 5.24. Therefore, the functions (1α)g ρη , η ∈ N large, lie in a fixed closed ball of the Hilbert space W 0 . So, up to a subsequence, there exists f α ∈ W 0 such that (1α)g ρη ⇀ f α in W 0 . By uniqueness of the limit, we have (1 -α)g = f α ∈ W 0 , that is (10.82) (1 -α)g ρη ⇀ (1 -α)g in W 0 . The claim (10.79) follows. Observe that (10.79 ) implies that g ∈ W r (R n \ {y}). Indeed, take ϕ ∈ C ∞ 0 (R n \ {y}). We can find r > 0 such that ϕ ≡ 0 in B r . Construct now α ∈ C ∞ 0 (B r ) such that α ≡ 1 in B r/2 and we have (10.83) ϕg = ϕ[(1 -α)g] ∈ W 0 ⊂ W by (10.79) and Lemma 5.24. Hence g ∈ W r (R n \ {y}). Now we want to prove (10.5). Fix q ∈ (1, n/(n -1)) and a function φ ∈ C ∞ 0 (B δ(y)/2 ) such that φ ≡ 1 in B δ(y)/4 . Then let ϕ be any function in C ∞ 0 (Ω). Let us first check that (10.84) a(g, φϕ) := ˆΩ A∇g • ∇[φϕ]dx = ϕ(y) and (10.85) a(g, (1 -φ)ϕ) := ˆΩ A∇g • ∇[(1 -φ)ϕ]dx = 0. The map a(., φϕ) is a bounded linear functional on W 1,q (B δ(y)/2 ) and thus the weak convergence (in W 1,q (B R )) of a subsequence g ρ η ′ of g ρη yields (10.86) a(g, φϕ) = lim η ′ →+∞ a(g ρ η ′ , φϕ) = lim ρ→0 B(y,ρ) φϕ = ϕ(y), which is (10.84). Let α ∈ C ∞ 0 (B δ(y)/4 ) be such that α ≡ 1 on B δ(y)/8 . The map a(., (1φ)ϕ) is bounded on W 0 thus the weak convergence of a subsequence of (1α)g ρη to (1α)g in W 0 gives a(g, (1 -φ)ϕ) = a((1 -α)g, (1 -φ)ϕ) = lim η ′ →+∞ a((1 -α)g ρ η ′ , (1 -φ)ϕ) = lim η ′ →+∞ a(g ρ η ′ , (1 -φ)ϕ) = lim ρ→0 B(y,ρ) (1 -φ)ϕ = 0. (10.87) which is (10.85). The assertion (10.5) now follows from (10.84) and (10.85). If we use (10.5) for the functions in C ∞ 0 (Ω \ {y}), we immediately obtain that (10.88) g is a solution of Lg = 0 on Ω \ {y}. Assertions (10.6) and (10.8) come from the weak lower semicontinuity of the L q -norms and the bounds (10.51), (10.61) and (10.67). Notice also that r 1-d ≈ r 2-n w(y) when r is near δ(y), so the cut-off between the different cases does not need to be so precise. Let us show (10.7). Let R > 0 be a big given number. We have shown that the sequence g ρη is uniformly bounded in W 1,q (B R ). Then, by the Rellich-Kondrachov theorem, there exists a subsequence of g ρη that also converges strongly in L 1 (B R ) and then another subsequence of g ρη that converges almost everywhere in B R . The estimates (10.32), (10.36) and (10.37) yield then (10.89) 0 ≤ g(x) ≤      C|x -y| 1-d if 4|x -y| ≥ δ(y) C|x-y| 2-n w(y) if 2|x -y| ≤ δ(y), n ≥ 3 Cǫ w(y) δ(y) |x-y| ǫ if 2|x -y| ≤ δ(y), n = 2, a.e. on B R . 
But by (10.88) g is a solution of Lg = 0 on Ω \ {y}, so it is continuous on R^n \ {y} by Lemmas 8.40 and 8.106, and the bounds (10.89) actually hold pointwise in Ω ∩ B_R \ {y}. Since R can be chosen as large as we want, the bounds (10.7) follow. It remains to check the weak estimates (10.9) and (10.10). Set q = (2n+1)/(2n-1), which satisfies 1 < q < n/(n-1) < n/(n-2). Let t > 0 be given; by the weak lower semicontinuity of the L^q-norm,

(10.90) t^q m({x ∈ B_R, g(x) > t})/m(B_R) ≤ m(B_R)^{-1} ||g||^q_{L^q(B_R,w)} ≤ lim inf_{η→+∞} m(B_R)^{-1} ||g^{ρ_η}||^q_{L^q(B_R,w)}.

Let us use [Duo, p. 28, Proposition 2.3]; in the case of (10.9), we could manage otherwise, but we also want to get (10.10) with the same proof. We observe that

(10.91) t^q m({x ∈ B_R, g(x) > t})/m(B_R) ≤ lim inf_{η→+∞} { ∫_0^t s^{q-1} m({x ∈ B_R, g(x) > t, g^{ρ_η} > s})/m(B_R) ds + ∫_t^{+∞} s^{q-1} m({x ∈ B_R, g(x) > t, g^{ρ_η} > s})/m(B_R) ds } ≤ (t^q/q) m({x ∈ B_R, g(x) > t})/m(B_R) + lim inf_{η→+∞} ∫_t^{+∞} s^{q-1} m({x ∈ B_R, g^{ρ_η} > s})/m(B_R) ds.

Let p lie in the range given by Lemma 4.13. The bound (10.25) gives

(10.92) t^q m({x ∈ B_R, g(x) > t})/m(B_R) ≤ C lim inf_{η→+∞} ∫_t^{+∞} s^{q-1} m({x ∈ B_R, g^{ρ_η} > s})/m(B_R) ds ≤ C_p R^{(p/2)(1-d)} ∫_t^{+∞} s^{q-1-p/2} ds ≤ C_p R^{(p/2)(1-d)} t^{q-p/2}.

The estimate (10.9) follows by dividing both sides of (10.92) by t^q. The same ideas are used to prove (10.10) from (10.50), (10.54) and (10.55). This finally completes the proof of Lemma 10.2.

Lemma 10.93. Any non-negative function g : Ω×Ω → R∪{+∞} that verifies the following conditions: (i) for every y ∈ Ω and α ∈ C^∞_0(R^n) such that α ≡ 1 in B(y, r) for some r > 0, the function (1-α)g(., y) lies in W_0, (ii) for every y ∈ Ω, the function g(., y) lies in W^{1,1}(B(y, δ(y))), (iii) for y ∈ Ω and ϕ ∈ C^∞_0(Ω),

(10.94) ∫_Ω A∇_x g(x, y) • ∇ϕ(x) dx = ϕ(y),

enjoys the following pointwise lower bound:

(10.95) g(x, y) ≥ C^{-1} |x - y|^2 / m(B(y, |x - y|)) ≈ |x - y|^{2-n} / w(y) for x, y ∈ Ω such that 0 < |x - y| ≤ δ(y)/2.

Proof. Let g satisfy the assumptions of the lemma, fix y ∈ Ω, write g(x) for g(x, y), and use B_r for B(y, r). Thus we want to prove that

(10.96) g(x) ≥ |x - y|^2 / (C m(B_{|x-y|})) whenever 0 < |x - y| ≤ δ(y)/2.

With our assumptions, g ∈ W_r(R^n \ {y}) and it is a solution in Ω \ {y} with zero trace; the proof is the same as for (10.83) and (10.88) in Lemma 10.2. Take x ∈ Ω \ {y} such that |x - y| ≤ δ(y)/2. Write r for |x - y| and let α ∈ C^∞_0(Ω \ {y}) be such that α = 1 on B_r \ B_{r/2}, α = 0 outside of B_{3r/2} \ B_{r/4}, and |∇α| ≤ 8/r. Using Caccioppoli's inequality (Lemma 8.26) with the cut-off function α, we obtain

(10.97) ∫_{B_r\B_{r/2}} |∇g|^2 dm ≤ C r^{-2} ∫_{B_{3r/2}\B_{r/4}} g^2 dm ≤ C r^{-2} m(B_{3r/2}) sup_{B_{3r/2}\B_{r/4}} g^2 ≤ C r^{-2} m(B_r) sup_{B_{3r/2}\B_{r/4}} g^2

by the doubling property (2.12). We can cover B_{3r/2} \ B_{r/4} by a finite (independent of y and r) number of balls of radius r/20 centered in B_{3r/2} \ B_{r/4}. Then use the Harnack inequality given by Lemma 8.42 several times, together with (10.97), to get that

(10.98) ∫_{B_r\B_{r/2}} |∇g|^2 dm ≤ C r^{-2} m(B_r) g(x)^2.

Define another function η ∈ C^∞_0(Ω) which is supported in B_r, with η ≡ 1 on B_{r/2} and |∇η| ≤ C/r. Using η as a test function in (10.94), we get that

(10.99) 1 = η(y) = ∫_Ω A∇g • ∇η dx ≤ C r^{-1} ∫_{B_r\B_{r/2}} |∇g| dm,

where we used (8.9) for the last estimate. Together with the Cauchy-Schwarz inequality and (10.98), this yields

(10.100) 1 ≤ C r^{-1} m(B_r)^{1/2} (∫_{B_r\B_{r/2}} |∇g|^2 dm)^{1/2} ≤ C r^{-2} m(B_r) g(x).

The lower bound (10.96) follows.

In the sequel, A^T denotes the transpose matrix of A, defined by A^T_{ij}(x) = A_{ji}(x) for x ∈ Ω and 1 ≤ i, j ≤ n. Thus A^T satisfies the same boundedness and elliptic conditions as A.
That is, it satisfies (8.7) and (8.8) with the same constant C 1 . We can thus define solutions to L T u :=div A T ∇u = 0 for which the results given in Section 8 hold. Denote by g : Ω × Ω → R ∪ {+∞} the Green function defined in Lemma 10.2, and by g T : Ω × Ω → R ∪ {+∞} the Green function defined in Lemma 10.2, but with A is replaced by A T . Lemma 10.101. With the notation above, (10.102) g(x, y) = g T (y, x) for x, y ∈ Ω, x = y. In particular, the functions y → g(x, y) satisfy the estimates given in Lemma 10.2 and Lemma 10.93. Proof. The proof is the same as for [START_REF] Grüter | The Green function for uniformly elliptic equations[END_REF]Theorem 1.3]. Let us review it for completeness. Let x, y ∈ Ω be such that x = y. Set B = B( x+y 2 , |x -y|) and let q ∈ (1, n n-1 ). From the construction given in Lemma 10.2 (see (10.78) in particular), there exists two sequences (ρ ν ) ν and (σ µ ) µ converging to 0 such that g ρν (., y) and g σµ T (., x) converge weakly in W 1,q (B) to g(., y) and g T (., x) respectively. So, up to additional subsequence extractions, g ρν (., y) and g σµ T (., x) converge to g(., y) and g T (., x), strongly in L 1 (B), and then pointwise a.e. in B. Inserting them as test functions in (10.20), we obtain (10.103) ˆΩ A∇g ρν (z, y) • ∇g σµ T (z, x)dz = B(y,ρν ) g σµ T (z, x)dz = B(x,σµ) g ρν (z, y)dz. We want to let σ µ tend to 0. The term ffl B(y,ρν ) g σµ T (z, x)dz tends to ffl B(y,ρν ) g T (z, x)dz because g T (., x) σµ tends to g T (., x) in L 1 (B). When ρ ν is small enough, the function g ρν (., y) is a solution of Lg ρν = 0 in Ω \ B(y, ρ ν ) ∋ x, so it is continuous at x thanks to Lemma 8.40. Therefore, the term ffl B(x,σµ) g ρν (z, y)dz tends to g ρν (x, y). We deduce, when ν is big enough so that ρ ν < |x -y|, (10.104) B(y,ρν ) g T (z, x)dz = g ρν (x, y). Now let ρ ν tend to 0 in (10.104). The function g T (., x) is a solution for L T in Ω \ {x}, so it is continuous on B(y, ρ ν ) for ν large. Hence the left-hand side of (10.104) converges to g T (y, x). Thanks to Lemma 8.40, the functions g ρν (., y) are uniformly Hölder continuous, so the a.e. pointwise convergence of g ρν (., y) to g(., y) can be improved into a uniform convergence on B(x, 1 3 |x -y|). In particular g ρν (x, y) tends to g(x, y) when ρ ν goes to 0. We get that g T (y, x) = g(x, y), which is the desired conclusion. Lemma 10.105. Let g : Ω × Ω → R ∪ {+∞} be the non-negative function constructed in Lemma 10.2. Then for any f ∈ C ∞ 0 (Ω), the function u defined by (10.106) u(x) = ˆg(x, y)f (y)dy belongs to W 0 and is a solution of Lu = f in the sense that (10.107) ˆΩ A∇u • ∇ϕ = ˆΩ A∇u • ∇ϕ dm = ˆΩ f ϕ for every ϕ ∈ W 0 . Proof. First, let us check that (10.106) make sense. Since f ∈ C ∞ 0 (Ω), there exists a big ball B with center y and radius R > δ(y) such that supp f ⊂ B. By (10.4) and (10.102), g(x, .) lies in L 1 (B). Hence the integral in (10.106) is well defined. Let f ∈ C ∞ 0 (Ω). Choose a big ball B f centered on Γ such that suppf ⊂ B f . For any ϕ ∈ W 0 , (10.108) ˆΩ f ϕ ≤ f ∞ ˆBf |ϕ| ≤ C f ϕ W by Lemma 4.1. So the map ϕ ∈ W 0 → ´f ϕ is a bounded linear functional on W 0 . Since the map a(u, v) = ´Ω A∇u • ∇v dm is bounded and coercive on W 0 , the Lax-Milgram theorem yields the existence of u ∈ W 0 such that for any ϕ ∈ W 0 , (10.109) ˆΩ A∇u • ∇ϕ = ˆΩ f ϕ. We want now to show that u(x) = ´Ω g(x, y)f (y)dy. A key point of the proof uses the continuity of u, a property that we assume for the moment and will prove later on. 
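Before carrying this out, it may help to record the purely formal computation that the rigorous argument below (via the approximations g^ρ_T and the continuity of u) is designed to justify; the interchange of integrals, and the differentiation of u(x) = ∫ g(x,y)f(y) dy under the integral sign, are exactly what need to be made precise:
\[
\int_\Omega A\nabla u(x)\cdot\nabla\varphi(x)\,dx
\;=\;\int_\Omega f(y)\Big(\int_\Omega A\nabla_x g(x,y)\cdot\nabla\varphi(x)\,dx\Big)\,dy
\;=\;\int_\Omega f(y)\,\varphi(y)\,dy,
\]
where the last equality is (10.5). This is only a heuristic, but it explains why the function u of (10.106) should be the solution produced by the Lax-Milgram theorem.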
For every ρ > 0, let g ρ T (., x) ∈ W 0 be the function satisfying (10.110) ˆΩ A T ∇ y g ρ T (y, x) • ∇ϕ(y)dy = B(x,ρ) ϕ(y) dy for every ϕ ∈ W 0 . We use g ρ T (., x) as a test function in (10.109) and get that ˆΩ f (y)g ρ T (y, x)dy = ˆΩ A∇u(y) • ∇ y g ρ T (y, x)dy = ˆΩ A T ∇ y g ρ T (y, x) • ∇u(y)dy = B(x,ρ) u(y) dy, (10.111) by (10.110). We take a limit as ρ goes to 0. The right-hand side converges to u(x) because, as we assumed, u is continuous. Choose R ≥ δ(x) so big that supp f ⊂ B(x, R), and choose also q ∈ (1, n n-1 ). According to (10.78), there exists a sequence ρ ν converging to 0 such that g ρν T (., x) converges weakly in W 1,q (B(x, R)) ⊂ L 1 (B(x, R)) to the function g T (., x), the latter being equal to g(x, .) by Lemma 10.101. Hence (10.112) lim ν→+∞ ˆΩ f (y)g ρν T (y, x)dy = ˆΩ f (y)g(x, y)dy and then (10.106) holds. It remains to check what we assumed, that is the continuity of u on Ω. The quickest way to show it is to prove a version of the Hölder continuity (Lemma 8.40) when u is a solution of Lu = f . As for the proof of Lemma 8.40, since we are only interested in the continuity inside the domain, we can use the standard elliptic theory, where the result is well known (see for instance [START_REF] Gilbarg | Elliptic partial differential equations of second order[END_REF]Theorem 8.22]). The following Lemma states the uniqueness of the Green function. Lemma 10.113. There exists a unique function g : Ω × Ω → R ∪ {+∞} such that g(x, .) is continuous on Ω \ {x} and locally integrable in Ω for every x ∈ Ω, and such that for every f ∈ C ∞ 0 (Ω) the function u given by (10.114) u(x) := ˆΩ g(x, y)f (y)dy belongs to W 0 and is a solution of Lu = f in the sense that (10.115) ˆΩ A∇u • ∇ϕ = ˆΩ A∇u • ∇ϕ dm = ˆΩ f ϕ for every ϕ ∈ W 0 . Proof. The existence of the Green function is given by Lemma 10.2, Lemma 10.101 and Lemma 10.105. Indeed, if g is the function built in Lemma 10.2, the property (10.4) (together with Lemma 10.101) states that g(x, .) is locally integrable in Ω. The property (10.5) (and Lemma 10.101 again) gives that g(x, .) is a solution in Ω \ {x}, and thus, by Lemma 8.40, that g(x, .) is continuous in Ω \ {x}. The last property, i.e. that fact that u given by (10.114) is in W 0 and satisfies (10.115), is exactly Lemma 10.105. So it remains to prove the uniqueness. Assume that g is another function satisfying the given properties. Thus for f ∈ C ∞ 0 (Ω), the function ũ given by (10.116) ũ(x) := ˆΩ g(x, y)f (y)dy belongs to W 0 and satisfies Lũ = f . By the uniqueness of the solution of the Dirichlet problem (9.4) (see Lemma 9.3), we must have ũ = u. Therefore, for all x ∈ Ω and all f ∈ C ∞ 0 (Ω), (10.117) ˆΩ[g(x, y)g(x, y)]f (y)dy = 0. From the continuity of g(x, .) and g(x, .) in Ω \ {x}, we deduce that g(x, y) = g(x, y) for any x, y ∈ Ω, x = y. We end this section with an additional property of the Green function, its decay near the boundary. This property is proven in [GW] under the assumption that Ω is of 'S class', which means that we can find an exterior cone at any point of the boundary. We still can prove it in our context because the property relies on the Hölder continuity of solutions at the boundary, that holds in our context because we have (Harnack tubes and) Lemma 8.106. Lemma 10.118. The Green function satisfies (10.119) g(x, y) ≤ Cδ(x) α |x -y| 1-d-α for x, y ∈ Ω such that |x -y| ≥ 4δ(x), where C > 0 and α > 0 depend only on n, d, C 0 and C 1 . Proof. Let y ∈ Ω be given. For any x ∈ Ω, we write g(x) for g(x, y). 
We want to prove that

(10.124) g(z) ≤ C r^{1-d} = C |x - y|^{1-d} for z ∈ Ω ∩ B,

even if d < 1, and with a constant C > 0 that does not depend on x, y, or x_0. We now use the fact that g is a solution of Lg = 0 on Ω ∩ B. Notice that its oscillation on B is the same as its supremum, because it is nonnegative and, by (i) of Lemma 10.93, its trace on Γ ∩ B vanishes. Lemma 8.106 (the Hölder continuity of solutions at the boundary) says that for some α > 0, that depends only on n, d, C_0 and C_1,

(10.125) g(x, y) = g(x) ≤ sup_{B(x_0,δ(x))} g = osc_{B(x_0,δ(x))} g ≤ C (3δ(x)/|x-y|)^α osc_{B(x_0,|x-y|/3)} g = C (3δ(x)/|x-y|)^α sup_{B(x_0,|x-y|/3)} g ≤ C (δ(x)/|x-y|)^α |x-y|^{1-d}

because B = B(x_0, |x-y|/3). The lemma follows.

11. The comparison principle

In this section, we prove two versions of the comparison principle: one for the harmonic measure (Lemma 11.135) and one for locally defined solutions (Lemma 11.146). A big technical difference is that the former deals with globally defined solutions, while the latter is local. At the time of writing this manuscript, the proofs of the comparison principle in codimension 1 that we are aware of cannot be straightforwardly adapted to the case of higher codimension. To be more precise, we can indeed prove the comparison principle (in higher codimension) for harmonic measures on Γ by only slightly modifying the arguments of [CFMS, Ken]. However, the proof of the comparison principle for solutions (of Lu = 0) defined on a subset D of Ω in the case of codimension 1 relies on the use of the harmonic measure on the boundary ∂D (see for instance [CFMS, Ken]). In our setting, in the case where the considered functions are non-negative and solutions to Lu = 0 only on a subset D ⊊ Ω, we are lacking a definition for harmonic measures with mixed boundaries (some parts in codimension 1 and some parts in higher codimension). The reader can imagine a ball B centered at a point of ∂Ω = Γ: the boundary of B ∩ Ω consists of Γ ∩ B and ∂B ∩ Ω, two sets of different co-dimensions. For those reasons, our proof of the comparison principle (in higher codimension) for locally defined functions differs nontrivially from the one in [CFMS, Ken]. Therefore, in a first subsection, we illustrate our arguments in the case of codimension 1 to build the reader's intuition.

11.1. Discussion of the comparison theorem in codimension 1. We present here two proofs of the comparison principle in the codimension 1 case. The first proof is the one that can be found in [CFMS, Ken], and the second one is our alternative proof. We assume in this subsection that the reader knows, or can consult, the results in the first three sections of [Ken], which contain the analogue in codimension 1 of the results proved in the previous sections. For simplicity, the domain Ω ⊂ R^n that we study is a special Lipschitz domain, that is Ω = {(y, t) ∈ R^{n-1} × R, ϕ(y) < t}, where ϕ : R^{n-1} → R is a Lipschitz function. The elliptic operator that we consider is L = -div A∇, where A is a matrix with bounded measurable coefficients satisfying the classical elliptic condition (see for instance (1.1.1) in [Ken]). Yet, the change of variable ρ : R^{n-1} × R → R^{n-1} × R defined by ρ(y, t) = (y, t - ϕ(y)) maps Ω onto R^n_+ := {(y, t) ∈ R^{n-1} × R, t > 0} and changes the elliptic operator L into another operator of the same form, -div A∇, where A is again a matrix with bounded measurable coefficients satisfying the elliptic condition (1.1.1) in [Ken]. Therefore, in the sequel, we reduce our choices of Ω and Γ = ∂Ω to R^n_+ and R^{n-1} = {(y, 0) ∈ R^n, y ∈ R^{n-1}} respectively.
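As a side remark, not needed in what follows, let us recall the standard computation showing that such a bi-Lipschitz change of variables preserves the class of operators under consideration; the notation Φ, ψ and Ã below is only used in this aside. Write Φ := ρ^{-1}, that is Φ(y, t) = (y, t + ϕ(y)), and set ũ := u ∘ Φ and ψ̃ := ψ ∘ Φ for a test function ψ. Then
\[
\int_{\Omega} A\nabla u\cdot\nabla\psi\,dx
=\int_{\mathbb{R}^n_+}\Big(|\det D\Phi|\,D\Phi^{-1}(A\circ\Phi)\,D\Phi^{-T}\Big)\nabla\tilde u\cdot\nabla\tilde\psi\,d\tilde x,
\qquad
D\Phi(y,t)=\begin{pmatrix} I_{n-1} & 0\\ \nabla\varphi(y)^{T} & 1\end{pmatrix},
\]
where \varphi denotes the Lipschitz graph function above. Hence the transformed matrix is \tilde A=|\det D\Phi|\,D\Phi^{-1}(A\circ\Phi)\,D\Phi^{-T}. Since \det D\Phi=1 and \|D\Phi\|,\|D\Phi^{-1}\|\le 1+\|\nabla\varphi\|_\infty, the matrix \tilde A is again bounded, measurable and uniformly elliptic, with constants that depend only on those of A and on the Lipschitz norm of \varphi.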
Let us recall some facts that also hold in the present context. If u ∈ W 1,2 (D) and D is a Lipschitz set, then u has a trace on the boundary of D and hence we can give a sense to the expression u = h on ∂D. If in addition, the function u is a solution of Lu = 0 in D (the notion of solution is taken in the weak sense, see for instance [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Definition 1.1.4]) and h is continuous on ∂D, then u is continuous on D. The Green function (associated to the domain Ω = R n + and the elliptic operator L) is denoted by g(X, Y ) -with X, Y ∈ Ω -and the harmonic measure (associated to Ω and L) is written ω X (E) -with X ∈ Ω and E ⊂ Γ. The notation ω X D (E) -where X ∈ D and E ⊂ ∂D -denotes the harmonic measure associated to the domain D (and the operator L). When x 0 = (y 0 , 0) ∈ Γ and r > 0, we use the notation A r (x 0 ) for (y 0 , r). In this context, the comparison principle given in [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Lemma 1.3.7] is Lemma 11.1 (Comparison principle, codimension 1). Let x 0 ∈ Γ and r > 0. Let u, v ∈ W 1,2 (Ω ∩ B(x 0 , 2r)) be two non-negative solutions of Lu = Lv = 0 in Ω ∩ B(x 0 , 2r) satisfying u = v = 0 on Γ ∩ B(x 0 , 2r). Then for any X ∈ Ω ∩ B(x 0 , r), we have (11.2) C -1 u(A r (x 0 )) v(A r (x 0 )) ≤ u(X) v(X) ≤ C u(A r (x 0 )) v(A r (x 0 )) , where C > 0 depends only on the dimension n and the ellipticity constants of the matrix A. Proof. We recall quickly the ideas of the proof of the comparison principle found in [Ken]. Let x 0 ∈ Γ and r > 0 be given. We denote A r (x 0 ) by X 0 and, for α > 0, B(x 0 , αr) by B α . The proof of (11.2) is reduced to the proof of the upper bound (11.3) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) for X ∈ B 1 ∩ Ω because of the symmetry of the role of u and v. Step 1: Upper bound on u. By definition of the harmonic measure, (11.4) u(X) = ˆ∂(Ω∩B 3/2 ) u(y)dω X Ω∩B 3/2 (y) for X ∈ B 3/2 ∩ Ω. Note that ∂(Ω ∩ B 3/2 ) = (∂B 3/2 ∩ Ω) ∪ (Γ ∩ B 3/2 ). Hence, for any X ∈ B 3/2 ∩ Ω, (11.5) u(X) = ˆ∂B 3/2 ∩Ω u(y)dω X Ω∩B 3/2 (y) + ˆΓ∩B 3/2 u(y)dω X Ω∩B 3/2 (y) = ˆ∂B 3/2 ∩Ω u(y)dω X Ω∩B 3/2 (y) because, by assumption, u = 0 on Γ ∩ B 2 . Lemma 1.3.5 in [Ken] gives now, for any Y ∈ B 7/4 , the bound u(Y ) ≤ Cu(X 0 ) with a constant C > 0 which is independent of Y . So by the positivity of the harmonic measure, we have for any X ∈ B 3/2 ∩ Ω u(X) ≤ Cu(X 0 ) ˆ∂B 3/2 ∩Ω dω X Ω∩B 3/2 (y) ≤ Cu(X 0 )ω X Ω∩B 3/2 (∂B 3/2 ∩ Ω). (11.6) Step 2: Lower bound on v. First, again by definition of the harmonic measure, we have that for X ∈ B 3/2 ∩ Ω, (11.7) v(X) = ˆ∂(Ω∩B 3/2 ) v(y)dω X Ω∩B 3/2 (y). Set E = {y ∈ ∂B 3/2 ∩Ω ; dist(y, Γ) ≥ 1 2 r}. By assumption, v ≥ 0 on ∂(Ω∩B 3/2 ). In addition, thanks to the Harnack inequality, v(y) ≥ C -1 v(X 0 ) for every y ∈ E, with a constant C > 0 that is independent of y. So the positivity of the harmonic measure yields, for (11.8) Step 3: Conclusion. X ∈ B 3/2 ∩ Ω, v(X) ≥ C -1 v(X 0 ) ˆE dω X Ω∩B 3/2 (y) ≥ C -1 v(X 0 )ω X Ω∩B 3/2 (E). From steps 1 and 2, we deduce that (11.9) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) ω X Ω∩B 3/2 (∂B 3/2 ∩ Ω) ω X Ω∩B 3/2 (E) for X ∈ Ω ∩ B 3/2 . The inequality (11.3) is now a consequence of the doubling property of the harmonic measure (see for instance (1.3.7) in [Ken]), that gives (11.10) ω X Ω∩B 3/2 (∂B 3/2 ∩ Ω) ≤ Cω X Ω∩B 3/2 (E) for X ∈ Ω ∩ B 1 . The lemma follows. The proof above relies on the use of the harmonic measure for the domain Ω ∩ B 3/2 . 
We want to avoid this, and use only the Green functions and harmonic measures related to the domain Ω itself. First, we need a way to compare two functions in a domain, that is a suitable maximum principle. In the previous proof of Lemma 11.1, the maximum principle was replaced/hidden by the positivity of the harmonic measure, whose proof makes a crucial use of the maximum principle for solutions. See [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Definition 1.2.6] for the construction of the harmonic measure, and [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Corollary 1.1.18] for the maximum principle. The maximum principle that we will use is the following. Lemma 11.11. Let F ⊂ E ⊂ R n be two sets such that dist(F, R n \ E) > 0. Let u be a solution in E ∩ Ω such that (i) ˆE |∇u| 2 < +∞, (ii) u ≥ 0 on Γ ∩ E, (iii) u ≥ 0 in (E \ F ) ∩ Ω. Then u ≥ 0 in E ∩ Ω. In a more 'classical' maximum principle, assumption (iii) would be replaced by (iii') u ≥ 0 in ∂E ∩ Ω. Since this subsection aims to illustrate what we will do in the next subsection, we state here a maximum principle which is as close as possible to the one we will actually prove in higher codimension. Let us mention that using (iii) instead of (iii') will not make computations harder or easier. However, (iii) is much easier to define and use in the higher codimension case (to the point that we did not even try to give a precise meaning to (iii')). We do not prove Lemma 11.11 here, because the proof is the same as for Lemma 11.32 below, which is its higher codimension version. Notice that Lemma 11.11 is really a maximum principle where we use the values of u on a boundary (Γ ∩ E) ∪ (Ω ∩ F \ E) that surrounds E to control the values of u in Ω ∩ E, but here the boundary also has a thick part, Ω ∩ F \ E. This makes it easier to define Dirichlet conditions on that thick set, which is the main point of (iii). The first assumption (i) is a technical hypothesis, it can be seen as a way to control u at infinity, which is needed because we actually do not require E or even F to be bounded. Lemma 11.11 will be used in different situations. For instance, we will use it when E = 2B and F = B, where B is a ball centered on Γ. Step 1 (modified): We want to find an upper estimate for u that avoids using the measure ω X Ω∩B 3/2 . Lemma 1.3.5 in [Ken] gives, as before, that u(X) ≤ Cu(X 0 ) for any X ∈ B 7/4 ∩ Ω. The following result states the non-degeneracy of the harmonic measure. (11.12) ω X (Γ \ B 5/4 ) ≥ C -1 for X ∈ Ω \ B 3/2 , where C > 0 is independent of x 0 , r or X. Indeed, when X ∈ Ω \ B 3/2 is close to the boundary, the lower bound (11.12) can be seen as a consequence of the Hölder continuity of solutions. The proof for all X ∈ Ω \ B 3/2 is then obtained with the Harnack inequality. See [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Lemma 1.3.2] or Lemma 11.73 below for the proof. From there, we deduce that (11.13) u(X) ≤ Cu(X 0 )ω X (Γ \ B 5/4 ) for X ∈ Ω ∩ [B 7/4 \ B 3/2 ]. We want to use the maximum principle given above (Lemma 11.11), with E = B 7/4 and F = B 3/2 . However, the function X → ω X (Γ \ B 5/4 ) doesn't satisfy the assumption (i) of Lemma 11.11. So we take h ∈ C ∞ (R n ) such that 0 ≤ h ≤ 1, h ≡ 1 on R n \ B 5/4 , and h ≡ 0 on R n \ B 9/8 . Define u h as the only solution of Lu h = 0 in Ω with the Dirichlet condition u h = h on Γ. 
We have u h (X) ≥ ω X (Γ \ B 5/4 ) by the positivity of the harmonic measure, and thus the bound (11.13) yields the existence of K 0 > 0 (independent of x 0 , r, X) such that (11.14) u 1 (X) := K 0 u(X 0 )u h (X) -u(X) ≥ 0 for X ∈ Ω ∩ [B 7/4 \ B 3/2 ]. It would be easy to check that u 1 ≥ 0 on Γ ∩ B 7/4 and ´E |∇u 1 | < +∞, but we leave the details because they will be done in the larger codimension case. So Lemma 11.11 gives that u 1 ≥ 0 in Ω ∩ B 7/4 , that is (11.15) u(X) ≤ K 0 u(X 0 )u h (X) ≤ K 0 u(X 0 )ω X (Γ \ B 9/8 ) for X ∈ Ω ∩ B 7/4 , by definition of h and positivity of the harmonic measure. Step 2 (modified): In the same way, we want to adapt Step 2 of the proof of Lemma 11.1. If we want to proceed as in Step 1, we would like to find and use a function f that keeps the main properties of the object ω X Ω∩B 7/4 (E), where E = y ∈ ∂B 7/4 , ; dist(y, Γ) ≥ r/2 . For instance, f such that (a) f is a solution of Lf = 0 in Ω ∩ B 7/4 , (b) f ≤ 0 in Γ ∩ B 7/4 , (c) f ≤ 0 in {X ∈ Ω, dist(X, Γ) < r/2} ∩ [B 7/4 \ B 3/2 ], (d) f (X) ≈ ω X (Γ \ B 9/8 ) in Ω ∩ B 1 , in particular f > 0 in Ω ∩ B 1 . The last point is important to be able to conclude (in Step 3). It is given by the doubling property of the harmonic measure (11.10) in the previous proof of Lemma 11.1. We were not able to find such a function f . However, we can construct an f that satisfies some conditions close to (a), (b), (c) and (d) above. Since f fails to verify exactly (a), (b), (c) and (d), extra computations are needed. First, note that it is enough to prove that there exists M > 0 depending only on n and the ellipticity constants of A, such that for y 0 ∈ Γ, s > 0, and any non-negative solution v to Lv = 0 in B(y 0 , Ms) (11.16) v(X) ≥ C -1 v(A s (y))ω X (Γ \ B(y 0 , 2s)) for X ∈ Ω ∩ B(y 0 , s), where here the the corkscrew point A s (y) is just A s (y) = (y, s). Indeed, if we have (11.16), then we can prove that, in the situation of Step 2, (11.17) v(X) ≥ C -1 v(X 0 )ω X (Γ \ B 2 ) for X ∈ Ω ∩ B 1 by using a proper covering of the domain Ω ∩ B 1 (if X ∈ Ω ∩ B 1 lies within 1 4M of Γ, say, we use (11.16) with y 0 ∈ Γ close to X and s = 1 2M , and then the Harnack inequality; if instead X ∈ Ω ∩ B 1 is far from the boundary Γ, (11.17) is only a consequence of the Harnack inequality). The conclusion (11.3) comes then from (11.15), (11.17) and the doubling property of the harmonic measure (see for instance (1.3.7) in [Ken]). It remains to prove the claim (11.16). Let y 0 ∈ Γ, s > 0, and v be given. Write Y 0 for A s (y 0 ) and, for α > 0, write B ′ α for B(y 0 , αs). Let K 1 and K 2 be some positive constants that are independent of y 0 , s, and X, and will be chosen later. Pick h K 2 ∈ C ∞ (R n ) such that h K 2 ≡ 1 on R n \ B ′ K 2 , 0 ≤ h K 2 ≤ 1 everywhere, and h K 2 ≡ 0 on B K 2 /2 . Define u K 2 as the solution of Lu K 2 = 0 in Ω with the Dirichlet condition h K 2 on Γ, that will serve as a smooth substitute for X → ω X (Γ \ B ′ K 2 ). Define a function f y 0 ,s on Ω \ {Y 0 } by (11.18) f y 0 ,s (X) = s n-2 g(X, Y 0 ) -K 1 u K 2 . When |X -Y 0 | ≥ s/8, the term s n-2 g(X, Y 0 ) is uniformly bounded: this fact can be found in [HoK] (for n ≥ 3) and [DK] (for n = 2). In addition, due to the non-degeneracy of the harmonic measure (same argument as for (11.12), similar to [START_REF] Kenig | Harmonic analysis techniques for second order elliptic boundary value problems[END_REF]Lemma 1.3.2]), there exists C > 0 (independent of K 2 > 0) such that ω X (Γ \ B ′ K 2 ) ≥ C -1 for X ∈ Ω \ B ′ 2K 2 . 
Hence we can find K_1 > 0 such that for any choice of K_2 > 0, we have

(11.19) f_{y_0,s}(X) ≤ 0 for X ∈ Ω \ B′_{2K_2}.

For the sequel, we state an important result. There holds

(11.20) C^{-1} s^{n-2} g(X, Y_0) ≤ ω^X(Γ \ B′_2) ≤ C s^{n-2} g(X, Y_0) for X ∈ Ω ∩ [B′_1 \ B(Y_0, s/8)],

where C > 0 depends only on n and the ellipticity constant of the matrix A. This result can be seen as an analogue of [Ken, Corollary 1.3.6]. It is proven in the higher codimension case in Lemma 11.78 below. The equivalence (11.20) can be seen as a weak version of the comparison principle, dealing only with harmonic measures and Green functions. It can be proven, like [Ken, Corollary 1.3.6], before the full comparison principle by using the specific properties of the Green functions and harmonic measures. We want to take K_2 > 0 so large that

(11.21) f_{y_0,s}(X) ≥ (1/2) s^{n-2} g(X, Y_0) for X ∈ Ω ∩ [B′_1 \ B(Y_0, s/8)].

We build a smooth substitute u_4 for ω^X(Γ \ B′_2), namely the solution of Lu_4 = 0 in Ω with the Dirichlet condition u_4 = h_4 on Γ, where h_4 ∈ C^∞(R^n), h_4 ≡ 1 on R^n \ B′_4, 0 ≤ h_4 ≤ 1 everywhere, and h_4 ≡ 0 on B′_2. Thanks to the Hölder continuity of solutions and the nondegeneracy of the harmonic measure, we have that for X ∈ Ω ∩ [B′_{10} \ B′_5] and any K_2 ≥ 20,

(11.22) C^{-1} u_{K_2}(X) ≤ (K_2)^{-α} ≤ C (K_2)^{-α} u_4(X),

with constants C, α > 0 independent of K_2, y_0, s or X. Since the functions u_{K_2} and u_4 are smooth enough, and C^{-1} u_{K_2} = 0 ≤ C(K_2)^{-α} u_4(X) on Γ ∩ B′_{10}, the maximum principle (Lemma 11.11) implies that

(11.23) u_{K_2}(X) ≤ C (K_2)^{-α} u_4(X) for X ∈ Ω ∩ B′_{10}.

We use (11.20) to get that for K_2 ≥ 20,

(11.24) K_1 u_{K_2}(X) ≤ C K_1 (K_2)^{-α} s^{n-2} g(X, Y_0) for X ∈ Ω ∩ [B′_1 \ B(Y_0, s/8)].

The inequality (11.21) can now be obtained by taking K_2 ≥ 20 so that C K_1 (K_2)^{-α} ≤ 1/2. From (11.21) and (11.20), we deduce that

(11.25) f_{y_0,s}(X) ≥ C^{-1} ω^X(Γ \ B′_2) for X ∈ B′_1 \ B(Y_0, s/8),

where C > 0 depends only on n and the ellipticity constants of the matrix A. Recall that our goal is to prove the claim (11.16), which will be established with M = 4K_2. Let v be a non-negative solution of Lv = 0 in Ω ∩ B′_{4K_2}. We can find K_3 > 0 (independent of y_0, s and X) such that

(11.26) v(X) ≥ K_3 v(Y_0) f_{y_0,s}(X) for X ∈ B(Y_0, s/4) \ B(Y_0, s/8).

Indeed f_{y_0,s}(X) ≤ s^{n-2} g(X, Y_0) ≤ C when |X - Y_0| ≥ s/8, thanks to the pointwise bounds on the Green function (see [HoK], [DK]), and v(X) ≥ C^{-1} v(Y_0) when |X - Y_0| ≤ s/4, by the Harnack inequality. In addition, thanks to (11.19),

(11.27) v(X) ≥ 0 ≥ K_3 v(Y_0) f_{y_0,s}(X) for X ∈ Ω ∩ [B′_{4K_2} \ B′_{2K_2}],

and it is easy to check that

(11.28) v(y) ≥ 0 ≥ K_3 v(Y_0) f_{y_0,s}(y) for y ∈ Γ ∩ B′_{4K_2}.

We can apply our maximum principle, that is Lemma 11.32, with E = B′_{4K_2} \ B(Y_0, s/8) and F = B′_{2K_2} \ B(Y_0, s/4), and get that

(11.29) v(X) ≥ K_3 v(Y_0) f_{y_0,s}(X) for X ∈ Ω ∩ [B′_{4K_2} \ B(Y_0, s/8)].

In particular, thanks to (11.25),

(11.30) v(X) ≥ C^{-1} v(Y_0) ω^X(Γ \ B′_2) for X ∈ Ω ∩ [B′_1 \ B(Y_0, s/8)].

Since both v and X → ω^X(Γ \ B′_2) are solutions in Ω ∩ B′_2, the Harnack inequality proves

(11.31) v(X) ≥ C^{-1} v(Y_0) ω^X(Γ \ B′_2) for X ∈ Ω ∩ B′_1.

The claim (11.16) follows, which ends our alternative proof of Lemma 11.1.

11.2. The case of codimension higher than 1. We need first the following version of the maximum principle.

Lemma 11.32. Let F ⊂ R^n be a closed set and E ⊂ R^n an open set such that F ⊂ E ⊂ R^n and dist(F, R^n \ E) > 0. Let u ∈ W_r(E) be a supersolution for L in Ω ∩ E such that (i) ∫_E |∇u|^2 dm < +∞, (ii) T u ≥ 0 a.e. on Γ ∩ E, (iii) u ≥ 0 a.e. in (E \ F) ∩ Ω. Then u ≥ 0 a.e. in E ∩ Ω.

Proof. The present proof is a slight variation of the proof of Lemma 9.13. Set v := min{u, 0} in E ∩ Ω and v := 0 in Ω \ E. Note that v ≤ 0. We want to use v as a test function. We claim that

(11.33) v lies in W_0 and is supported in F.

Pick η ∈ C^∞_0(E) such that η = 1 in F and η ≥ 0 everywhere. Since u ∈ W_r(E), we have ηu ∈ W, from which we deduce min{0, ηu} ∈ W by Lemma 6.1. By (iii), v = min{0, ηu} almost everywhere and hence v ∈ W. Notice that T(ηu) ≥ 0 because of Assumption (ii) (and Lemma 8.3). Hence v = min{ηu, 0} ∈ W_0. And since (iii) also proves that v is supported in F, the claim (11.33) follows. Since v is in W_0, Lemma 5.30 proves that v can be approached in W by a sequence of functions (v_k)_{k≥1} in C^∞_0(Ω) (i.e., that are compactly supported in Ω; see (5.29)). Note also that the construction used in Lemma 5.30 allows us, since v ≤ 0 is supported in F, to take v_k ≤ 0 and compactly supported in E.

for some K_1 > 0 that is independent of x and y. Define u on Ω \ {y} by u(x) = K_1 δ(y)^{1-d} - g(x, y). Notice that u is a solution in Ω \ B(y, δ(y)/4), by (10.5). Also, thanks to Lemma 10.2

Let us prove the existence of "corkscrew points" in Ω.

Lemma 11.46. There exists ǫ > 0, that depends only upon the dimensions d and n and the constant C_0, such that for x_0 ∈ Γ and r > 0, there exists a point A_r(x_0) ∈ Ω such that (i) |A_r(x_0) - x_0| ≤ r, (ii) δ(A_r(x_0)) ≥ ǫr. In particular, δ(A_r(x_0)) ≈ |A_r(x_0) - x_0| ≈ r.

Before we prove the comparison theorem, we need a substitute for [Ken, Lemma 1.3.4].

Lemma 11.50. Let x_0 ∈ Γ and r > 0 be given, and let X_0 := A_r(x_0) be as in Lemma 11.46. Let u ∈ W_r(B(x_0, 2r)) be a non-negative, non identically zero, solution of Lu = 0 in B(x_0, 2r) ∩ Ω, such that T u ≡ 0 on B(x_0, 2r) ∩ Γ. Then

(11.51) u(X) ≤ C u(X_0) for X ∈ B(x_0, r),

where C > 0 depends only on d, n, C_0 and C_1.

Proof. We follow the proof of [JK, Lemma 4.4]. Let x ∈ Γ and s > 0 be such that T u ≡ 0 on B(x, s) ∩ Γ. Then the Hölder continuity of solutions given by Lemma 8.41 proves the existence of ǫ > 0 (that depends only on d, n, C_0, C_1) such that

(11.52) sup_{B(x,ǫs)} u ≤ (1/2) sup_{B(x,s)} u.

Without loss of generality, we can choose ǫ < 1/2. A rough idea of the proof of (11.51) is that u(x) should not be near the maximum of u when x lies close to B(x_0, r) ∩ Γ, because of (11.52). Then we are left with points x that lie far from the boundary, and we can use the Harnack inequality to control u(x). The difficulty is that when x ∈ B(x_0, r) lies close to Γ, we need u(x) to be bounded by values of u inside the domain, and not by values of u near Γ and closer to the exterior of B(x_0, r).
We will prove this latter fact by contradiction: we show that if sup B(x 0 ,r) u exceeds a certain bound, then we can construct a sequence of points X k ∈ B(x 0 , 3 2 r) such that δ(X k ) → 0 and u(X k ) → +∞, and hence we contradict the Hölder continuity of solutions at the boundary. Since u(X) > 0 somewhere, the Harnack inequality (Lemma 8.42), maybe applied a few times, yields u(X 0 ) > 0. We can rescale u and assume that u(X 0 ) = 1. We claim that there exists M > 0 such that for any integer N ≥ 1 and Y ∈ B(x 0 , 3 2 r), (11.53) δ(Y ) ≥ ǫ N r =⇒ u(Y ) ≤ M N , where ǫ comes from (11.52) and the constant M depends only upon d, n, C 0 , C 1 . The statement is definitely a little strange, because it seems to be going the wrong way. However, the closer Y is to Γ, the harder it is to estimate u(Y ), even though we expect u(Y ) to be small because of the Dirichlet condition. We will prove this by induction. The base case (and in fact we will manage to start directly from some large integer N 0 ) is given by the following. Let M 2 > 0 be the value given by Lemma 11.46 when M 1 := 1 ǫ . Let N 0 ≥ 1 be the smallest integer such that M 2 ≤ ǫ -N 0 . We want to show the existence of M 3 ≥ 1 such that (11.54) u(Y ) ≤ M 3 for every Y ∈ B(x 0 , 3 2 r) such that δ(Y ) ≥ ǫ N 0 r. Indeed, if Y ∈ B(x 0 , 3 2 r) satisfies δ(Y ) ≥ ǫ N 0 r, Lemma 2. 1 and the fact that |x 0 -X 0 | ≈ r (by Lemma 11.46) imply the existence of a Harnack chain linking Y to X 0 . More precisely, we can find balls B 1 , . . . , B h with a same radius, such that Y ∈ B 1 , X 0 ∈ B h , 3B i ⊂ B(x 0 , 2r)\Γ for i ∈ {1, . . . , h}, and B i ∩ B i+1 = ∅ for i ∈ {1, . . . , h -1}, and in addition h is bounded independently of x 0 , r and Y . Together with the Harnack inequality (Lemma 8.42), we obtain (11.54). This proves (11.53) for N = N 0 , but also directly for 1 ≤ N ≤ N 0 , if we choose M ≥ M 3 . For any point Y ∈ B(x 0 , 3 2 r) such that δ(Y ) ≤ ǫ N 0 r ≤ r M 2 , Lemma 11.48 (and our choice of M 2 ) gives the existence of Z ∈ B(x 0 , 3 2 r) ∩ B(Y, M 2 δ(Y )) such that δ(Z) ≥ M 1 δ(Y ). Since Z ∈ B(Y, M 2 δ(Y ) ) and δ(Z) > δ(Y ) > 0, Lemma 2.1 implies the existence of a Harnack chain whose length is bounded by a constant depending on d, n, C 0 (and M 2 -but M 2 depends only on the three first parameters) and together with the Harnack inequality (Lemma 8.42), we obtain the existence of M 4 ≥ 1 (that depends only on d, n, C 0 and C 1 ) such that u(Y ) ≤ M 4 u(Z). So we just proved that (11.55) for any Y ∈ B(x 0 , 3 2 r) such that δ(Y ) ≤ ǫ -N 0 r, there exists Z ∈ B(x 0 , 3 2 r) such that δ(Z) ≥ M 1 δ(Y ) and u(Y ) ≤ M 4 u(Z). We turn to the main induction step. Set M = max{M 3 , M 4 } ≥ 1 and let N ≥ N 0 be given. Assume, by induction hypothesis, that for any (Z). By the induction hypothesis, u(Y ) ≤ M N +1 . This completes our induction step, and the proof of (11.53) for every N ≥ 1. Z ∈ B(x 0 , 3 2 r) satisfying δ(Z) ≥ ǫ N r, we have u(Z) ≤ M N . Let Y ∈ B(x 0 , 3 2 r) be such that δ(Y ) ≥ ǫ N +1 r . The assertion (11.55) yields the existence of Z ∈ B(x 0 , 3 2 r) such that δ(Z) ≥ M 1 δ(Y ) = ǫ -1 δ(Y ) ≥ ǫ N r and u(Y ) ≤ M 4 u(Z) ≤ Mu Choose an integer i such that 2 i ≥ M, where M is the constant of (11.53) that we just found, and then set M ′ = M i+3 . We want to prove by contradiction that (11.56) u(X) ≤ M ′ u(X 0 ) = M ′ for every X ∈ B(x 0 , r). 
So we assume that (11.57) there exists X 1 ∈ B(x 0 , r) such that u(X 1 ) > M ′ and we want to prove by induction that for every integer k ≥ 1, (11.58) there exists X k ∈ B(x 0 , 3 2 r) such that u(X k ) > M i+2+k and |X k -x 0 | ≤ 3 2 r -2 -k r. The base step of the induction is given by (11.57) and we want to do the induction step. Let k ≥ 1 be given and assume that (11.58) holds. From the contraposition of (11.53), we deduce that δ( X k ) < ǫ i+2+k r. Choose x k ∈ Γ such that |X k -x k | = δ(X k ) < ǫ i+2+k r. By the induction hypothesis, (11.59) |x k -x 0 | ≤ |x k -X k | + |X k -x k | ≤ 3r 2 -2 -k r + ǫ i+2+k r and, since ǫ ≤ 1 2 , (11.60) |x k -x 0 | ≤ 3r 2 -2 -k r + 2 -2-k r. Now, due to (11.52), we can find k+1) . X k+1 ∈ B(x k , ǫ 2+k r) such that (11.61) u(X k+1 ) ≥ 2 i sup X∈B(x k ,ǫ i+2+k r) u(X) ≥ 2 i u(X k ) ≥ M i+2+ ( The induction step will be complete if we can prove that (11.62) by (11.60) and because ǫ ≤ 1 2 . Let us sum up. We assumed the existence of X 1 ∈ B(x 0 , r) such that u(X 1 ) > M ′ and we end up with (11.58), that is a sequence X k of values in B(x 0 , 3 2 r) such that u(X k ) increases to +∞. Up to a subsequence, we can thus find a point in B(x 0 , 3 2 r) where u is not continuous, which contradicts Lemma 8.106. Hence u(X) ≤ M ′ = M ′ u(X 0 ) for X ∈ B(x 0 , r). Lemma 11.50 follows. |X k+1 -x 0 | ≤ 3 2 r -2 -(k+1) r. Indeed, |X k+1 -x 0 | ≤ |X k+1 -x k | + |x k -x 0 | ≤ ǫ 2+k r + 3r 2 -2 -k r + 2 -2-k r ≤ 3r 2 -2 -k r + 2 -1-k r = 3 2 r -2 -k-1 r Lemma 11.63. Let x 0 ∈ Γ and r > 0 be given, and set X 0 := A r (x 0 ) as in Lemma 11.46. Then for all X ∈ Ω \ B(X 0 , δ(X 0 )/4), (11.64) r d-1 g(X, X 0 ) ≤ Cω X (B(x 0 , r) ∩ Γ) and (11.65) r d-1 g(X, X 0 ) ≤ Cω X (Γ \ B(x 0 , 2r)), where C > 0 depends only on d, n, C 0 and C 1 . Proof. We prove (11.64) first. Let h ∈ C ∞ 0 (B(x 0 , r)) satisfy h ≡ 1 on B(x 0 , r/2) and 0 ≤ h ≤ 1. Define then u ∈ W as the solution of Lu = 0 with data T h given by Lemma 9.3. Set v(X) = 1u(X) ∈ W and observe that 0 ≤ v ≤ 1 and T v = 0 on B(x 0 , r/2) ∩ Γ. By Lemma 8.106, we can find ǫ > 0 (that depends only on d, n, C 0 , C 1 ) such that v(A ǫr (x 0 )) ≤ 1 2 , i.e. u(A ǫr (x 0 )) ≥ 1 2 . The existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42) give (11.66) C -1 ≤ u(X) for X ∈ B(X 0 , δ(X 0 )/2). By Lemma 10.2 (v), g(X, X 0 ) ≤ C|X-X 0 | 1-d for X ∈ Ω\B(X 0 , δ(X 0 )/4). Since δ(X 0 ) ≈ r by construction of X 0 , (11.67) r d-1 g(X, X 0 ) ≤ C for X ∈ B(X 0 , δ(X 0 )/2) \ B(X 0 , δ(X 0 )/4). The combination of (11.66) and (11.67) yields the existence of K 1 > 0 (depending only on n, d, C 0 and C 1 ) such that (11.68) r d-1 g(X, X 0 ) ≤ K 1 u(X) for X ∈ B(X 0 , δ(X 0 )/2) \ B(X 0 , δ(X 0 )/4). We claim that K 1 u(X)r d-1 g(X, X 0 ) satisfies the assumptions of Lemma 11.32, with E = R n \ B(X 0 , δ(X 0 )/4) and F = R n \ B(X 0 , δ(X 0 )/2). Indeed Assumption (i) of Lemma 11.32 is satisfied because u ∈ W and by Lemma 10.2 (i). Assumption (ii) of Lemma 11.32 holds because T u = h ≥ 0 by construction and also T g(., X 0 ) = 0 thanks to Lemma 10.2 (i). Assumption (iii) of Lemma 11.32 is given by (11.68). The lemma yields (11.69) r d-1 g(X, X 0 ) ≤ K 1 u(X) for X ∈ Ω \ B(X 0 , δ(X 0 )/4). By the positivity of the harmonic measure, u(X) ≤ ω X (B(x 0 , r) ∩ Γ) for X ∈ Ω; (11.64) follows. Let us turn to the proof of (11.65). 
We want to find two points x 1 , x 2 ∈ Γ ∩ [B(x 0 , Kr) \ B(x 0 , 4r)], where the constant K ≥ 10 depends only on C 0 and d, such that X 1 := A r (x 1 ) and X 2 := A r (x 2 ) satisfy (11.70) B(X 1 , δ(X 1 )/4) ∩ B(X 2 , δ(X 2 )/4) = ∅. To get such points, we use the fact that Γ is Ahlfors regular to find M ≥ 3 (that depends only on C 0 and d) such that Γ 1 := Γ ∩ [B(x 0 , 2Mr) \ B(x 0 , 6r)] = ∅ and Γ 2 := Γ ∩ [B(x 0 , 2M 2 r) \ B(x 0 , 6Mr)] = ∅. Any choice of points x 1 ∈ Γ 1 and x 2 ∈ Γ 2 verifies (11.70). Let X ∈ Ω \ B(X 0 , δ(X 0 )/4). Thanks to (11.70), there exists i ∈ {1, 2} such that X / ∈ B(X i , δ(X i )/4). The existence of Harnack chains (Lemma 2.1), the Harnack inequality (Lemma 8.42), and the fact that Y → g(X, Y ) is a solution of L T u :=div A T ∇u = 0 in Ω \ {X} (Lemma 10.2 and Lemma 10.101) yield (11.71) r d-1 g(X, X 0 ) ≤ Cr 1-d g(X, X i ). By (11.64) and the positivity of the harmonic measure, (11.72) r d-1 g(X, X 0 ) ≤ Cr 1-d g(X, X i ) ≤ Cw X (B(x i , r) ∩ Γ) ≤ Cw X (Γ \ B(x 0 , r)). The lemma follows. We turn now to the non-degeneracy of the harmonic measure. Lemma 11.73. Let α > 1, x 0 ∈ Γ, and r > 0 be given, and let X 0 := A r (x 0 ) ∈ Ω be as in Lemma 11.46. Then (11.74) ω X (B(x 0 , r) ∩ Γ) ≥ C -1 α for X ∈ B(x 0 , r/α), (11.75) ω X (B(x 0 , r) ∩ Γ) ≥ C -1 α for X ∈ B(X 0 , δ(X 0 )/α), (11.76) ω X (Γ \ B(x 0 , r)) ≥ C -1 α for X ∈ Ω \ B(x 0 , αr), and (11.77) ω X (Γ \ B(x 0 , r)) ≥ C -1 α for X ∈ B(X 0 , δ(X 0 )/α), where C α > 0 depends only upon d, n, C 0 , C 1 and α. Proof. Let us first prove (11.74). Set u(X) = 1ω X (B(x 0 , r) ∩ Γ). By Lemma 9.38, u lies in W r (B(x 0 , r)), is a solution of Lu = 0 in Ω ∩ B(x 0 , r), and has a vanishing trace on Γ ∩ B(x 0 , r). So the Hölder continuity of solutions at the boundary (Lemma 8.106) gives the existence of an ǫ > 0, that depends only on d, n, C 0 , C 1 and α, such that u (X) ≤ 1 2 for every X ∈ B(x 0 , 1 2 [1 + 1 α ]r) such that δ(X) ≤ ǫr. Thus v(X) := ω X (B(x 0 , r) ≥ 1 2 for X ∈ B(x 0 , 1 2 [1 + 1 α ]r ) such that δ(X) ≤ ǫr. We now deduce (11.74) from the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42). The assertion (11.75) follows from (11.74). Indeed, (11.74) implies that ω A r/2 (x 0 ) (B(x 0 , r)∩ Γ) ≥ C -1 . The existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42) allow us to conclude. Finally (11.76) and (11.77) can be proved as above, and we leave the details to the reader. Lemma 11.78. Let x 0 ∈ Γ and r > 0 be given, and set X 0 = A r (x 0 ). Then (11.80) where C > 0 depends only upon d, n, C 0 and C 1 . (11.79) C -1 r d-1 g(X, X 0 ) ≤ ω X (B(x 0 , r) ∩ Γ) ≤ Cr d-1 g(X, X 0 ) for X ∈ Ω \ B(x 0 , 2r), and C -1 r d-1 g(X, X 0 ) ≤ ω X (Γ \ B(x 0 , 2r)) ≤ Cr d-1 g(X, X 0 ) for X ∈ B(x 0 , r) \ B(X 0 , δ(X 0 )/4), Proof. The lower bounds are a consequence of Lemma 11.63; the one in (11.79) also requires to notice that δ(X 0 ) ≤ r and thus B(X 0 , δ(X 0 )/4) ⊂ B(x 0 , 2r). It remains to check the upper bounds. But we first prove an intermediate result. We claim that for φ ∈ C ∞ (R n ) ∩ W and X / ∈ supp φ, (11.81) u φ (X) = -ˆΩ A∇φ(Y ) • ∇ y g(X, Y )dY, where u φ ∈ W is the solution of Lu φ = 0, with the Dirichlet condition T u φ = T φ on Γ, given by Lemma 9.3. Indeed, recall that by (8.9) and (8.10) the map Let s > 0 such that B(X, 2s) ∩ (supp φ ∪ Γ) = ∅. 
For any ρ > 0 we define, as we did in (10.20), the function g ρ T = g ρ T (., X) on Ω as the only function in W 0 such that (11.85) (11.82) u, v ∈ W 0 → ˆΩ A∇u • ∇v = ˆΩ A∇u • ∇v dm ˆΩ A∇ϕ • ∇g ρ T = B(X,ρ) ϕ ∀ϕ ∈ W 0 . We take ϕ = g ρ T in (11.84) to get (11.86) ˆΩ A∇φ • ∇g ρ T = ˆΩ A∇v • ∇g ρ T = B(X,ρ) v. We aim to take the limit as ρ → 0 in (11.86). Since v satisfies (11.87) ˆΩ A∇v • ∇ϕ = ˆΩ A∇φ • ∇ϕ = 0 ∀ϕ ∈ C ∞ 0 (B(X, 2s)), v is a solution of Lv = 0 on B(X, 2s) and thus Lemma 8.40 proves that v is continuous at X. As a consequence, (11.88) lim ρ→0 B(X,ρ) v = v(X). Recall that the g ρ T , ρ > 0, are the same functions as in in the proof of Lemma 10.2, but for the transpose matrix A T . Let α ∈ C ∞ 0 (B(x, 2s)) be such that α ≡ 1 on B(x, s). By (10.82) and Lemma 10.101, there exists a sequence (ρ η ) tending to 0, such that (1α)g ρη T converges weakly to (1α)g T (., X) = (1α)g(X, .) in W 0 . As a consequence, lim η→+∞ ˆΩ A∇φ • ∇g ρη T = lim η→+∞ ˆΩ A∇φ • ∇[(1 -α)g ρη T ] = ˆΩ A∇φ(Y ) • ∇ y [(1 -α)g(X, Y )]dY = ˆΩ A∇φ(Y ) • ∇ y g(X, Y )dY. (11.89) The combination of (11.86), (11.88) and (11.89) yields (11.90) ˆΩ A∇φ(Y ) • ∇ y g(X, Y )dY = v(X). Since v ∈ W 0 satisfies (11.84), the function u φ = φv lies in W and is a solution of Lu φ = 0 with the Dirichlet condition T u φ = T φ. Hence (11.91) ˆΩ A∇φ(Y ) • ∇ y g(X, Y )dY = v(X) = φ(X) -u φ (X) = -u φ (X), by (11.90) and because X / ∈ supp φ. The claim (11.81) follows. We turn to the proof of the upper bound in (11.79), that is, (11.92) ω X (B(x 0 , r) ∩ Γ) ≤ Cr d-1 g(X, X 0 ) for X ∈ Ω \ B(x 0 , 2r). Let X ∈ Ω \ B(x 0 , 2r) be given, and choose φ ∈ C ∞ 0 (R n ) such that 0 ≤ φ ≤ 1, φ ≡ 1 on B(x 0 , r), φ ≡ 0 on R n \ B(x 0 , 5 4 r), and |∇φ| ≤ 10 r . We get that (11.93) u φ (X) ≤ C r ˆB(x 0 , 5 4 r) |∇ y g(X, Y )|dm(Y ) by (11.81) and (8.9), and since ω X (B(x 0 , r) ∩ Γ) ≤ u φ (X) by the positivity of the harmonic measure, ω X (B(x 0 , r) ∩ Γ) ≤ C r ˆB(x 0 , 5 4 r) |∇ y g(X, Y )|dm(Y ). ≤ C r r d+1 2 ˆB(x 0 , 5 4 r) |∇ y g(X, Y )| 2 dX 1 2 (11.94) by Cauchy-Schwarz' inequality and Lemma 2.3. Since X ∈ Ω \ B(x 0 , 2r), Lemma 10.101 and Lemma 10.2 (iii) say that the function Y → g(X, Y ) is a solution of L T u :=div A T ∇u on B(x 0 , 2r), with a vanishing trace on Γ ∩ B(x 0 , 2r). So the Caccioppoli inequality at the boundary (see Lemma 8.47) applies and yields ω X (B(x 0 , r) ∩ Γ) ≤ C r 2 r d+1 2 ˆB(x 0 , 3 2 r) |g(X, Y )| 2 dm(Y ) 1 2 . (11.95) Then by Lemma 11.50, (11.96) the bound (11.92) follows. ω X (B(x 0 , r) ∩ Γ) ≤ C r 2 r d+1 g(X, X 0 ) = Cr d-1 g(X, X 0 ); It remains to prove the upper bound in (11.80), i.e., that (11.97) ω X (Γ \ B(x 0 , 2r)) ≤ Cr d-1 g(X, X 0 ) for X ∈ B(x 0 , r) \ B(X 0 , δ(X 0 )/4). The proof will be similar to the upper bound in (11.79) once we choose an appropriate function φ in (11.81). Let us do this rapidly. Let X ∈ B(x 0 , r) \ B(X 0 , δ(X 0 )/4) be given and take φ ∈ C ∞ (R n ) such that 0 ≤ φ ≤ 1, φ ≡ 1 on R n \ B(x 0 , 8 5 r), φ ≡ 0 on B(x 0 , 7 5 r) and |∇φ| ≤ 10 r . Notice that X / ∈ supp(φ), so (11.81) applies and yields (11.98) u φ (X) ≤ C r ˆB(x 0 , 8 5 r)\B(x 0 , 7 5 r) |∇ y g(X, Y )|dm(Y ). By the positivity of the harmonic measure, ω X (Γ \ B(x 0 , 2r)) ≤ u φ (X). We use the Cauchy-Schwarz and Caccioppoli inequalities (see Lemma 8.47), as above, and get that ω X (Γ \ B(x 0 , 2r)) ≤ C r m(B(x 0 , 8 5 r)) 1 m(B(x 0 , 8 5 r)) ˆB(x 0 , 8 5 r)\B(x 0 , 7 5 r) |∇ y g(X, Y )| 2 dm(Y ) 1 2 ≤ C r 2 r d+1 1 m(B(x 0 , 9 5 r)) ˆB(x 0 , 9 5 r)\B(x 0 , 6 5 r) |g(X, Y )| 2 dm(Y ) 1 2 . 
(11.99) We claim that (11.100) g(X, Y ) ≤ Cg(X, X 0 ) ∀Y ∈ B(x 0 , 9 5 r) \ B(x 0 , 6 5 r) where C > 0 depends only on d, n, C 0 and C 1 . Two cases may happen. If δ(Y ) ≥ r 20 , (11.100) is only a consequence of the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42). Otherwise, if δ(Y ) < r 20 then Lemma 11.50 says that g(X, Y ) ≤ Cg(X, X Y ) for some point X Y ∈ B(x 0 , 9 5 r) \ B(x 0 , 6 5 r) that lies at distance at least ǫr from Γ. Here ǫ comes from Lemma 11.46 and thus depends only on d, n and C 0 . Together with the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42), we find that g(X, X Y ), or g(X, Y ), is bounded by Cg(X, X 0 ). We use (11.100) in the right hand side of (11.99) to get that (11.101) ω X (Γ \ B(x 0 , 2r)) ≤ Cr d-1 g(X, X 0 ), which is the desired result. The second and last assertion of the lemma follows. Lemma 11.102 (Doubling volume property for the harmonic measure). For x 0 ∈ Γ and r > 0, we have (11.103) ω X (B(x 0 , 2r) ∩ Γ) ≤ Cω X (B(x 0 , r) ∩ Γ) for X ∈ Ω \ B(x 0 , 4r) and (11.104) ω X (Γ \ B(x 0 , r)) ≤ Cω X (Γ \ B(x 0 , 2r)) for X ∈ B(x 0 , r/2), where C > 0 depends only on n, d, C 0 and C 1 . Proof. Let us prove (11.103) first. Lemma 11.78 says that for X ∈ Ω \ B(x 0 , 4r), (11.105) ω X (B(x 0 , 2r) ∩ Γ) ≈ r d-1 g(X, A 2r (x 0 )) and (11.106) ω X (B(x 0 , r) ∩ Γ) ≈ r d-1 g(X, A r (x 0 )), where A 2r (x 0 ) and A r (x 0 ) are the points of Ω given by Lemma 11.46. The bound (11.103) will be thus proven if we can show that (11.107) g(X, A 2r (x 0 )) ≈ g(X, A r (x 0 )) for X ∈ Ω \ B(x 0 , 4r). Yet, since Y → g(X, Y ) belongs to W r (Ω \ {X}) and is a solution of L T u :=div A T ∇u = 0 in Ω \ {X} (see Lemma 10.2 and Lemma 10.101), the equivalence in (11.107) is an easy consequence of the properties of A r (x 0 ) (Lemma 11.46), the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42). We turn to the proof of (11.104). Set X 1 := A r (x 0 ) and X 1 2 := A r/2 (x 0 ). Call Ξ the set of points X ∈ B(x 0 , r/2) such that |X -X 1 | ≥ 1 4 δ(X 1 ) and |X -X 1 2 | ≥ 1 4 δ(X 1 2 ), and first consider X ∈ Ξ. By Lemma 11.78 again, (11.108) ω X (Γ \ B(x 0 , 2r)) ≈ r d-1 g(X, X 1 ) and (11.109) ω X (Γ \ B(x 0 , r)) ≈ r d-1 g(X, X 1 2 ). Since δ(X 1 ) ≈ δ(X 1 2 ) ≈ r and Y → g(X, Y ) is a solution of L T u =div A T ∇u = 0, the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42) give g(X, X 1 ) ≈ g(X, X 1 2 ) for X ∈ Ξ. Hence (11.110) ω X (Γ \ B(x 0 , 2r)) ≈ ω X (Γ \ B(x 0 , r)), with constants that do not depend on X, x 0 , or r. The equivalence in (11.110) also holds for all X ∈ B(x 0 , r/2), and not only for X ∈ Ξ, by Harnack's inequality (Lemma 8.42). This proves (11.104). Remark 11.111. The following results also hold for every α > 1. For x 0 ∈ Γ and r > 0, (11.112) ω X (B(x 0 , 2r) ∩ Γ) ≤ C α ω X (B(x 0 , r) ∩ Γ) for X ∈ Ω \ B(x 0 , 2αr), and (11.113) ω X (Γ \ B(x 0 , r)) ≤ C α ω X (Γ \ B(x 0 , 2r)) for X ∈ B(x 0 , r/α), where C α > 0 depends only on n, d, C 0 , C 1 and α. This can be deduced from Lemma 11.102 -that corresponds to the case α = 2 -by applying it to smaller balls. Let us prove for instance (11.113). Let X ∈ B(x 0 , r/α) be given. We only need to prove (11.113) when δ(X) < r 4 (1 -1 α ), because as soon as we do this, the other case when δ(X) ≥ r 4 (1 -1 α ) follows, by Harnack's inequality (Lemma 8.42). Let x ∈ Γ such that |x -X| = δ(X); then set r k = 2 k-1 r[1 -1 α ] and B k = B(x, r k ) for k ∈ Z. 
We wish to apply the doubling property (11.104) and get that (11.114) ω X (Γ \ B k ) ≤ Cω X (Γ \ B k+1 ), and we can do this as long as X ∈ B k-1 . With our extra assumption that |x -X| = δ(X) < r 4 (1 -1 α ), this is possible for all k ≥ 0. Notice that (11.115) |x - x 0 | ≤ δ(X) + |X -x 0 | ≤ r 4 (1 - 1 α ) + r α ≤ r 2 (1 - 1 α ) + r α = r 2 [1 + 1 α ] and then |x - x 0 | + r 0 ≤ r 2 [1 + 1 α ] + r 2 [1 -1 α ] = r , so B 0 = B(x, r 0 ) ⊂ B(x 0 , r) and, by the monotonicity of the harmonic measure, (11.116) ω X (Γ \ B(x 0 , r)) ≤ ω X (Γ \ B 0 ). Let k be the smallest integer such that 2 k-1 (1 -1 α ) ≥ 3; obviously k depends only on α, and r k ≥ 3r. Then |xx 0 | + 2r < 3r ≤ r k by (11.115), hence B(x 0 , 2r) ⊂ B k and ω X (Γ \ B k ) ≤ ω X (Γ \ B(x 0 , 2r)) because the harmonic measure is monotone. Together with (11.116) and (11.114), this proves that ω X (Γ \ B(x 0 , r)) ≤ C k ω X (Γ \ B(x 0 , 2r)), and (11.113) follows because k depends only on α. The proof of (11.112) would be similar. Lemma 11.117 (Comparison principle for global solutions). Let x 0 ∈ Γ and r > 0 be given, and let X 0 := A r (x 0 ) ∈ Ω be the point given in Lemma 11.46. Let u, v ∈ W be two nonnegative, non identically zero, solutions of Lu = Lv = 0 in Ω such that T u = T v = 0 on Γ \ B(x 0 , r). Then (11.118) C -1 u(X 0 ) v(X 0 ) ≤ u(X) v(X) ≤ C u(X 0 ) v(X 0 ) for X ∈ Ω \ B(x 0 , 2r), where C > 0 depends only on n, d, C 0 and C 1 . Remark 11.119. We also have (11.118) for any X ∈ Ω \ B(x 0 , αr), where α > 1. In this case, the constant C depends also on α. We let the reader check that the proof below can be easily adapted to prove this too. Proof. By symmetry and as before, it is enough to prove that (11.120) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) for X ∈ Ω \ B(x 0 , 2r). Notice also that thanks to the Harnack inequality (Lemma 8.42), v(X) > 0 on the whole Ω \ B(x 0 , r), so we we don't need to be careful when we divide by v(X). Set Γ 1 := Γ ∩ B(x 0 , r) and Γ 2 := Γ ∩ B(x 0 , 15 8 r). Lemma 11.102 -or more exactly (11.112) -gives the following fact that will be of use later on: (11.121) ω X (Γ 2 ) ≤ Cω X (Γ 1 ) ∀X ∈ Ω \ B(x 0 , 2r). with a constant C > 0 which depends only on d, n, C 0 and C 1 . We claim that (11.122) v(X) ≥ C -1 ω X (Γ 1 )v(X 0 ) for X ∈ Ω \ B(x 0 , 2r). Indeed, by Harnack's inequality (Lemma 8.42), (11.123) v(X) ≥ C -1 v(X 0 ) for X ∈ B(X 0 , δ(X 0 )/2). Together with Lemma 11.39, which states that g(X, X 0 ) ≤ Cδ(X 0 ) 1-d ≤ Cr 1-d for any X ∈ Ω \ B(X 0 , δ(X 0 )/4), we deduce the existence of K 1 > 0 (that depends only on d, n, C 0 and C 1 ) such that (11.124) v(X) ≥ K -1 1 r d-1 v(X 0 )g(X, X 0 ) for X ∈ B(X 0 , 1 2 δ(X 0 )) \ B(X 0 , 1 4 δ(X 0 )) Let us apply the maximum principle (Lemma 11.32, with E = R n \ B(X 0 , δ(X 0 )/4) and F = R n \ B(X 0 , δ(X 0 /2))), to the function X → v(X) -K -1 1 r d-1 v(X 0 )g(X, X 0 ). The assumptions are satisfied because of (11.124), the properties of the Green function given in Lemma 10.2, and the fact that v ∈ W is a non-negative solution of Lv = 0 on Ω. We get that (11.125) v(X) ≥ K -1 1 r d-1 v(X 0 )g(X, X 0 ) for X ∈ Ω \ B(X 0 , 1 4 δ(X 0 )) ⊃ Ω \ B(x 0 , 2r). The claim (11.122) is now a straightforward consequence of (11.125) and Lemma 11.78. We want to prove now that (11.126) u(X) ≤ Cu(X 0 )ω X (Γ 2 ) for X ∈ Ω \ B(x 0 , 2r). First, we need to prove that (11.127) u(X) ≤ Cu(X 0 ) for X ∈ B(x 0 , 13 8 r) \ B(x 0 , 11 8 r) ∩ Ω. 
We split B(x 0 , 13 8 r) \ B(x 0 , 11 8 r) ∩ Ω into two sets: The proof of (11.127) for X ∈ Ω 2 is a consequence of the existence of Harnack chain (Lemma 2.1) and the Harnack inequality (Lemma 8.42). So it remains to prove (11.127) for X ∈ Ω 1 . Let thus X ∈ Ω 1 be given. We can find x ∈ Γ such that X ∈ B(x, 1 8 r). Notice that x ∈ B(x 0 , 7 4 r) because X ∈ B(x 0 , 13 8 r). Yet, since u is a non-negative solution of Lu = 0 in B(x, 1 4 r) ∩ Ω satisfying T u = 0 on B(x, 1 4 r) ∩ Γ, Lemma 11.50 gives that u(Y ) ≤ Cu(A r/8 (x)) for Y ∈ B(x, 1 8 r) and thus in particular u(X) ≤ Cu(A r/8 (x)). By the existence of Harnack chains (Lemma 2.1) and the Harnack inequality (Lemma 8.42) again, u(A r/8 (x)) ≤ Cu(X 0 ). The bound (11.127) for all X ∈ Ω 1 follows. We proved (11.127) and now we want to get (11.126). Recall from Lemma 11.73 that ω X (B(x 0 , 7 4 r) ∩ Γ) ≥ C -1 for X ∈ B(x 0 , 13 8 r) \ Γ. Hence, by (11.127), (11.130) u(X) ≤ Cu(X 0 )ω X (B(x 0 , 7 4 r) ∩ Γ) for X ∈ B(x 0 , 13 8 r) \ B(x 0 , 11 8 r) ∩ Ω. Let h ∈ C ∞ 0 (B(x 0 , 15 8 r)) be such that 0 ≤ h ≤ 1 and h ≡ 1 on B(x 0 , 7 4 r). Then let u h ∈ W be the solution of Lu h = 0 with the Dirichlet condition T u h = T h. By the positivity of the harmonic measure, (11.131) u(X) ≤ Cu(X 0 )u h (X) for X ∈ B(x 0 , 13 8 r) \ B(x 0 , 11 8 r) ∩ Ω. The maximum principle given by Lemma 11.32 -where we take E = R n \ B(x 0 , 11 8 r) and F = R n \ B(x 0 , 13 8 r) -yields (11.132) u(X) ≤ Cu(X 0 )u h (X) for X ∈ Ω \ B(x 0 , 13 8 r) and hence (11.133) u(X) ≤ Cu(X 0 )ω X (Γ 2 ) for X ∈ Ω \ B(x 0 , 13 8 r), where we use again the positivity of the harmonic measure. The assertion (11.126) is now proven. We conclude the proof of the lemma by gathering the previous results. Because of (11.122) and (11.126), (11.134) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) ω X (Γ 2 ) ω X (Γ 1 ) for X ∈ Ω \ B(x 0 , 2r), and (11.120) follows from (11.121). Lemma 11.117 follows. Note that the functions X → ω X (E), where E ⊂ Γ is a non-trivial Borel set, do not lie in W and thus cannot be used directly in Lemma 11.117. The following lemma solves this problem. Lemma 11.135 (Comparison principle for harmonic measures / Change of poles). Let x 0 ∈ Γ and r > 0 be given, and let X 0 := A r (x 0 ) ∈ Ω be as in Lemma 11.46. Let E, F ⊂ Γ ∩ B(x 0 , r) be two Borel subsets of Γ such that ω X 0 (E) and ω X 0 (F ) are positive. Then (11.136) C -1 ω X 0 (E) ω X 0 (F ) ≤ ω X (E) ω X (F ) ≤ C ω X 0 (E) ω X 0 (F ) for X ∈ Ω \ B(x 0 , 2r), where C > 0 depends only on n, d, C 0 and C 1 . In particular, with the choice F = B(x 0 , r)∩Γ, (11.137) C -1 ω X 0 (E) ≤ ω X (E) ω X (B(x 0 , r) ∩ Γ) ≤ Cω X 0 (E) for X ∈ Ω \ B(x 0 , 2r), where again C > 0 depends only on n, d, C 0 and C 1 . Proof. The second part of the lemma, that is (11.137) an immediate consequence of (11.136) and the non-degeneracy of the harmonic measure (Lemma 11.73). In addition, it is enough to prove (11.138) C -1 ω X 0 (E) u(X 0 ) ≤ ω X (E) u(X) ≤ C ω X 0 (E) u(X 0 ) , where u ∈ W is any non-negative non-zero solution of Lu = 0 in Ω satisfying T u = 0 on Γ \ B(x 0 , r), and C > 0 depends only on n, d, C 0 and C 1 . Indeed, (11.136) follows by applying (11.138) to both E and F . Incidentally, it is very easy to find u like this: just apply Lemma 9.23 to a smooth bump function g with a small compact support near x 0 . Assume first that E = K is a compact set. Let X ∈ Ω \ B(x 0 , 2r) be given. Thanks to Lemma 9.38 (i), the assumption ω X 0 (K) > 0 implies that ω X (K) > 0. 
By the the regularity of the harmonic measure (see (9.32)), we can find an open set U X ⊃ K such that (11.139) ω X 0 (U X ) ≤ 2ω X 0 (K) and ω X (U X ) ≤ 2ω X (K). Urysohn's lemma (see Lemma 2.12 in [Rud]) gives a function h ∈ C 0 0 (Γ) such that 1 K ≤ h ≤ 1 U X . Write v h = U(h) for the image of the function h by the map given in Lemma 9.23. We have seen for the proof of Lemma 9.23 that h can be approximated, in the supremum norm, by smooth, compactly supported functions h k , and that the corresponding solutions v k = U(h k ), and that can also obtained through 9.3, lie in W and converge to v h uniformly on Ω. Hence we can find k > 0 such that (11.140) 1 2 v k ≤ v h ≤ 2v k everywhere in Ω. Write v for v k . Notice that v depends on X, but it has no importance. The estimates (11.139) and (11.140) give (11.141) 1 4 v(X 0 ) ≤ ω X 0 (K) ≤ 2v(X 0 ) and 1 4 v(X) ≤ ω X (K) ≤ 2v(X). We can even choose U X ⊃ K so small, and then g k with a barely larger support, so that T v = g k is supported in B(x 0 , r). As a consequence, the solution v satisfies the assumption of Lemma 11.117. Hence, the latter entails (11.142) C -1 v(X 0 ) u(X 0 ) ≤ v(X) u(X) ≤ C v(X 0 ) u(X 0 ) with a constant C > 0 that depends only on d, n, C 0 and C 1 . Together with (11.141), we get that (11.143) C -1 ω X 0 (K) u(X 0 ) ≤ ω X (K) u(X) ≤ C ω X 0 (K) u(X 0 ) with a constant C > 0 that still depends only on d, n, C 0 and C 1 (and thus is independent of X). Thus the conclusion (11.136) holds whenever E = K is a compact set. Now let E be any Borel subset of Γ ∩ B(x 0 , r). Let X ∈ Ω \ B(x 0 , 2r). According to the regularity of the harmonic measure (9.32), there exists K X ⊂ E (depending on X) such that (11.144) ω X 0 (K X ) ≤ ω X 0 (E) ≤ 2ω X 0 (K X ) and ω X (K X ) ≤ ω X (E) ≤ 2ω X (K X ). The combination of (11.144) and (11.143) (applied to K X ) yields (11.145) C -1 ω X 0 (E) u(X 0 ) ≤ ω X (E) u(X) ≤ C ω X 0 (E) u(X 0 ) where the constant C > 0 depends only upon d, n, C 0 and C 1 . The lemma follows. Let us prove now a comparison principle for the solution that are not defined in the whole domain Ω. Theorem 11.146 (Comparison principle for locally defined functions). Let x 0 ∈ Γ and r > 0 and let X 0 := A r (x 0 ) ∈ Ω be the point given in Lemma 11.46. Let u, v ∈ W r (B(x 0 , 2r)) be two non-negative, not identically zero, solutions of Lu = Lv = 0 in B(x 0 , 2r), such that T u = T v = 0 on Γ ∩ B(x 0 , 2r). Then (11.147) C -1 u(X 0 ) v(X 0 ) ≤ u(X) v(X) ≤ C u(X 0 ) v(X 0 ) for X ∈ Ω ∩ B(x 0 , r), where C > 0 depends only on n, d, C 0 and C 1 . Proof. The plan of the proof is as follows: first, for y 0 ∈ Γ and s > 0, we construct a function f y 0 ,s on Ω such that (i) f y 0 ,s (X) is equivalent to ω X (Γ \ B(y 0 , 2s)) when X ∈ B(y 0 , s) is close to Γ and (ii) f y 0 ,s (X) is negative when X ∈ Ω \ B(y 0 , Ms) -with M depending only on d, n, C 0 and C 1 . We use f y 0 ,s to prove that v(X) ≥ v(A s (y 0 ))ω X (Γ \ B(y 0 , 2s)) whenever X ∈ B(y 0 , s) and B(y 0 , Ms) ⊂ B(x 0 , 2r) is a ball centered on Γ. We use then an appropriate covering of B(x 0 , r) by balls and the Harnack inequality to get the lower bound v(X) ≥ v(X 0 )ω X (Γ \ B(x 0 , 4r)), which is the counterpart of (11.122) in our context. We conclude as in Lemma 11.117 by using Lemma 11.50 and the doubling property for the harmonic measure (Lemma 11.102) Let y 0 ∈ Γ and s > 0. Write Y 0 for A s (y 0 ). The main idea is to take (11.148) f y 0 ,s (X) := s d-1 g(X, Y 0 ) -K 1 ω X (Γ \ B(y 0 , K 2 s)) for some K 1 , K 2 > 0 that depend only on n, d, C 0 and C 1 . 
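In display form, the function introduced in (11.148) is

\[
f_{y_0,s}(X) \;=\; s^{d-1}\,g(X,Y_0)\;-\;K_1\,\omega^X\big(\Gamma\setminus B(y_0,K_2 s)\big),
\qquad Y_0=A_s(y_0),
\]

and the two properties (i) and (ii) announced above are made quantitative in (11.160) and (11.161) below, once the harmonic measure term has been replaced by the solution u_{K_2} in the final definition (11.159).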
With good choices of K 1 and K 2 , the function f y 0 ,s is positive in B(y 0 , s) and negative outside of a big ball B(y 0 , 2K 2 s). However, with this definition involving the harmonic measure, the function f y 0 ,s doesn't satisfy the appropriate estimates required for the use of the maximum principle given as Lemma 11.32. So we shall replace ω X (Γ \ B(y 0 , K 2 s)) by some solution of Lu = 0, with smooth Dirichlet condition. Let h ∈ C ∞ (R n ) be such that 0 ≤ h ≤ 1, h ≡ 0 on B(0, 1/2) and h ≡ 1 on the complement of B(0, 1). For β > 0 (which will be chosen large), we define h β by h β (x) = h( x-y 0 βs ). Let u β be the solution, given by Lemma 9.3, of Lu β = 0 with the Dirichlet condition T u β = T h β . Notice that u β ∈ W because 1u β is the solution of L with the smooth and compactly supported trace 1h. Observe that for any X ∈ Ω and β > 0, (11.149) ω X (Γ \ B(y 0 , βs)) ≤ u β (X) ≤ ω X (Γ \ B(y 0 , βs/2)). The functions u β will play the role of harmonic measures but, unlike these, the functions u β lie in W and are thus suited for the use of Lemma 11.32. By Lemma 11.39, there exists C > 0, that depends only on d, n, C 0 and C 1 , such that (11.150) g(X, Y 0 ) ≤ Cδ(Y 0 ) 1-d for X ∈ Ω \ B(Y 0 , δ(Y 0 )/4). Moreover, since Y 0 comes from Lemma 11.46, we have ǫs ≤ δ(Y 0 ) ≤ s with an ǫ > 0 that does not depend on s or y 0 , and hence (11.151) s d-1 g(X, Y 0 ) ≤ C for X ∈ Ω \ B(y 0 , 2s). From this and the non-degeneracy of the harmonic measure (Lemma 11.73), we deduce that for β ≥ 1, (11.152) s d-1 g(X, Y 0 ) ≤ K 1 ω X (Γ \ B(y 0 , βs)) ≤ K 1 u β (X) for X ∈ Ω \ B(y 0 , 2βs), where the constant K 1 > 0, depends only on d, n, C 0 and C 1 . Our aim now is to find K 2 ≥ 20 such that (11.153) where u 4 is defined as u β (with β = 4). As a consequence, there exists K 3 > 0, that depends only on d, n, C 0 , and C 1 , such that for β ≥ 20, (11.156) u β (X) ≤ K 3 β -α u 4 (X) for X ∈ Ω ∩ [B(y 0 , 10s) \ B(y 0 , 8s)]. K 1 u K 2 (X) ≤ 1 2 s d- We just proved that for β ≥ 20, the function u ′ = K 3 β -α u 4u β satisfies all the assumption (iii) of Lemma 11.32, with E = B(y 0 , 10s) and F = B(y 0 , 8s). The other assumptions of Lemma 11.32 are satisfied as well, since u ′ ∈ W is smooth and T (u ′ ) = K 3 β -α T u 4 ≥ 0 on Γ ∩ E. Therefore, Lemma 11.32 gives (11.157) u β (X) ≤ K 3 β -α u 4 (X) for X ∈ Ω ∩ B(y 0 , 10s). Use now (11.149) and Lemma 11.78 to get for X ∈ Ω ∩ [B(y 0 , s) \ B(Y 0 , δ(Y 0 )/4)], (11.158) u β (X) ≤ K 3 β -α ω X (Γ \ B(y 0 , 2s)) ≤ Cβ -α s d-1 g(X, Y 0 ), where C > 0 depends only on d, n, C 0 and C 1 . The existence of K 2 ≥ 20 satisfying (11.153) is now immediate. Define the function f y 0 ,s on Ω \ {Y 0 } by (11.159) f y 0 ,s (X) := s d-1 g(X, Y 0 ) -K 1 u K 2 (X). The inequality (11.152) gives (11.160) f y 0 ,s (X) ≤ 0 for X ∈ Ω \ B(y 0 , 2K 2 s), and the estimates (11.153) and (11.80) imply that (11.161) f y 0 ,s (X) ≥ 1 2 s d-1 g(X, Y 0 ) ≥ C -1 ω X (Γ \ B(y 0 , 2s)) for X ∈ Ω ∩ [B(y 0 , s) \ B(Y 0 , δ(Y 0 )/4)]. Let us turn to the proof of the comparison principle. By symmetry and as in Lemma 11.117, it suffices to prove the upper bound in (11.147), that is (11.162) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) for X ∈ Ω ∩ B(x 0 , r). We claim that (11.163) v(X) ≥ C -1 v(X 0 )ω X (Γ \ B(x 0 , 2r)) for X ∈ Ω ∩ B(x 0 , r), where C > 0 depends only on n, d, C 0 and C 1 . So let X ∈ Ω ∩ B(x 0 , r) be given. Two cases may happen. 
If δ(X) ≥ r 8K 2 , where K 2 comes from (11.153) and is the same as in the definition of f y 0 ,s , the existence of Harnack chains (Lemma 2.1), the Harnack inequality (Lemma 8.42) and the non-degeneracy of the harmonic measure (Lemma 11.73) give (11.164) v(X) ≈ v(X 0 ) ≈ v(X 0 ) ω X (Γ \ B(x 0 , 2r)) ω X 0 (Γ \ B(x 0 , 2r)) ≈ v(X 0 )ω X (Γ \ B(x 0 , 2r)) by (11.77). The more interesting remaining case is when δ(X) < r 8K 2 . Take y 0 ∈ Γ such that |Xy 0 | = δ(X). Set s := r 8K 2 and Y 0 = A s (y 0 ). The ball B(y 0 , 1 2 r) = B(y 0 , 4K 2 s) is contained in B(x 0 , 7 4 r). The following points hold : • The quantity ´B(y 0 ,4K 2 s)\B(Y 0 ,δ(Y 0 )/4) |∇v| 2 dm is finite because v ∈ W r (B(x 0 , 2r)). The fact that ´B(y 0 ,4K 2 s)\B(Y 0 ,δ(Y 0 )/4) |∇f y 0 ,s | 2 dm is finite as well follows from the property (10.3) of the Green function. Indeed, v ≥ 0 on B(y 0 , 4K 2 s) and, thanks to (11.160), f y 0 ,s ≤ 0 on Ω \ B(y 0 , 2K 2 s). • The trace of v -K 4 v(Y 0 )f y 0 ,s is non-negative on B(y 0 , 4K 2 s)∩Γ again because T v = 0 on B(y 0 , 4K 2 s) ∩ Γ and T [f y 0 ,s ] ≤ 0 on B(y 0 , 4K 2 s) ∩ Γ by construction. The previous points prove that v-K 4 v(Y 0 )f y 0 ,s satisfies the assumptions of Lemma 11.32 with E = B(y 0 , 4K 2 s) \ B(Y 0 , δ(Y 0 )/4) and F = B(y 0 , 2K 2 s) \ B(Y 0 , δ(Y 0 )/2). As a consequence, for any Y ∈ B(y 0 , 4K 2 s) \ B(Y 0 , δ(Y 0 )/4) (11.168) v(Y ) -K 4 v(Y 0 )f y 0 ,s (Y ) ≥ 0, The bound (11.162) is now a consequence of the above inequality and the doubling property of the harmonic measure (Lemma 11.102,or more exactly (11.113)). check that if g ∈ H, then (1.8) lim r→0 Γ∩B(x,r) |g(y)g(x)|dσ(y) = 0 for σ-almost every x ∈ Γ. We typically use the fact that |u(x)u(y)| ≤ ´[x,y] |∇u| for almost all choices of x and y ∈ Ω, for which we can use the absolute continuity of u ∈ W on (almost all) line segments, plus the important fact that, by (1.1), Γ ∩ ℓ = ∅ for almost every line ℓ. |dy ≤ Cr -d ˆB(x,r) |∇u(y)|w(y)dy for u ∈ W , x ∈ Γ, and r > 0 such that T u = 0 on Γ ∩ B(x, r) and, if m(B(x, r)) denotes ´B(x,r) w(y)dy, Theorem 3.13. There exists a bounded linear operator T : W → H (a trace operator) with the following properties. The trace of u ∈ W is such that, for σ-almost every x ∈ Γ, ) -T u(x)|dy = 0. )g s (x)|dydσ(x) ≤ Cr -d B(0,r) |∇u(ξ)|δ(ξ) 1+d-n dy.Denote by I(s) the left-hand side. By (3 Lemma 8.26 (interior Caccioppoli inequality). Let E ⊂ Ω be an open set, and let u ∈ W r (E) be a non-negative subsolution in E. Then for any α ∈ C ∞ 0 (E), (8.27) ˆΩ α 2 |∇u| 2 dm ≤ C ˆΩ |∇α| 2 u 2 dm, where C depends only upon the dimensions n and d and the constant C 1 . Lemma 8 . 8 71 (Moser estimate at the boundary for general p). Let p > 0. Let B be a ball centered on Γ. Let u ∈ W r (2B) be a non-negative subsolution in 2B \ Γ such that T u = 0 a.e. on 2B. Then depends only on the dimensions n and d, the constants C 0 and C 1 , and the exponent p. Lemma 9.13. Let u ∈ W be a supersolution in Ω satisfying T u ≥ 0 a.e. on Γ. Then u ≥ 0 a.e. in Ω.Proof. Set v = min{u, 0} ≤ 0. According to Lemma 6.1 (b)min{T u, 0} = 0 a.e. in Γ. Since u k is a solution for every k, ˆΩ A∇u • ∇ϕ dm = ˆΩ A∇(φu) • ∇ϕ dm = lim k→∞ ˆΩ A∇(φu k ) • ∇ϕ dm = lim k→∞ ˆΩ A∇u k • ∇ϕ dm = 0. (9.27) This proves (9.26) and (iii) follows. |y-z|+ρ) |ϕ| ≤ C y,z,ρ ϕ W by Lemma 4.1. By the Lax-Milgram theorem, there exists then a unique function g ρ = g ρ (., y) ∈ W 0 such that (10.20) a(g ρ , ϕ) = ˆΩ A∇g ρ • ∇ϕ dm = Bρ ϕ ∀ϕ ∈ W 0 . 
( 10.120) g(x) ≤ Cδ(x) α |x -y| 1-d-α for x ∈ Ω such that |x -y| ≥ 4δ(x), with constants C > 0 and α > 0 that depend only on n, d, C 0 and C 1 . By Lemma 10.2-(v), (10.121) g(z) ≤ C|z -y| 1-d for z ∈ Ω \ B(y, δ(y)/4). Let x be such that |x -y| ≥ 4δ(x), choose x 0 ∈ Γ such that |xx 0 | = δ(x), and set r = |x -y| and B = B(x 0 , |x -y|/3); thus x ∈ B. We shall need to know that (10.122) δ(y) ≤ δ(x) + |x -y| ≤ r 4 + r = 5r 4 . Then let z be any point of Ω∩B. Obviously |z-x| ≤ |z-x 0 |+|x 0 -x| ≤ r 3 +δ(x) |y -z| ≥ |y -x| -|z -x| ≥ r -Hence by (10.121), g(z) ≤ C|z-y| 1-d . Notice also that |y-z| ≤ |y-x|+|z-x| ≤ r+ 7r 12 = 19r 12 , so, with (10.123), 5r 12 ≤ |y -z| ≤ 19r 12 and y)| 2 dm is finite, and T u = K 1 δ(y) 1-d is non-negative a.e. on Γ. In addition, due to (11.42), we have u ≥ 0 on B(y, δ(y)/4) \ B(y, δ(y)/4). Thus u satisfies all the assumption of Lemma 11.32 (the maximum principle), where we choose E = R n \ B(y, δ(y)/8) and F = R n \ B(y, δ(y)/8), and which yields (11.43) g(x, y) ≤ Cδ(y) 1-d for x ∈ Ω \ B(y, δ(y)/8).It remains to prove that (11.44) g(x, y) ≤ Cδ(x) 1-d for x, y ∈ Ω such that |x -y| ≥ δ(y)/4.But Lemma 10.101 says that g(x, y) = g T (y, x), where g T is the Green function associated to the operator L T =div A T ∇. The above argument proves that (11.45) g(x, y) = g T (y, x) ≤ Cδ(x) 1-d for x, y ∈ Ω such that |x -y| ≥ δ(x)/8, which is (11.44) once we remark that |x -y| ≥ δ(y)/4 implies that |x -y| ≥ δ(x)/8. is bounded and coercive on W 0 and the map (11.83) ϕ ∈ W 0 → ˆΩ A∇φ • ∇ϕ = ˆΩ A∇φ • ∇ϕ dm is bounded on W 0 . So the Lax-Milgram theorem yields the existence of v ∈ W 0 such that (11.84) ˆΩ A∇φ • ∇ϕ = ˆΩ A∇v • ∇ϕ ∀ϕ ∈ W 0 . 1 g(X, Y 0 ) for X ∈ Ω ∩ [B(y 0 , s) \ B(Y 0 , δ(Y 0 )/4)].According to the Hölder continuity at the boundary (Lemma 8.106), we have(11.154) supB(y 0 ,10s) u β ≤ Cβ -αfor any β ≥ 20, where C and α > 0 depend only on d, n, C 0 and C 1 . Moreover, due to (11.149) and the non-degeneracy of the harmonic measure (Lemma 11.73), (11.155) u 4 (X) ≥ C -1 for X ∈ Ω \ B(y 0 , 8s) • There exists K 4 > 0 (depending only on d, n, C 0 and C 1 ) such that(11.165) v(Y ) -K 4 v(Y 0 )f y 0 ,s (Y ) ≥ 0 for Y ∈ B(Y 0 , δ(Y 0 )/2) \ B(Y 0 , δ(Y 0 )/4).This latter inequality is due to the following two bounds: the fact that (11.166)f y 0 ,s (Y ) ≤ s 1-d g(Y, Y 0 ) ≤ C for Y ∈ B(Y 0 , δ(Y 0 )/2) \ B(Y 0 , δ(Y 0 )/4),which is a consequence of the definition (11.159) and (10.7), and the bound(11.167) v(Y ) ≥ C -1 v(Y 0 ) for Y ∈ B(Y 0 , δ(Y 0 )/2),which comes from the Harnack inequality (Lemma 8.42).• The function v -K 4 v(Y 0 )f y 0 ,s is nonnegative on Ω ∩ [B(y 0 , 4K 2 s) \ B(y 0 , 2K 2 s)]. d+1-n dζ = C u 2 W by definition of the I k (r), then (3.40) and the definition of W . We may now look at the definition (3.34) of I(r), let r tend to 0, and get that (3.47) by Fatou's lemma, as needed for the trace theorem. g H ≤ C u 2 W 4. Poincaré inequalities Lemma 4.1. Let Γ be a d-ADR set in R n , d < n -1, that is, assume that (1.1) is satisfied. Then ˆB(x,r) (4.2) B(x,r) |u(y)|dy ≤ Cr -d |∇u(y)|w(y)dy ´B u by the average u B , where B is a ball near B. We choose for B a ball with same radius as B and contained in 3B \ B, because this way u B = 0 since u is supported in To prove (4.34), the main idea is that we can replace in (4.14) the quantity u B = m(B) -1 B. 
Then 1 m(3B) ˆ3B |u(y)| p w(y)dy 1/p = 1 m(3B) ˆ3B |u(y) -u B | p w(y)dy 1/p (4.35) ≤ 1 m(3B) ˆ3B |u(y) -u 3B | p w(y)dy 1/p + |u 3B -u B | Yet, using Jensen's inequality and then Hölder's inequality, |u 3B -u B | is bounded by 1 1/p . If we use in addition the doubling property given by m( B) ´3B |u(y) -u 3B | p w(y)dy (4.31), we get that |u 3B -u B | is bounded by 1 m(3B) ´3B |u(y) -u 3B | p w(y)dy 1/p , that is, (4.36) 1 m(3B) ˆ3B |u(y)| p w(y)dy 1/p ≤ 1 m(3B) ˆ3B |u(y) -u 3B | p w(y)dy 1/p . 1 2 (4.32) , which proves Lemma 4.13. Remark 4.33. If B ⊂ R n and u ∈ W is supported in B, then for any p ∈ [1, 2n/(n -2)] (or p ∈ [1, +∞) if n = 2), there holds (4.34) 1 m(B) ˆB |u(y)| p w(y)dy 1/p ≤ Cr 1 m(B) ˆB |∇u(y)| 2 w(y)dy 1/2 . That is, we can choose u B = 0 in (4.14). the definition yields ϕ ∈ W . Moreover ϕ is compactly supported in E (and in particular ϕ ∈ W 0 ). By the product rule, ∇ϕ = α 2 ∇u + 2αu∇α. Thus (8.29) becomes ˆΩ α 2 A∇u • ∇u dm ≤ -2 ˆΩ αu A∇u • ∇α dm. (8.30) It follows from this and the ellipticity and boundedness conditions (8.10) and (8.9) that ˆΩ α 2 |∇u| 2 dm ≤ C ˆΩ |α||∇u||u||∇α| dm (8.31) and then ˆΩ α 2 |∇u| 2 dm ≤ C ˆΩ α 2 |∇u| 2 dm Lemma 8.34 (interior Moser estimate). Let p > 0 and B be a ball such that 3B ⊂ Ω. If u ∈ W r (3B) is a non-negative subsolution in 2B, then The first item of Lemma 8.16 yields (8.29) ˆΩ A∇u • ∇ϕ dm ≤ 0. (8.32) 1 2 ˆΩ u 2 |∇α| 2 dm 1 2 by the Cauchy-Schwarz inequality. Consequently, (8.33) ˆΩ α (8.35) sup B 2 |∇u| 2 dm ≤ C ˆΩ |∇α| 2 u 2 dm, which is (8.28). Lemma 8.47 follows since (8.48) is a straightforward application of (8.28) when E = 2B, α ≡ 1 on B and |∇α| ≤ 2 r . s) |u -k| 2 dm. and ˆA(h,s) (8.58) u(h, s) = |u -h| 2 dm; thus (8.59) Define (8.57) a(h, s) = m(A(h, s)) equal to 1 on B r/2 , and such that |∇η| ≤ 4 r . Use η as a test function in (10.94) to get that (10.99) 1 = ˆBr\Br/2 A∇g • ∇η dm ≤ C r ˆBr\Br/2 |∇g| dm, ˆΩ A∇v • ∇v dm = ˆE A∇u • ∇v dm ≤ 0.Together with the ellipticity condition (8.10), we obtain v W ≤ 0. Recall that . W is a norm on W 0 ∋ v, hence v = 0 a.e. in Ω. We conclude from the definition of v that u ≥ 0 a.e. in E ∩ Ω.Let us use the maximum principle above to prove the following result on the Green function. and so (11.36) becomes (11.38) Lemma 11.39. We have (11.40) where the constant C > 0 depends only on d, n, C 0 and C 1 . g(x, y) ≤ C min{δ(y), δ(x)} 1-d for x, y ∈ Ω such that |x -y| ≥ δ(y)/4, Remark 11.41. Lemma 11.39 is an improvement on the pointwise bounds (10.7) only when d < 1. Proof. Let y ∈ Ω. Lemma (10.2) (v) gives (11.42) Definition 8.15 gives (11.34) ˆE A∇u • ∇v k dm = ˆΩ A∇u • ∇v k dm ≤ 0 and since the map (11.35) ϕ ∈ W → ˆE A∇u • ∇ϕ dm is bounded on W thanks to assumption (i) and (8.9), we deduce that (11.36) ˆE A∇u • ∇v dm ≤ 0. Now Lemma 6.1 gives (11.37) ∇v = ∇u if u < 0 0 if u ≥ 0 g(x, y) ≤ K 1 δ(y) 1-d for x ∈ B(y, δ(y)/4) \ B(y, δ(y)/8) We inject this last estimate in (8.92) and get that Lemma 8.98 (Oscillation estimates on the boundary). Let B be a ball centered on Γ and u ∈ W r (4B) be a solution in 4B \ Γ such that T u is uniformly bounded on 4B. Then, there exists η ∈ (0, 1) such that (8.99) osc The constant η depends only on the dimensions n, d and the constants C 0 and C 1 . Proof. Let us first prove that (8.100) for some c ∈ (0, 1]. Notice that (8.100) is trivially true if M 4 -M = 0. 
Otherwise, we apply Lemma 8.84 to the non-negative supersolution min{ M 4 -u M 4 -M , 1} whose trace equals 1 on 4B (with Lemma 6.1) and we obtain for some constant c ∈ (0, 1] which gives (8.100) if we multiply both sides by M 4 -M. In the same way, (8.101) is true if mm 4 = 0 and otherwise, we apply Lemma 8.84 to the non-negative supersolution min{ u-m 4 m-m 4 , 1} and we get for some c ∈ (0, 1] which is (8.101). We sum then (8.100) and (8.101) to get that is which is exactly the desired result. We end the section with the Hölder continuity of solutions at the boundary. In the sequel, for any s > 0 and y ∈ Γ, A s (y) will denote any point in Ω satisfying the conditions (i) and (ii) of Lemma 11.46. Proof. Let x 0 ∈ Γ and r > 0 be given. Let ǫ ∈ (0, 1/8) be small, to be chosen soon. Let z 1 , • • • z N be a maximal collection of points of B(x 0 , (1 -2ǫ)r) that lie at mutual distances at least 4ǫr. Set B i = B(z i , ǫr); notice that the 2B i = B(z i , 2ǫr) are disjoint and contained in B(x 0 , r), and the 5B i cover B(x 0 , (1 -2ǫ)r) (by maximality), so Suppose for a moment that every B i meets Γ. Pick y i ∈ Γ∩B i notice that B(y i , ǫr) ⊂ 2B i , and then use the Ahlfors-regularity property (1.1) to prove that 0 . We pick ǫ like this, and by contraposition get that at least one B i does not meet Γ. We choose A r (x 0 ) = z i , and notice that δ(x i ) ≥ ǫr because B i ∩ Γ = ∅, and |z ix 0 | ≤ r by construction. The lemma follows. We also need the following slight improvement of Lemma 11.46. Lemma 11.48. Let M 1 ≥ 1 be given. There exists M 2 > M 1 (depending on d, n, C 0 and M 1 ) such that for any ball B of radius r and centered on Γ and any Proof. The proof is almost the same. Let M 1 ≥ 1 be given, and let M 2 ≥ 10M 1 be large, to be chosen soon. Then let B = B(x 0 , r) and x ∈ B be as in the statement. Set B ′ = B(x 0 , r -2M 1 δ(x)) ∩ B(x, (M 2 -2M 1 )δ(x)); notice that the two radii are larger than Pick a maximal family (z i ), 1 ≤ i ≤ N, of points of B ′ that lie at mutual distances at least 4M 1 δ(x) from each other, and set Suppose for a moment that every B i meets Γ. Then pick y i ∈ B i ∩ Γ and use the Ahlfors regularity property (1.1) and the fact that the 2B i contain the B(y i , M 1 δ(x)) and are disjoint to prove that and this contradicts our other bound for N if M 2 /M 1 is large enough. We choose M 2 like this; then some B i doesn't meet Γ, and we can take y = z i . and hence, for any by (11.161). Since both v and Y → ω Y (Γ\B(y 0 , 2s)) are solutions on B(y 0 , 2s), we can use the Harnack inequality (Lemma 8.42) to deduce, first, that (11.169) holds for any Y ∈ B(y 0 , s) and second, that we can replace v(Y 0 ) by v(X 0 ) (recall that at this point, s r = 1 8K 2 is controlled by the usual constants). Therefore, In particular, with our choice of y 0 and s, the inequality is true when X = Y , that is, where C > 0 depends only on d, n, C 0 and C 1 . The claim (11.163) follows. Now we want to prove that By Lemma 11.50, r), and h ′ ≡ 0 on B(x 0 , 5 4 r). Let u h ′ = U(h ′ ) be the solution of Lu h ′ = 0 with the data T u h ′ = T h ′ (given by Lemma 9.3). As before, u h r)) by monotonicity. So (11.76), which states the non-degeneracy of the harmonic measure, gives (11.174) The combination of (11.173) and (11.174) yields the existence of K 5 > 0 (that depends only on d, n, C 0 and C 1 ) such that K 5 u(X 0 )u h ′ -u ≥ 0 on Ω∩[B(x 0 , 7 4 r)\B(x 0 , 13 8 r)]. 
It is easy to check that K 5 u(X 0 )u h ′u satisfies all the assumptions of Lemma 11.32, with E = B(x 0 , 7 4 r) and F = B(x 0 , 13 8 r). This is because u ∈ W r (B(x 0 , 2r)), u h ′ ∈ W , T u h ′ ≥ 0, and T u = 0 on Γ ∩ B(x 0 , 2r). Then by Lemma 11.32 (11.175) u ≤ K 5 u(X 0 )u h ′ for X ∈ Ω ∩ B(x 0 , 7 4 r), and since u h ′ (X) ≤ ω X (Γ \ B(x 0 , 5 4 r)) for all X ∈ Ω, (11.176) u(X) ≤ Cu(X 0 )ω X (Γ \ B(x 0 , 5 4 r)) for X ∈ Ω ∩ B(x 0 , 7 4 r). The claim (11.172) follows. The bounds (11.163) and (11.172) imply that (11.177) u(X) v(X) ≤ C u(X 0 ) v(X 0 ) ω X (Γ \ B(x 0 , 5 4 r)) ω X (Γ \ B(x 0 , 2r)) for X ∈ Ω ∩ B(x 0 , r).
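In display form, the estimate just obtained reads

\[
\frac{u(X)}{v(X)} \;\le\; C\,\frac{u(X_0)}{v(X_0)}\,
\frac{\omega^X\big(\Gamma\setminus B(x_0,\tfrac{5}{4}r)\big)}{\omega^X\big(\Gamma\setminus B(x_0,2r)\big)}
\qquad\text{for } X\in\Omega\cap B(x_0,r),
\]

and the ratio of harmonic measures in the right-hand side is bounded by a constant that depends only on n, d, C_0 and C_1, by the doubling bound (11.113) and the monotonicity of the harmonic measure. This gives the announced estimate (11.162), and Theorem 11.146 follows.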
244,525
[ "849452" ]
[ "40", "105079", "105079" ]
01472882
en
[ "phys", "sdv" ]
2024/03/04 23:41:46
2016
https://hal.science/hal-01472882v2/file/tex00000109.pdf
Rohan-Jean Bianco Pierre-Jean Arnoux Ph.D Eric Wagnac Jean-Marc Mac-Thiong Ph.D Carl-Éric Aubin email: [email protected] P.Eng Full Minimizing pedicle screw pullout risks: a detailed biomechanical analysis of screw design and placement Keywords: Finite Element Analysis, Pedicle Screw, Pullout Test, Spinal Instrumentation Study design: Detailed biomechanical analysis of the anchorage performance provided by different pedicle screw design and placement strategies under pullout loading. Objective: To biomechanically characterize the specific effects of surgeon-specific pedicle screw design parameters on anchorage performance using a finite element model (FEM). Summary of background data: Pedicle screw fixation is commonly used in the treatment of spinal pathologies. However, there is little consensus on the selection of an optimal screw type, size, and insertion trajectory depending on vertebra dimension and shape. Methods: Different screw diameters and lengths, threads and insertion trajectories were computationally tested using a design of experiment (DOE) approach. A detailed FEM of an L3 vertebra was created including elastoplastic bone properties and contact interactions with the screws. Loads and boundary conditions were applied to the screws to simulate axial pullout tests. Force-displacement responses and internal stresses were analyzed to determine the specific effects of each parameter. Results: The DOE analysis revealed significant effects (p<0.01) for all tested principal parameters along with the interactions between diameter and trajectory. Screw diameter had the greatest impact on anchorage performance. The best insertion trajectory to resist pullout involved placing the screw threads closer to the pedicle walls using the straight-forward insertion technique, which showed the importance of the cortical layer grip. The simulated INTRODUCTION Pedicle screw fixation is commonly used in spinal instrumentation surgeries to connect rods to vertebrae in order to correct spine alignment, stabilize vertebrae and reach an arthrodesis [START_REF] Lenke | Rationale behind the current state-of-the-art treatment of scoliosis (in the pedicle screw era)[END_REF] . To be effective, the pedicle screw constructs must withstand intra-operative loading as well as physiological forces due to daily post-operative activities. In current practice, numerous screw designs (various screw shaft threads and shape, screw heads articulated with the screw shaft, materials), insertion and manipulation techniques afford the surgeon many options. However, there is little consensus on the selection of an optimal type of screw, size, and trajectory (insertion point, tapping and screw alignment) depending on vertebra dimension and shape and bone mechanical properties. These choices are determined at the discretion of the surgeon based on his/her experience and practice [START_REF] Dhawan | Thoracic pedicle screws: comparison of start points and trajectories[END_REF][START_REF] Lehman | Straight-forward versus anatomic trajectory technique of thoracic pedicle screw fixation: a biomechanical analysis[END_REF] . Computer-assisted surgery systems guide surgeons, in real-time, to properly insert the screws through the pedicles [START_REF] Nottmeier | Placement of thoracolumbar pedicle screws using three-dimensional image guidance: experience in a large patient cohort[END_REF] , but the insertion strategy itself, generally, remains empirical. 
In vitro experiments, such as axial pullout tests, provide significant insight into the biomechanics of screw-bone interactions and failure forces. Several surgical parameters have been studied such as the trajectory and entry point [START_REF]!!! INVALID CITATION !!![END_REF] . However, experimentations reveal limitations in terms of inter-individual variability (bone density, pedicle morphology, etc.) and reproducibility. Also, the surgical techniques used during those tests can significantly affect mechanical strength [START_REF] Brown | The effect of starting point placement technique on thoracic transverse process strength: an ex vivo biomechanical study[END_REF] . Such limitations in determining the optimal parameters for obtaining strong pedicle screw fixation could be overcome by finite element analysis [START_REF] Wagnac | Biomechanical analysis of pedicle screw placement: a feasibility study[END_REF] . A few finite element models (FEM) have been developed, but most fail to take into account local geometric details and advanced mechanical properties such as plastic deformation, bone fracture, material properties distribution and contact friction at the bone-screw interface. Previous models focused either on detailed pedicle models [START_REF] Zhang | Effects of bone materials on the screw pull-out strength in human spine[END_REF] or on simplified complete vertebra models [START_REF] Chen | Failure analysis of broken pedicle screws on spinal instrumentation[END_REF] , which did not permit studying the detailed effects of every screw design parameter and insertion trajectory individually or combined simultaneously. The objective of this study was to analyze the bone-screw mechanical interaction and test several parameters such as the pedicle screw size, thread design, insertion point and trajectory that could minimize the risk of instrumentation failure using a detailed FEM of an instrumented vertebra. MATERIALS AND METHODS For this study, two different existing multi-axial screws were used with different thread patterns (Figure 2) from the CD Horizon spinal systems (Medtronic Inc., Memphis, USA). The first screw (CD HORIZON ® LEGACY ™ screw) was a cylindrical equally spaced singlelead thread screw, while the second (CD HORIZON ® OSTEOGRIP ™ screw) had a slightly conical inner core and the pitch of the distal part was dual-lead thread (double pitch in the pedicle region). In addition, the single-lead thread screw crests were thicker and had spherical bases, while the dual-lead thread screw crests were thinner and had conical bases. Two different screw lengths (40mm -50mm) and screw diameters (6.5mm -8.5mm) were tested. The screws were virtually positioned and placed through the pedicle following the free hand localization technique, which used anatomical landmarks on vertebrae during an open surgery [START_REF] Modi | Accuracy and safety of pedicle screw placement in neuromuscular scoliosis with free-hand technique[END_REF] . The entry point was enlarged by removal of the superficial cortical elements simulating the use of a bone rongeur or a burr [START_REF] Gaines | The use of pedicle-screw internal fixation for the operative treatment of spinal disorders[END_REF] . The screw tapping was modeled using a boolean operation method to remove bone at the future location of the screw. 
Two common trajectories (anatomic (AN) and straight forward (SF) [START_REF] Dhawan | Thoracic pedicle screws: comparison of start points and trajectories[END_REF][START_REF] Lehman | Straight-forward versus anatomic trajectory technique of thoracic pedicle screw fixation: a biomechanical analysis[END_REF] ) were tested for each screw (Figure 2). A design of experiment (DOE) was performed in order to biomechanically investigate both the individual and combined effects of the thread type, lengths, diameters and insertion trajectories on the fixation strength of the pedicle screws. Each parameter had two extreme values, as described above. A DOE is a statistical method enabling to determine if there is a statistically significant effect that a particular factor exerts on the dependent variables of Copyright © Lippincott Williams & Wilkins. Unauthorized reproduction of the article is prohibited. interest [START_REF] Montgomery | Design and analysis of experiments[END_REF] . The DOE was based on a Box, Hunter and Hunter full plan with four factors leading to 16 runs. The statistical analysis was performed using Statistica 8 (StatSoft, Inc., Tulsa, USA). Due to the determinist aspect of FEM simulations an alpha acceptance of less than 0.01 was chosen for significance. The geometry used for the FEM was built from CT-scan images of a L3 vertebra (contiguous slices of 0.6 mm thick) of a 50 th percentile human volunteer (European, 32 years old, 75 kg, 1,75 m) with no known spinal pathology to obtain a "generic shape". The vertebra (Figure 1) was modeled by taking into account the separation of the trabecular and cortical bone with realistic regional thickness from morphologic measurements [START_REF] Hirano | Structural characteristics of the pedicle and its role in screw stability[END_REF][START_REF] Silva | Direct and computed tomography thickness measurements of the human, lumbar vertebral shell and endplate[END_REF] . The pedicles were 13 mm in height and 11 mm in width, while the cortical bone thickness varied from 1.0 to 1.5 mm. The vertebra FEM was meshed with four node tetrahedral elements of 0.5 mm characteristic length in the peri-implant region (region of interest) and 1 mm characteristic length in the farter regions. The mesh distribution was refined through a convergence study to adapt to the region of interest and minimize the number of nodes to satisfactorily balance accuracy and computer resources. The screws external surface was modeled as a shell with characteristic triangular elements of 0.5 mm. The triangle-based elements were chosen for their ability to comply with complex geometry and their non-warpage properties. The model as a whole contained ~50 000 nodes (~250 000 elements). Rigid body properties were applied to some node groups of the model away from the region of interest and to all nodes of the screw, thus reducing computational time. The screw was considered rigid due to the high material property gradient at the interface, which was several times higher than the bone. These assumptions were verified to have marginal impact on the results. The bone/screw interface was modeled using a point/surface penalty method for the contact interface with a Coulomb type friction coefficient of 0.2 [START_REF] Liu | Biomechanical evaluation of a new anterior spinal implant[END_REF] and minimal gap of 0.05 mm. The cortical and trabecular bones were considered as homogeneous isotropic materials. 
Their properties were estimated from a previous study using an inverse finite element method based upon experimental tests performed on fresh post-mortem elderly human subjects [START_REF] Garo | Calibration of the mechanical properties in a finite element model of a lumbar vertebra under dynamic compression up to failure[END_REF] . The model integrates an elastoplastic material law (Johnson-Cook) to simulate bone failure (Table 1) [START_REF] El-Rich | Finite element investigation of the loading rate effect on the spinal load-sharing changes under impact conditions[END_REF] . Thus, before plastic deformation occurs (equivalent stress < yield stress), the material behaves as linear elastic. During plastic deformation, the equivalent stress was computed using the relation σ = a + b ε p n , where σ = equivalent stress, a = yield stress, b = hardening modulus, ε p = plastic strain (true strain), and n = hardening exponent. Once the failure plastic strain (ε max ) of a given element locally was reached, failure occurred and the corresponding element deleted, thus simulating the bone fracture. Boundary and loading conditions were applied in order to simulate screw pullout as described in the ASTM-F543 standard [START_REF]Standard Specification and Test Methods for Metallic Medical Bone Screws[END_REF] . This specific test was performed to assess the biomechanical strength of the screw anchorage by applying a ramped axial tensile force to the screw until total pullout. The external nodes of the anterior part of the vertebral body were fixed to simulate rigid embedment. In addition, a constraining slide link condition was applied to the whole screw (leaving only the translation in the screw axis free) to simulate the effect of fixation with the loading shaft and avoid any off axis displacement. The simulations were performed using the explicit dynamic FEM solver RADIOSS v5.1 (Altair Engineering inc., Troy, USA) with a kinetic relaxation scheme to perform a quasistatic analysis. The stresses along the screw threads during the pullout were analyzed. The Copyright © Lippincott Williams & Wilkins. Unauthorized reproduction of the article is prohibited. initial stiffness and the peak pullout force extracted from the computed load-displacement curve were compared with available previously published experimental data [START_REF] Abshire | Characteristics of pullout failure in conical and cylindrical pedicle screws after full insertion and back-out[END_REF][START_REF] Inceoglu | Stress relaxation of bone significantly affects the pull-out behavior of pedicle screws[END_REF][START_REF] Mehta | Biomechanical analysis of pedicle screw thread differential design in an osteoporotic cadaver model[END_REF][START_REF] Santoni | Cortical bone trajectory for lumbar pedicle screws[END_REF] for model validation. RESULTS The computed load-displacement curves exhibited a non-linear behavior (Figure 3), which could be divided into three zones. In the first part of the curve (zone A), the bone-screw construct followed a linear elastic stiffness slope, without bone damage or failure. Once yield strength was reached (zone B), bone element failure commenced at the local level in the periimplant area and plastic deformation occurred. The stiffness decreased as plastic strain failure was reached on bone elements, which contributed to an eventual total loss of bone-screw stiffness. Screw failure occurred at the end of zone B at the level of the peak pullout force. 
Zone C showed a decrease in stiffness and pullout force until the screw was totally pulled out of the bone. Compared to previously published data, the simulated initial stiffness' (1327 N.mm -1 -4800 N.mm -1 ) were slightly higher than that obtained experimentally on human cadaveric vertebrae (1100N.mm -1 -2700 N.mm -1 ) [START_REF]!!! INVALID CITATION !!![END_REF] , while the simulated peak pullout forces (220 N -750 N) were within the published range (218 -840 N) [START_REF]!!! INVALID CITATION !!![END_REF] . The discrepancy between the experimental values can be explained by the natural variability of human subjects (due to age and bone quality, vertebra size and level), the difference in screw design, and also by the poor reproducibility of such experiments [START_REF] Mehta | Biomechanical analysis of pedicle screw thread differential design in an osteoporotic cadaver model[END_REF][START_REF] Pfeiffer | Effect of specimen fixation method on pullout tests of pedicle screws[END_REF] . In the bone structure, the Von Mises stress distribution revealed high stresses at each thread, most pronounced at the tip of the screw and in the pedicle isthmus area. The overall fracture pattern initiated in the trabecular bone, around the screw tip, and propagated to the head of the screw until total pullout. Differences in the stress distribution pattern between the two thread profiles were observed (Figure 4). The cylindrical single-lead thread screw revealed an even bone stress distribution from tip to head, with higher stress reported at the tip of the screw and in the pedicle isthmus area. At the same pullout force, the conical dual-lead thread screw showed an irregular stress distribution with several stress concentration zones at each thread of the trabecular section distally. The DOE analysis revealed significant effects (P<0.01) for all tested major factors (i.e. the type, diameter, length and trajectory of the screw) on both of the indices studied (Figure 5). Screw diameter consistently had the highest effect on anchorage strength. Pareto charts report the effects of the tested design parameters, ordered in rank of importance using the "t values". Looking at peak pullout force response, the significant effects in descending order included screw diameter, insertion trajectory, thread type and screw length. The resulting initial stiffness and peak pullout force were highly correlated (r 2 =0.84), thus showing that one is a good indicator of the other. The study of the interaction between individual parameters showed significant effects for screw trajectory combined with thread type and screw trajectory combined with diameter. The anatomic trajectory allowed larger diameter screws to be placed, in addition to longer screws (Figure 5). Other combinations of factors did not reveal significant effects, as their effects were only linear predictions of the individual variables. The response distribution box plots (Figure 6) indicated that the highest initial stiffness' and peak pullout forces were obtained when the screw was longer (50 mm) and with a larger diameter (8.5 mm). The Straight Forward trajectory exhibited better biomechanical anchorage than the Anatomic trajectory. Better anchorages were obtained with the cylindrical single-lead thread screw than the dual-lead thread screw with the slightly conical diameter. 
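To make the design-of-experiment analysis behind these rankings more concrete, the short Python sketch below is illustrative only: it is not the code used in the study (which relied on Statistica 8), and the response values are placeholders standing in for the 16 simulated stiffness or peak pullout force results. It shows how main effects and two-way interaction effects of the four coded factors can be estimated and ordered, which is the principle behind the Pareto charts of Figure 5.

# Illustrative sketch (not the authors' code): ranking main and two-way
# interaction effects from a 2^4 full-factorial DOE
# (screw thread type, diameter, length, trajectory -> stiffness / peak pullout force).
import itertools
import numpy as np

factors = ["thread_type", "diameter", "length", "trajectory"]

# Coded design matrix: one row per run, levels -1 / +1 (16 runs in total).
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Placeholder responses, e.g. peak pullout force in N for each of the 16 runs;
# the real values come from the finite element pullout simulations.
response = np.random.default_rng(0).uniform(220.0, 750.0, size=len(design))

effects = {}
# Main effects: mean response at the +1 level minus mean response at the -1 level.
for j, name in enumerate(factors):
    effects[name] = response[design[:, j] > 0].mean() - response[design[:, j] < 0].mean()

# Two-way interaction effects, computed on the product column.
for (j, a), (k, b) in itertools.combinations(enumerate(factors), 2):
    col = design[:, j] * design[:, k]
    effects[f"{a} x {b}"] = response[col > 0].mean() - response[col < 0].mean()

# Pareto-style ranking by absolute effect size (the paper reports standardized
# t values from Statistica; the ordering idea is the same).
for name, eff in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:25s} {eff:+8.1f}")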
DISCUSSION Screw diameter had the greatest impact on anchorage performances, which is consistent with previous biomechanical studies showing that a screw's major diameter determines pullout strength [START_REF] Cho | The biomechanics of pedicle screw-based instrumentation[END_REF] . The major diameter is directly related to crest height, thus to the contact surface with the bone, leading to better anchorage. Furthermore, the increase of screw diameter in the isthmus of the pedicle leads to a closer connection with the cortical (harder) bone. For instance, a few crests of the 8.5mm screw were gripping the cortical bone in the pedicle region (without cortical wall violation). The pedicle fill could have been another index of interest in addition to the pedicle diameter but a single vertebra only was used in this study; therefore the filling ratio would be quite the same. At a given size, the dual thread screw had a reduced anchorage capability as compared to the single thread screw. This could be explained by the crest design and height, which are different for the same major diameter. The single-lead thread screw type has a spherical and even thread base contrary to the dual-lead thread screw type, which has a spherical thread base in the pedicle region and a conical thread base in the trabecular region. The effect of these designs resulted in different stress distribution and concentration spots for each type of screw (Figure 4). The stress concentration spots near the distal threads in the dual thread screw design lead to an earlier bone fracture, thus weaker anchorage of the dual-lead thread screw. However, the simulated bone fracture occurred at a high force level (500N-600N), such that any difference should not be an issue during "normal" intra-operative correction maneuvers or post-operative functional loads, but could be a problem in the case of excessive loads [START_REF] Wang | Biomechanical Analysis of Corrective Forces in Spinal Instrumentation for Scoliosis Treatment[END_REF] . This study focused only on axial forces; no conclusion could be made on the different loads as could be applied to screw intra-or post-operatively. The screw profile (conical vs. cylindrical) also has an influence on screw performance, as higher insertion torque is correlated to higher peak pullout force [START_REF] Cho | The biomechanics of pedicle screw-based instrumentation[END_REF][START_REF] Kwok | Insertional torque and pull-out strengths of conical and cylindrical pedicle screws in cadaveric bone[END_REF] , however it has been demonstrated that insertion torque is not a good predictor of pullout force as the relations are screw and geometry specific [START_REF] Inceoglu | Pedicle screw fixation strength: pullout versus insertional torque[END_REF][START_REF] Okuyama | Can insertional torque predict screw loosening and related failures? An in vivo study of pedicle screw fixation augmenting posterior lumbar interbody fusion[END_REF] . The results of the current study, revealed a reduced performance when using a screw with a conical profile. It is important to note that this factor was dependent on the thread type, meaning the results are a combination of both the screw profile and thread type. The screw insertion process is generally preceded by a pre-tapping step using a smaller diameter than the screw. This study did not take into account the effect of pressfit and pedicle deformation [START_REF] Inceoglu | Cortex of the pedicle of the vertebral arch. 
Part I: Deformation characteristics during screw insertion[END_REF] during screw insertion, which has an effect on screw anchorage [START_REF] Defino | Study of the influence of the type of pilot hole preparation and tapping on pedicular screws fixation[END_REF] . This initial stress state might have an impact if other insertion techniques are investigated, particularly related to the tapping process. Further work is needed to understand the influence of the screw insertion in order to implement the stress and bone deformation. The screw diameters used for this current study were larger than those generally used during corrective surgeries. The larger screws were chosen to accommodate the pedicle size of the model. To complete this investigation, further studies with a wider variety of lengths and diameters and vertebrae types should be performed. The SF trajectory leads the screw threads closer to the cortical wall, which could explain the increased stiffness and pullout force resulting in better anchorage. As the cortical bone layer has a major effect on screw anchorage [START_REF] Santoni | Cortical bone trajectory for lumbar pedicle screws[END_REF] , its interaction with the screw trajectory and diameter is particularly important. The simulated fracture patterns, beginning around the tip of the screw and propagating to the head, suggest that higher stresses occurred first in the areas around the tip and the pedicle isthmus. This result is contrary to predicted failure mechanisms from threaded assembly theory [START_REF] Guillot | Assemblages par Elements Filetes Modelisations et Calcul[END_REF] , which describes the higher stress area as the contact area around the three first inserted threads bearing 70% of the total load. The computed results showed a more uniform distribution of contact forces along the screw shaft with an increasing trend in the screw tip area. This discrepancy was also reported in other numerical studies on screw pullout in bone structures [START_REF] Chatzistergos | A parametric study of cylindrical pedicle screw design implications on the pullout performance using an experimentally validated finite-element model[END_REF][START_REF] Wirth | The discrete nature of trabecular bone microarchitecture affects implant stability[END_REF] , which agree with the reported results. Such a divergence could be explained by the material property gradient between bone and screw, but also by the differences in thread design between industrial and medical screws. The results suggest the threaded assembly theory, used for standard industrial screws, [START_REF] Guillot | Assemblages par Elements Filetes Modelisations et Calcul[END_REF] may not be adequate for pedicle screw design anchored in bone. The initial toe-region was not considered in this study as it represented a numerical effect of the contact interface definition. Although this phenomenon is observable in experimental curves, it only represents the early effects of loading and has no outlook on the biomechanical performance of the screws. Even though the geometry used represents a "generic" vertebra shape, the material properties were derived from a finite element inverse method using lumbar vertebrae harvested from elderly subjects (~ 70 years old). 
This is consistent with most reported experimental tests, mainly performed on elderly and osteoporotic vertebrae [START_REF] Abshire | Characteristics of pullout failure in conical and cylindrical pedicle screws after full insertion and back-out[END_REF][START_REF] Inceoglu | Stress relaxation of bone significantly affects the pull-out behavior of pedicle screws[END_REF][START_REF] Mehta | Biomechanical analysis of pedicle screw thread differential design in an osteoporotic cadaver model[END_REF][START_REF] Santoni | Cortical bone trajectory for lumbar pedicle screws[END_REF] . The bone material properties used in the model resulted from a finite element inverse method of slow dynamic compression tests [START_REF] Garo | Calibration of the mechanical properties in a finite element model of a lumbar vertebra under dynamic compression up to failure[END_REF] , which may explain the higher stiffness values that were numerically obtained. The conditions of mechanical property extraction (axial compression) were different from the conditions described in this study; however it was assumed that the bone properties were isotropic. This assumption might lead to higher anchorage because bone properties have lower Young's modulus in the transverse directions than in the axial direction [START_REF] Schmidt | Application of a calibration method provides more realistic results for a finite element model of a lumbar spinal segment[END_REF] . Furthermore, the model was intended to be used for relative comparisons and not as an absolute prediction tool, thus diminishing any numerical-experimental disparity issues. The elasto-plastic material law (Johnson-Cook) used in this study assumes that the bony structures are isotropic and have a homogeneous distribution. In reality, the pedicle and the vertebral body have a complex and irregular bone distribution [START_REF] Hirano | Structural characteristics of the pedicle and its role in screw stability[END_REF][START_REF] Silva | Direct and computed tomography thickness measurements of the human, lumbar vertebral shell and endplate[END_REF][START_REF] Defino | Role of cortical and cancellous bone of the vertebral pedicle in implant fixation[END_REF] leading to anisotropic properties. Further investigations, implementing heterogeneity in bone properties could be an alternative to investigate such effects. Alternative methods to model more accurately the complex internal bone structures such as micro Finite Element Analysis [START_REF] Wirth | The discrete nature of trabecular bone microarchitecture affects implant stability[END_REF] or Smoothed Particle Hydrodynamics (SPH) exist, but require higher computational resources. This study only focused on the behavior of individual screws under axial loading. Any effects on triangulation screw pairing or instrumentation assembly were not taken into account in the current study. The intended use of this model was to perform relative comparisons using a DOE approach, rather than absolute value analyses. The model created was extracted from a healthy man with no known spinal pathologies or deformities and the material properties were Copyright © Lippincott Williams & Wilkins. Unauthorized reproduction of the article is prohibited. considered as ideal (homogeneous and no osteoporosis). At this current state, no extrapolation should be made for deformed pedicles or osteoporotic vertebrae. Additional work would be required to modify this generic model to a more personalized one, in terms of geometry and material properties. 
CONCLUSION The design of experiment determined that the diameter of the screw had the highest impact on mechanical anchorage. The simulated cylindrical single-lead thread screws presented better biomechanical anchorage than the conical dual-lead thread screws in axial loading conditions. The trajectory promoting a closer connection with the cortical bone provided a better mechanical anchorage. A detailed and realistic FEM of an instrumented lumbar vertebra was developed to analyze and compare screw designs and trajectories. The developed comprehensive FEM is a valuable tool to analyze pedicle screw biomechanics. It is a promising alternative to complex, expensive and specimen-specific in vitro experimental tests. The recommendations provided can improve single screw performance under axial loading. Further studies should be undertaken to refine and fully validate this model, and to examine other types of loads as well as whole construct effects. The model could also be adapted to further analyze patient-specific characteristics, such as osteoporotic or deformed vertebrae. Looking to the future, this approach could lead to a computerized testing platform for new implant designs or to a surgery-planning tool to help clinicians.

FIGURES:
Figure 1: (caption not recovered in this version)
Figure 2: Multi-axial screws and screw trajectories superimposed on the vertebra shape.
Figure 3: Generic load-displacement curve from the simulated pullout test.
Figure 4: Peri-implant Von Mises stress distribution in the trabecular bone structure.
Figure 5: Pareto chart for the initial stiffness (a) and peak pullout force (b).
Figure 6: Box plot distribution of initial stiffness (top row) and peak pullout force (bottom row).

TABLE:
Table 1: Material properties of the cortical and trabecular bone used in the FEM [START_REF] Garo | Calibration of the mechanical properties in a finite element model of a lumbar vertebra under dynamic compression up to failure[END_REF]

Material properties             Cortical bone   Trabecular bone
Density (kg/mm3)                2.0E-06         2.0E-07
Young modulus, E (MPa)          2625            48.75
Poisson ratio, ν                0.3             0.25
Yield stress, a (MPa)           105             1.95
Hardening modulus, b (MPa)      875             16.3
Hardening exponent, n           1               1
Failure plastic strain, ε max   0.04            0.04

Funded by the Natural Sciences and Engineering Research Council of Canada (Industrial Research Chair with Medtronic of Canada).
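As a simple illustration of how the constitutive parameters of Table 1 act together, the following Python sketch is not part of the original study; it merely evaluates the hardening relation σ = a + b ε_p^n quoted in the Methods, with element failure once ε_max is reached, for both bone materials at a few plastic strain levels.

# Illustrative sketch only: equivalent stress sigma = a + b * eps_p**n
# with the Table 1 parameters, up to the failure plastic strain eps_max
# (beyond which the corresponding element is deleted in the FEM).
bone = {
    "cortical":   {"E": 2625.0, "a": 105.0, "b": 875.0, "n": 1.0, "eps_max": 0.04},
    "trabecular": {"E": 48.75,  "a": 1.95,  "b": 16.3,  "n": 1.0, "eps_max": 0.04},
}

def equivalent_stress(material, eps_p):
    """Equivalent stress (MPa) at plastic strain eps_p; None once failure is reached."""
    p = bone[material]
    if eps_p > p["eps_max"]:
        return None  # element considered failed / deleted
    return p["a"] + p["b"] * eps_p ** p["n"]

for name in bone:
    for eps_p in (0.0, 0.02, 0.04):
        print(name, eps_p, equivalent_stress(name, eps_p))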
30,500
[ "745148", "769012" ]
[ "486257", "581100", "581100", "486313", "486257", "323994" ]
00869818
en
[ "spi" ]
2024/03/04 23:41:46
2012
https://hal.science/hal-00869818v2/file/doc00015907.pdf
Neïla Bhouri email: [email protected] Maurice Aron Jari Kauppila Relevance of travel time reliability indicators: a managed lanes case study Keywords: Reliability, Travel time, Managed lanes, Hard shoulder, Recurrent congestion, Assesment Managed lanes operations refer to multiple strategies for recurring congestion control by increasing the road capacity or adapting its configuration to the demand level. As a result, the evaluation of their impacts usually focuses on congestion or safety-related indicators. However, with the growing prosperity, consumers demand higher quality transport services, for which reliable transport networks are central. This paper is focused on the travel time reliability assessment of a managed lane experience from a French motorway. The paper shows results from the field test experience of the hard shoulder dynamic use on the A4-A86 motorway in the east of Paris. Further to the reliability assessment, the paper focuses on the reliability indicators. It particularly shows the weakness of the skew of the distribution of travel time indicator. Introduction Managed lanes (e.g., dynamic peak hour lanes, additional lanes, HOV lanes, bus lanes) play an increasing role in traffic operations. This topic is becoming more and more important to tackle recurring congestion. Various practices are already initiated in several European countries. Managed lanes operations refer to multiple strategies for recurring congestion control by increasing the road capacity or adapting its configuration to the demand level. Typically, the increase of capacity is obtained through a redefinition of the transverse profile within the roadway limits. Several technical alternatives are possible, such as the reduction of lane width and the temporary or permanent use of the hard shoulder as a running lane. In France, dynamic use of the hard shoulder dates back to the 1960s with the introduction of reversible lanes (Quai de Seine in Paris, the Olympic Games in Grenoble, the Saint-Cloud Tunnel in Paris) [START_REF] Nouvier | La gestion dynamique des voies. Historique et perspectives[END_REF]. France has two examples of hard-shoulder running schemes. One is on a section of the motorway to the east of Paris, where the hard shoulder is open to cars when traffic becomes saturated. The other is on a section of the motorway leading into Grenoble, where a special lane is open to public transport only when the motorway becomes saturated. 2 In this paper we apply a number of indicators for travel time reliability that have been advocated in a range of studies. The paper is organized as follows; in section 2, the standard traffic impact assessment of any management strategies is described. Section 3 is dedicated to the description of the travel time reliability approaches and in particular the introduction of the definitions of a number of reliability indices used. Section 4 gives descriptions of the French sites where the hard shoulder running (HSR) have been experimented as well as the assessment data. In section 5, travel time reliability results are provided. Based on these results, a discussion regarding the reliability indicators is conducted especially related to the width and skew of the distribution of travel times. Finally, Section 6 draws the main conclusions of this paper. Managed lanes assessment Several service quality indicators have been developed, with a direct impact on network reliability [START_REF] Cohen | CASE STUDIES IDENTIFICATION AND ASSESSMENT. Task 2.4: Local Main Road Networks. 
WP2[END_REF]. [START_REF] Goodin | Operational Performance Management of Priced Facilities[END_REF] [START_REF] Aron | From traffic indicators to safety indicators. Application for the Safety Assessment of an ITS Activothere Traffic Management Experiment[END_REF] Socioeconomic aspects Until now, however, authors aren't aware of any assessment of the reliability of managed lanes based on the criteria set out in this report. How to measure reliability When monitoring reliability, it is important to distinguish between network operator perspective and user perspective. For the network operator, the focus is on network quality (what is provided and planned) while for the user, the focus is on how the variability of travel time is experienced [START_REF] Bhouri | Isolated versus coordinated ramp metering: Field evaluation results of travel time reliability and traffic impact[END_REF]. Several definitions for travel time reliability exist and many different relevant indicators have been proposed. Here we use the same breakdown as presented in previous studies and divide these measures into four categories as in [START_REF] Lomax | Selecting travel reliability measures[END_REF] and [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF]: 1. Statistical range methods; 2. Buffer time methods; 3. Tardy trip measures; 4. Probabilistic measures. 3 Standard deviation (STD) and coefficient of variation (COV) show the spread of the variability in travel time. They can be considered as cost-effective measures to monitor travel time variation and reliability, especially when variability is not affected by a limited number of delays and when travel time distribution is not much skewed (2). Standard deviation is defined as (1) while coefficient of variation is written as (2) where M denotes the mean travel time, TT i the i th travel time observation and N the number of travel time observations. A further consideration to use the standard deviation as a reliability indicator derives from recent studies that recommend defining travel time reliability as the standard deviation of travel time when incorporating reliability into cost-benefit assessment [START_REF] Heatco | Developing Harmonised European Approaches for Transport Costing and Project Assessment[END_REF]. As a result, standard deviation is used to measure reliability in few countries where guidelines for cost-benefit assessment include reliability (New Zealand Transport Agency, 2008). Both standard deviation and coefficient of variation indicate the spread of travel time around some expected value. Without any assumption on the travel time distribution shape, there isn't a close relationship between the standard deviation and percentiles. An indicator directly based on percentiles has not this drawback, and does not require making any assumption on the travel time distribution. Therefore, studies have proposed metrics for skew skew and width var of the travel time distribution [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF]. The wider or more skewed the travel time distribution the less reliable travel times are. In general, the larger skew indicates higher probability of extreme travel times (in relation to the median). The large values of var in turn indicate that the width of the travel time distribution is large relative to its median value. 
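Assuming the travel times are available as a plain array of observations (for instance one value per six-minute period), the statistical-range measures and the percentile-based width and skew metrics introduced above can be sketched as follows; the helper names are illustrative and not taken from the paper.

```python
import numpy as np

def statistical_range_measures(travel_times):
    """Spread-based measures: standard deviation (eq. 1) and coefficient of variation (eq. 2)."""
    tt = np.asarray(travel_times, dtype=float)
    mean_tt = tt.mean()
    std = tt.std(ddof=1)       # sample standard deviation, 1/(N-1) normalisation
    return std, std / mean_tt

def width_and_skew(travel_times):
    """Percentile-based width and skew of the travel time distribution:
    lambda_var  = (TT90 - TT10) / TT50
    lambda_skew = (TT90 - TT50) / (TT50 - TT10)
    """
    tt10, tt50, tt90 = np.percentile(travel_times, [10, 50, 90])
    return (tt90 - tt10) / tt50, (tt90 - tt50) / (tt50 - tt10)
```

The two ratios returned by `width_and_skew` are the λvar and λskew metrics discussed in the remainder of the paper.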
Previous studies have found that different highway stretches can have very different values for the width and skewness of the travel time and propose another indicator (UL r ) that combines these two and removes the location specificity of the measure [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF]. Skewness and width indicators are defined as (5) where L r denotes the route length and TT X is the X th percentile travel time. Other indicators, especially the Buffer Index (BI) appears to relate particularly well to the way in which travelers make their decisions. Buffer time (BT) is defined as the extra time a user has to add to the average travel time so as to arrive on time 95% of the time. It is computed as the difference between the 95th percentile travel time (TT 95 ) and the mean travel time (M). The Buffer Index is then defined as the ratio between the buffer time and the average travel time (6) The Buffer Index is in users' assessments of how much extra time has to be allowed for uncertainty in travel conditions. It hence answers simple questions such as "How much time do I need to allow?" or "When should I leave?". For example, if the average travel time equals 20 minutes and the Buffer Index is 40%, the buffer time equals 20 × 0.40 = 8 minutes. Therefore, to ensure on-time arrival with 95% certainty, the traveler should allow 28 minutes for the normal trip of 20 minutes. Planning Time (PT) is another concept used often. It gives the total time needed to plan for an on-time arrival 95% of the time as compared to free flow travel time. The Planning Time Index (PTI) is computed as the 95th percentile travel time (TT 95 ) divided by free-flow travel time (TT free-flow ). For example, if PTI = 1.60 and TT free-flow = 15 minutes, a traveller should plan 24 minutes in total to ensure on-time arrival with 95% certainty. Because these indicators use the 95-percentile value of the travel time distribution as a reference of the definitions, they take into account more explicitly the extreme travel time delays. A number of other indicators have been proposed in literature. These include: Misery Index, linked to the relative distance between mean travel time of the 20% most unlucky travelers and the mean travel time of all travelers. Probabilistic indicator, which is the probability that travel times occur within a specified interval of time related to the median travel time. However, we will not consider these in the current paper. Dynamic use of hard shoulder on the French A4-A86 motorway Section TC A4-A86: dynamic use of the hard shoulder In the east of Paris, a three-lane urban motorway (A4) and a two-lane urban motorway (A86) share a four-lane 2.3 km long weaving section. As the traffic flows of the two motorways are added, traffic is particularly dense at some hours on the weaving section, renowned as the greatest traffic bottleneck in Europe. Until summer 2005, 280 000 vehicles using this stretch of road every day used to form one of the M M TT BI 5 A86 A86 A4 A4 worst bottlenecks in French history, with over 10 hours' congestion a day and tailbacks regularly averaging 10 km. Traffic would be saturated by 6.30 a.m. and the situation would not revert to normal until 8.30 p.m. A hard shoulder running (HSR) experiment was launched in July 2005. It gives drivers accessat peak timesto an additional lane on the hard shoulder where traffic is normally prohibited. The size of the traffic lanes has been adjusted. 
From the standard width of 3.50 m, they have been reduced to 3.2 m. The opening and closure of this lane are activated from the traffic control centre according to the value of the occupancy measured upstream of the common trunk section (hard shoulder opened if occupancy is greater than 20% and closed if less than 15%). Moveable safety barriers are installed on the right side of the additional lane. When this lane is closed, devices pivot leading to the blocking of the hard shoulder. These closure devices are installed at several key locations on the section so that drivers can see them whatever their position and are thus dissuaded from using the lane (Figure 2). The barriers were tested between June and October 2004 on a non-traffic experimental site. The width of the hard-shoulder has been increased (to 3m) and the width of the other lanes reduced from the standard 3.5m to 3.2m. 6 Fig. 2. The weaving A4-A86 section, with the 5 th lane, upstream and downstream sections, eastbound Safety has been improved by the installation of automatic incident detection cameras. In the event of an incident or accident when the lane is open, stationary vehicles on the hard shoulder lane can be detected, leading to the closure. Additional safety is provided by speed control radars on the A4 motorway in both traffic directions. Data description Inductive loops provide traffic flow, occupancy and average speed for each lane. Assessing this road operation requires to consider not only the traffic on the 2.3 km weaving section but also the traffic downstream. 0.7 km downstream stretch has been included in each direction. Data has been analysed for three years (2000)(2001)(2002) before the implementation of the device and one year (2006) after. Three inductive loops in the eastbound direction (two on the weaving section and one downstream) and four inductive loops in the westbound direction were used for computing the travel times. At each six-minute period of the year 2006 where traffic data were available we associated one period in 2000, 2001, or 2000 with corresponding available traffic data (same month, same day in the week, same hour in the day and six-minute period in an hour). A few traffic data were missing, some others have been disregarded when the recorded average speed appears extreme (higher than 150km/h or lower than 5 km/h). We will focus on this paper only on 2002 and 2006 data and only to the eastbound part of the motorway. Cleaning the 87600 six-minutes periods data by year (2006 and 2002) The great number of six-minute periods available allows for some confidence in the following analysis. Note that in 2002, as HSR was not installed, the "open" periods are the periods corresponding to the 2006 periods where HSR was effectively opened. This matching prevents to potential bias if unavailable data in 2006 were not distributed as unavailable data in 2002. Findings The impacts of HSR on the travel time and on travel time reliability are identified with an observational before/after study on the weaving section completed by downstream sections. As Jacques 7 Chirac, former president of France, launched in 2003 an important campaign for road safety and against speeding, it is necessary to study the impact of this campaign on speed thus on travel time, in order not to confound the impacts of HSR and of the speed reduction campaign. Fortunately, the speed reduction, which is synonymous of an increase in travel time, was important only at off-peak (when HSR was not opened). 
We can assume that, during peak hours, speeding was very limited in the "before" period, since the average speed was very low. Table 2 shows this increase in travel time between 2002 and 2006 when HSR was closed, and a decrease of travel time (due to the decrease of congestion) when HSR was opened during daylight. Note that HSR was also opened during (limited) nightly periods which were partly congested (HSR leads then to decrease TT) and partly non congested (the speeding campaign leads then to increase TT)the final result being an increase of average TT. " Before" means 2002, and "open PT (the 95 th TT percentile) decreases when HSR is open, due to the reduction of congestion. On the contrary PT is stationary when HSR is closed (daylight). BT (the difference between the 95 th TT percentile and the average TT) decreases when HSR is open, due to the decrease of the 95 th TT percentile, although the average TT also decreases. Note that BT also decreases when HSR is closed (daylight)-this is then due to the increase in TT average and not in any decrease in TT 95 .; this is less favourable for drivers, but still remains an increase in reliability: Remark. Due to a decrease of night traffic (not presented in Table 2.), TT reliability is also improved during night, when HSR is closed. The HSR effect may be split in two components: A direct effect on travel time reduction and on travel time variance reduction, 8 an "indirect" effect; indeed when comparing the daily distribution of traffic between off-peak and peak hours (before HSR implementation) to the distribution after, a shift of some traffic from daylight off-peak hours (HSR closed) to peak hours (HSR open) has been observed; daylight traffic increased by 2% at peak hours, and decreased by 5% at off-peak This shift might be due to the better traffic conditions when HSR is opened. We assume that some vehicles willing to drive during peak hours, were, during the period "before", constrained to drive during off-peak, in order to avoid very bad peak-hour traffic conditions. Thanks to HSR and to the resulting decrease of congestion, more drivers chose to circulate at peak hours, and less at off peak periods. Reductions of travel time and of its variance result at off peak. Without the "indirect" effect, the travel time reduction during peak hours as well as the travel time increase during off-peak, would have been larger. However it is no use to try to distinguish the part of each component in the travel time reduction or in the travel time variance reduction, because the drivers experienced the global result of these two components. Are skewness and width metrics good indicators for the reliability assessment ? ( [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF] present λ Var , a robust measure for width of travel time. Indeed it is the width of the interval [TT 90 ; TT 10 ], where lie travel times of 80% of drivers, divided (for standardisation) by the percentile TT 50 . A large width leads to unreliability, forbidding drivers to predict accurately their travel time. [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF] argue that during congestion, unreliability of travel time is predominantly proportional to λ Var . 
This is not refuted here: the value λ Var =0.77 in 2002 can be considered as large, whereas the value λ Var =0.54 in 2006 is much less, while congestion decreased from 2002 to 2006. [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF] present also λ Skew =(TT 90 -TT 50 )/(TT 50 -TT 10 ), a robust measure for the skew of the travel time distribution. They argue that, in transient periods (congestion and dissolve), unreliability is predominantly proportional to λ Skew . However we cannot have this interpretation of λ Skew here, since we have computed λ Skew for all opened HSR periods, which include transient periods, congested and not congested periods. We say that on this large set of periods, the interpretation of λ Skew is miscellaneous, since the λ Skew numerator and denominator depend on the location of TT 50 related to the congestion. Different cases may happen. Here, in daylight periods (HSR open) in 2002, TT 50 =155,9 seconds was in congestion (speed=69,3 km/h), whereas in 2006, TT 50 =124,9 seconds (speed=86,5 km/h) was no more in congestion. In 2002, the large TT 50 (due to congestion for half drivers) implies a large λ Skew denominator (TT 50 -TT 10 )= 67.7s, and a relatively low λ Skew nominator (TT 90 -TT 50 )= 52.4s, despite of congestion. Both reasons lead to a not so high λ Skew value (0.77). In 2006, the not-large TT 50 (congestion concerning less than 50% drivers) implies a small λ Skew denominator (TT 50 -TT 10 )= 11.1s, and implies a λ Skew numerator (TT 90 -TT 50 )= 55.7s higher than in 2002. Both reasons lead to a very high λ Skew value (greater than 500% in daylight periods of 2006). Note that the high skew in 2006 is not mainly due to the right part of the distribution (high travel times) but to the left part (low travel times). The very low λ Skew denominator (TT 50 -TT 10 )= 11.1s is mainly due to a great speed homogeneity for 40% of drivers who drive in 2006 round the speed limit of 90 km/h (86,5 km/h at TT 50 , 94,9 km/h at TT 10 ). This was not the case in 2002, where the speeds corresponding to TT 50 and TT 10 were respectively 69,3 km/h and 122,km/h; In summary, λ Skew seems to be a promising indicator, even computed on a set of inhomogeneous periods, but its evolution must be discussed according to the sense of variation of TT 10 , TT 50 , TT 90 and according to the location of TT 50 . CONCLUSIONS Reliability is a new dimension for assessing traffic operations and is as important as the traditional factors such as road capacity, safety, equipment and maintenance costs. This paper presents the travel time reliability assessment of a Hard Shoulder Running experiment from a French motorway. Results reveal a positive effect on travel time reliability. In addition to the reliability assessment of the HSR running, we discuss the ability of five indicators known to accurately reveal the travel time reliability improvement. Results show that lower Planning Time increases driver satisfaction. Perhaps easier to attain, a smaller Buffer Time implies a better reliability, even if the Planning Time does not decrease. Further to these classical indicators, the paper discusses the robustness of λ Var and λ Skew indicators proposed by [START_REF] Van Lint | Travel Time unreliability on freeways: Why measures based on variance tell only half the story[END_REF] to measure respectively the width and the skew of TT distribution. 
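As a numerical check of the discussion above, the Table 3 percentiles for daylight periods with HSR open reproduce the quoted values (up to rounding of the percentiles); a short sketch:

```python
# Daylight periods with HSR open (Table 3); travel times in seconds.
percentiles = {
    2002: {"TT90": 208.3, "TT50": 155.9, "TT10": 88.2},
    2006: {"TT90": 180.6, "TT50": 124.9, "TT10": 113.8},
}

for year, p in percentiles.items():
    lam_var = (p["TT90"] - p["TT10"]) / p["TT50"]
    lam_skew = (p["TT90"] - p["TT50"]) / (p["TT50"] - p["TT10"])
    print(year, round(lam_var, 2), round(lam_skew, 2))

# 2002: lambda_var ~ 0.77 and lambda_skew ~ 0.77
# 2006: lambda_var ~ 0.53 (0.54 in the text, computed on the full data)
#       and lambda_skew ~ 5.02, i.e. above 500%
```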
It shows the effectiveness of the λ Var indicator and its robustness to indicate both reliability and congestion. Results from this HSR French experiment show however that the λ Skew indicator is not always suitable for the reliability assessment. Indeed, two factors impact traffic in this experiment: on one hand the HSR implementation and on the other hand the speed limit campaign, supported by the automatic speed control systems. The speed limit affects traffic only for non-congested periods and hence when HSR isn't open. It affects however the denominator of the λ Skew indicator which depends on this non-congested traffic. The use of this part of the TT distribution as a Bhouri, Aron, Kauppila EWGT 2012, Marne-la-Vallée 10 10 component of the λ Skew definition affects the quality of this indicator: values of λ Skew reveal more a less a lower TT median value rather than a more reliable traffic. As λ Skew isn't an effective indicator for reliability assessment, the combined indicator of width and skew, the ULr indicator is also affected and cannot therefore be considered as an effective indicator. In the future, the optimization of traffic operations should be developed with respect, among other criteria, to travel time reliability, in its various forms. research team developed guiding principles for identification, selection, and communication of performance measures. Impact assessments for dynamic use of the hard shoulder have focused on the: General indicators:  Volume of traffic, i.e. total distance covered by vehicles (in vehicle/km)  Total time spent in traffic (in vehicle/km)  Volume of congestion (in h/km). This indicator describes the size of traffic jams. It is obtained by multiplying the length of roadway -reduced to one lane of saturated traffic -by the length of time during which traffic is saturated. Impact on capacity Improvement in traffic levels of service (LoS) Average journey speed Reduced congestion Environment impact Number of accidents by traffic type/scenario Fig. 1 1 Fig. 1 The weaving section A4-A86 in both senses (additional lanes in red) Daily statistics on the duration of hard-shoulder running on working days in 2006 show an average of 5 hours' use inward Paris and 4 hours' use eastward out of the city. On Saturdays, the hard shoulder is open for an average of 4 hours into Paris and 3 hours 45 minutes in the opposite direction. On Sundays it is open in both directions for 3 hours 20 minutes. Table 2 . 2 " means that we simulate the time when it would be opened in 2002 but of course the HSR didn't exist. Impacts of HSR and of the speed reduction campaign on travel time and buffer times Unreliability decreases between 2002 and 2006 when HSR is open, as shown by the indicators: Travel time Buffer time Planning time before After Gain Before after Gain before After gain Open Daylight 160 137 -14% 86 68 -21% 246 205 -17% Night 101 125 24% 94 61 -35% 195 186 -5% Travel time Buffer time Planning time before After Gain Before after Gain before after gain Closed Daylight 132 156 18% 112 91 -19% 244 246 1% Night 95 115 21% 54 14 -74% 148 129 -13% Times are in seconds and correspond to the 3km Eastbound stretch Table 3 . 3 Percentiles of the travel time distribution and λ Var in 2002 and 2006 Travel Time (in seconds) Corresponding speed (km/h) Year 2002 2006 2002 2006 TT 90 208.3 180.6 51,8 59,8 TT 50 155.9 124.9 69,3 86,5 TT 10 88.2 113.8 122,4 94,9 λ Var . 
0.77 0.54 (λ Var row of Table 3, for 2002 and 2006 respectively). This table corresponds to daylight periods when HSR is open.

Table 4. Effects of HSR and the speed limit campaign on the skew and width indicators

                    HSR open                                HSR closed
                    Daylight             Night              Daylight             Night
Indicator       before after trend   before after trend   before after trend   before after trend
λ Var            77%    54%    -      127%   36%    -      127%   83%    -      65%    17%    -
λ Skew           77%   549%    +     1267%  421%    -      341%  515%    +     321%   159%    -
UL r (1/km)      26%    31%    +      107%   17%    -       52%   46%    -      25%     3%    -

(The HSR-open columns reflect the hard shoulder running effect; the HSR-closed columns reflect the speed limit campaign effect.)
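For completeness, the buffer-time and planning-time figures of Table 2 can be derived from before/after travel-time samples in a few lines; the sketch below is illustrative (the sample arrays and the free-flow travel time are assumed inputs, not values from the paper), with the planning time taken as the 95th percentile travel time, consistent with the Table 2 values.

```python
import numpy as np

def buffer_and_planning(travel_times, tt_free_flow):
    """BT = TT95 - mean, BI = BT / mean, PT = TT95, PTI = TT95 / free-flow TT."""
    tt = np.asarray(travel_times, dtype=float)
    mean_tt = tt.mean()
    tt95 = np.percentile(tt, 95)
    return {"BT": tt95 - mean_tt,
            "BI": (tt95 - mean_tt) / mean_tt,
            "PT": tt95,
            "PTI": tt95 / tt_free_flow}

# Hypothetical usage on one period class (e.g. daylight, HSR open):
# before = buffer_and_planning(tt_2002, free_flow_2002)
# after  = buffer_and_planning(tt_2006, free_flow_2006)
# gains  = {k: (after[k] - before[k]) / before[k] for k in before}
```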
25,129
[ "1278852" ]
[ "144000", "144000", "233007" ]
00854613
en
[ "shs" ]
2024/03/04 23:41:46
2013
https://hal.science/hal-00854613v2/file/doc00015135.pdf
Bhouri & Aron Traffic management aims to ensure a high quality of service for a maximum of users, by decreasing congestion and increasing safety. However, uncertainty regarding travel time (TT) decreases the quality of service and causes users to modify their plans regardless of the average TT. Indicators describing TT reliability are being developed and should be used in the future both for the optimization and for the assessment of active traffic management operation. This paper describes a managed lane experiment on a motorway weaving section in France -Hard Shoulder Running (HSR) operation-at rush hours. The paper describes the data measurement and the missing data replacement process. It focuses, however, on TT reliability indicators and on their use for reliability assessment on the basis of an observational before/after study. It provides some discussion on the advantages and drawbacks of reliability indicators under different traffic conditions. The before/after study reports not only the effect of the HSR operation but also of a speed limit campaign which affected the free flow travel time. The study particularly shows the difference between using buffer times or using buffer indexes. The paper also discusses the difficulty of interpreting the skew of TT distribution for travel reliability. INTRODUCTION Managed lanes operations refer to multiple strategies for increasing road capacity or adapting its configuration, in order to favor one transportation mode (bus, taxis, high occupancy vehicles), or to reduce recurrent congestion. In this latter case, typically the increased road capacity is obtained through a redefinition of the transverse profile within the roadway limits. Several technical alternatives are possible, such as the reduction of lane width and the temporary or permanent use of the hard shoulder as a running lane. In France, dynamic reversible lanes (according to the commuter traffic direction) have been introduced since the 1960s (Quai de Seine in Paris, Saint-Cloud Tunnel in Paris and for the Winter Olympic Games in Grenoble, in 1968). A static Hard Shoulder Running (HSR) operation has been implemented on a motorway weaving section (A3-A86 motorway) with a slight negative impact on safety due to higher speeds even at peak hours. Later, a dynamic HSR operation was implemented on A4-A86 motorway weaving section, only at rush hours. This HSR operation assessment did not show a negative impact on safety [START_REF] Aron | Application for the Safety Assessment of an ITS Activothere Traffic Management Experiment[END_REF] and showed an improvement of traffic conditions with regards to classical traffic impact indicators [START_REF] Princeton | A Priori Assessment of Safety Impacts of Traffic Management Operations[END_REF][START_REF] Princeton | Traffic Management Evaluation Based on Link Performance Measures[END_REF]. However until now, reliability aspects have not been given the attention they deserve. Reliability is a new issue. Studies show that congestion has an impact not only on average travel time (TT) but also on TT reliability, and there is much evidence that the variability of TT may be more important than its mean value. If a road is constantly congested, users can plan their travel accordingly while unpredictable travel conditions impose great frustration [START_REF]OECD/ITF Improving Reliability on Surface Transport Networks[END_REF][START_REF] Fhwa | TT reliability: Making it there on time, all the time[END_REF]. 
Many researchers have been interested in finding ways to measure and gave a value to TT reliability [START_REF]Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability[END_REF][START_REF] Chen | Travel-Time Reliability as a Measure of Service[END_REF] but little has been done on reliability benefits of management operations [START_REF] Bhouri | Managing Highways for Better Reliability -Assessing Reliability Benefits of Ramp Metering, Transportation Research Record[END_REF][START_REF] Bhouri | Isolated versus coordinated ramp metering: Field evaluation results of travel time reliability and traffic impact[END_REF]. Our objective in this paper is: first, to assess the impact of the A4-A86 HSR operation on TT reliability by the use of well-known key performance indicators; and second, to discuss the effectiveness of these indicators. Discussion on the use of reliability indicators according to the traffic conditions and to their evolution from the "before" period to the "after" period the implementation of this HSR operation is very interesting. It should be noted that in the interval between the "before" and "after" periods, a speed limit campaign for greater safety was launched by Jacques Chirac, former president of France. This speed campaign reduced high speeds and was thus effective only at off-peak hours when HSR was not opened. As a consequence, it affected the free-flow value as well as the mean value of TT. This contributed to show the weakness of reliability indicators based on these values as it does not indicate the effects of the congestion, which is the main source of unreliability, but only the free-flow variation consequences. Impacts of HSR on the TT and on its reliability are identified with an observational before/after study on the weaving section completed by downstream sections. Data was analyzed for the three years (2000, 2001 and 2002) before the implementation of the HSR operation and for one year (2006) after. First of all, speed data were cleaned in order to eliminate outlier values: the empirical 6-minute average speeds are considered as outliers when they are below or above thresholds. In this case, a function of the ratio flow/occupancy was used in order to try to replace outlier speeds. This paper, however, focuses only on the reliability results and the forcefulness of the indicators on the basis of the year 2002 data for the "before" period and the year 2006 data for the "after" period. This paper is organized as follows: Section 2 is dedicated to the description of the TT reliability approaches and in particular the introduction of the definitions of a number of Bhouri & Aron 4 reliability indices used. Section 3 gives the descriptions of the French site where the HSR was tested. The assessment data and the methods used to ensure their quality and replace outlier data are described in Section 4. Section 5 provides TT reliability results and a discussion of the reliability indicators, especially related to the width and skew of the TT distribution. Finally, Section 6 gives the conclusion of the paper. RELIABILITY INDICATORS When monitoring reliability, it is important to distinguish between the network operator perspective and the user perspective. For the network operator, the focus is on network quality (what is provided and planned) whereas, for the user, the focus is on how the variability of TT is experienced. Many different relevant indicators have been proposed. 
Here we use the same breakdown as presented in previous studies and divide these measures into four categories as in [START_REF] Lomax | Selecting travel reliability measures[END_REF] and ( 11 They can be considered as cost-effective measures to monitor TT variation and reliability, especially when variability is not affected by a limited number of delays and when TT distribution is not greatly skewed. Standard deviation is defined as: ∑ - - = N 2 ) M TT ( 1 N 1 STD i (1) while coefficient of variation is written as M STD COV = (2) where M denotes the mean TT, TT i the i th TT observation and N the number of TT observations. Standard deviation is often used in statistical studies because it is easy to compute and because it is linked to a confidence interval assuming a normal distribution. However, the TT distributions are often dissymmetric (due to congestion) and thus far from the Gaussian distribution. So the coefficients linking the width of the confidence intervals to the standard deviation (as the value "1.96 x standard deviations" for the 95% confidence interval) are no longer valid. Therefore, studies have proposed metrics for skew λ skew and width λ var of the TT distribution [START_REF] Van Lint | TT unreliability on freeways: Why measures based on variance tell only half the story[END_REF]. TT X is the X th percentile TT. The wider or more skewed the TT distribution, the less reliable TTs are. In general, the larger λ skew indicates higher probability of extreme TTs (in relation to the median). The large values of λ var in turn, indicate that the width of the TT distribution is large relative to its median value. Previous studies have found that different highway stretches can have very different values for the width and skewness and proposed another indicator (UL r ) that combines these two and removes the location specificity of the measure [START_REF] Van Lint | TT unreliability on freeways: Why measures based on variance tell only half the story[END_REF]. Where L r denotes the route length and. - The buffer methods focus on "the extra percentage of TT due to TT variability on a trip that a traveler should take into account in order to arrive on time". These types of indices, especially the Buffer Index (BI) appears to relate particularly well to the way in which travelers make their decisions [START_REF] Bhouri | Managing Highways for Better Reliability -Assessing Reliability Benefits of Ramp Metering, Transportation Research Record[END_REF]. Buffer time (BT) is defined as the extra time a user has to add to the average TT so as to arrive on time 95% of the time. It is computed as the difference between the 95th percentile TT (TT 95 ) and the mean TT (M). The BI is then defined as the ratio between the BT and the average TT M M TT BI -= 95 [START_REF]Cost-Effective Performance Measures for Travel Time Delay, Variation, and Reliability[END_REF] Planning Time (PT) is another concept often used. It gives the total time needed to plan for an on-time arrival 95% of the time as compared to free-flow TT. The Planning Time Index (PTI) is computed as the ratio between the PT and the free-flow TT (TT free-flow ) flow free TT TT PTI -= 95 [START_REF] Chen | Travel-Time Reliability as a Measure of Service[END_REF] For example, if PTI = 1.60 and TT free-flow = 15 minutes, a traveler should plan for 24 minutes in total to ensure on-time arrival 19 times of 20. 3-Tardy trip measures indicate unreliability impacts using the amount of late trips. 
If travelers only use the average trip time for their travel plans, they will not arrive exactly on time, but can arrive either early or late to their destinations. A Misery Index (MI) calculates the relative distance between the mean of the 20% of drivers having the highest TT and the mean TT of all travelers. It is defined as M M M MI 80 TT TT i - =  (8) Where TT 80 is the 80 th percentile travel time. 4 -Probabilistic measures (Pr) calculate the probability that TTs occur within a specified interval of time. Probabilistic measures are parameterized in the sense that they use a threshold TT, or a predefined time window, around some specific travel time threshold to differentiate between reliable and unreliable TTs. Probabilistic measures are useful to present policy goals, such as the Dutch target for reliability, according to which "at least 95% of all TT should not deviate more than 10 minutes from the median TT" : Pr (TT i ≤ β + TT 50 ) ≥ 0.95, β =10 minutes for routes longer than 50 km. 50 ). 1 ( ) ( Pr TT TT i β + ≤ (9) In our case, we used the proportional formula given by equation 9. β =0.2 is used, which means that we compute the probability that TTs do not deviate by more than 20% from the median TT. THE TEST SITE DESCRIPTION: THE A4-A86 MOTORWAY In the east of Paris, a three-lane urban motorway (A4) and a two-lane urban motorway (A86) share a four-lane, 2.3 km long, weaving section. As the traffic flows of the two motorways are merged, traffic becomes particularly dense at some hours on the weaving section, known as the greatest traffic bottleneck in Europe. Until summer 2005, 182 000 vehicles using this stretch of road every day used to form enormous bottlenecks, with over 5 hours' congestion a day and tailbacks regularly averaging 10 km. Traffic would be saturated by 6.30 a.m. and the situation would not revert to normal until 8.30 p.m. A HSR experiment was launched in July 2005, which gives drivers access -at peak times -to an additional lane on the hard shoulder where traffic is normally prohibited. The openings and closings of this lane are activated by the traffic control centre according to the value of road occupancy of the common trunk section, measured upstream (opening if road occupancy is greater than 20% and returning to closing if less than 15%). Moveable safety barriers are installed on the right side of the additional lane. When this lane is closed, closure devices pivot to block the hard shoulder. These devices are installed at several key locations on the section so that drivers can see them whatever their position and are thus dissuaded from using the lane. The width of the hard-shoulder has been increased to 3m and the width of the other lanes reduced from the standard 3.5m to 3.2m. Safety has been improved by the installation of automatic incident detection cameras. In the event of an incident or accident when the lane is open, vehicles on the hard shoulder lane can be detected and the hard shoulder is then to the closed. Additional safety measures are provided by speed control radars on the A4 motorway in both traffic directions. DATA DESCRIPTION Inductive loops provide data regarding traffic flow, occupancy and average speed for each lane every six minutes. Although data were available on a 8-km-long stretch (in each direction), analyzed here are only data on a 3-km long stretch in the eastbound direction (2.3 km on the weaving section and 0.7 km downstream). 
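The tardy-trip and probabilistic measures just defined (the Misery Index of equation 8 and the proportional form of equation 9 with β = 0.2) are straightforward to evaluate on the six-minute travel-time series used in this study; a minimal sketch with illustrative names:

```python
import numpy as np

def misery_index(travel_times):
    """MI = (mean of the 20% highest travel times - overall mean) / overall mean (eq. 8)."""
    tt = np.asarray(travel_times, dtype=float)
    mean_tt = tt.mean()
    worst_mean = tt[tt >= np.percentile(tt, 80)].mean()
    return (worst_mean - mean_tt) / mean_tt

def probabilistic_indicator(travel_times, beta=0.2):
    """Share of periods whose travel time stays within (1 + beta) times the median TT (eq. 9)."""
    tt = np.asarray(travel_times, dtype=float)
    return float(np.mean(tt <= (1.0 + beta) * np.median(tt)))
```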
Data have been analyzed for three years (2000, 2001 and 2002) before the implementation of the device and one year (2006) after. Four inductive loops in the eastbound direction (three on the weaving section and one downstream) were used for calculating the TTs presented here. Travel Time Calculation The TT for the route is calculated from the four consecutive traffic stations as follows: • At each traffic station, for each lane, the TT is the ratio of the length of the stretch of road covered by the traffic station, divided by the 6 minute average speed for the lane. However this TT might be disregarded (considered as missing data), if the average speed for the lane does not respect the thresholds (explained in the next section). • At each traffic station, the average TT over the lanes is the weighted sum of the nonmissing TTs of the different lanes. Each lane TT is weighted by the proportion of the traffic flow circulating on this lane (over the total traffic flow for the period). This process requires that the speed on at least one lane is relevant. • The TT of the route constituted by the four consecutive stretches is the sum of the TTs of the stretches. This requires that the process described in the previous paragraph is successful for the four stretches. A comparison between TTs in 2006 and 2002 is possible for all pairs of periods where this whole process was successful both in 2006 and in 2002. The frequency of success is high in absolute value (53,574 periods) out of the 87,600 periods of the year, even though there were many cases of missing or irrelevant data: in percentage terms, the frequency of success is 61% (=53,574/87,600). Data Quality and Missing Data Although data seem generally very good, some are missing, inaccurate or irrelevant. It is crucial to ensure that this does not distort the mean TT or the TT distribution. Anomalies in traffic data are identified when they are higher or lower than given thresholds -some data are unrealistic, such as an occupancy greater than 100%, a 6-minute average speed greater than 150 km/h, a 6-minute flow (by lane) greater than 400 vehicles. In these cases data for the corresponding period and lane are eliminated and then considered as missing. Thresholds for discarding very high or very low speed data impact the TT distribution and therefore could have an influence on data accuracy: errors can be made both when discarding or keeping the data. We preferred thresholds that would allow the elimination of large amounts of data as long as these missing data could be recovered. Recovery was processed as follows: • Assuming that all vehicles using the lane have the same length L, the harmonic speed average ( V , in km/h) on a given lane is inversely proportional to the occupancy (o) as given by equation [START_REF] Lomax | Selecting travel reliability measures[END_REF]. ( ) 1/V 100 o / N L   = × + λ   ( 10 ) Where L is the length of the vehicles and λ the length of the sensor (in meters). The occupancy is proportional to the time of presence on the sensor of the set of the N vehicles during the 6-minute measuring period. Generally, road occupancy data are recorded using two digits (no decimal places), which does not always give an accurate picture of a situation. Fortunately, the road occupancy data for the HSR site were recorded using four digits (two more decimal places), thereby providing a more accurate estimation of speeds. 
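A minimal sketch of the occupancy-based recovery of equation (10), assuming a common effective vehicle length L + λ and a six-minute (360 s) measurement period; the function name and default values are illustrative, and the worked example that follows can be used to check it.

```python
def occupancy_based_speed(n_vehicles, occupancy_pct, effective_length_m=5.0, period_s=360.0):
    """Harmonic mean speed (km/h) on a lane recovered from flow and occupancy.

    The N vehicles together occupy the loop during occupancy * period seconds,
    and each of them covers L + lambda metres over the loop, hence
    speed = N * (L + lambda) / (occupancy * period).
    """
    presence_time_s = (occupancy_pct / 100.0) * period_s
    return 3.6 * n_vehicles * effective_length_m / presence_time_s
```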
For example, for N=100 vehicles/6min, an occupancy o= 5% and (L+λ)=5m, the speed is 100km/h, it is 84.74 km/h if o=5.9% and it is 99.8 km/h if o= 5.01%. The speed of a vehicle is generally obtained, within two inductive loops, as the ratio of the distance between the two loops and the difference between the times of passage of the vehicle right at the two loops. Errors occur when one of the two loops defaults. Obvious errors can be corrected using one single loop: the inverse of the speed is estimated from the sum of the values of the time spent by the vehicles on the loop which is supposed to be correct. Before using this recovery, it was necessary to check whether the occupancy-based speeds were close (or not) to the empirical speeds (see Table 1). The average TT and percentiles 90% and 95% are slightly smaller with the occupancy-based recovery. This means that for very low empirical speeds (less than 5km/h) the occupancy-based formula gives a higher speed value. Note also, that it is not easy to estimate the accuracy of the equipment. As traffic measurement equipment is periodically updated, measurement accuracy may change. This fact may mitigate certain results. Traffic trend between 2000-2002 and 2006 The level of the traffic volume exerts an influence on the TT and its reliability. When analyzing the TT reliability for two years, the traffic volume trend between theses years must be taken into account. The examination of vehicles x kilometers traveled per year on the weaving section in both directions showed that traffic decreased by 2% between 2000-2002 and 2006. Traffic increased at night by 7% and decreased during the day by 4%. This corresponds to a change in drivers' behaviors. These traffic variations are not very high, so their impact on TT, although difficult to estimate without using a simulation model, should be rather low. A modification of TT reliability may have an impact on the traffic level, because some drivers are sensitive to traffic conditions; therefore, some trips are advanced, postponed or rerouted. In the case of an ex-post assessment of a new traffic management system, it is easy to describe what happened (in terms of traffic volume or of its distribution during the day). This contributes to a better understanding of the link between driver choice and TT reliability. This should be helpful for building and calibrating a driver behaviour model based on TT reliability -such a model is being required for ex-ante studies. According to these values, it was not necessary to expressly consider the special case of rain in the reliability studies. RELIABILITY ANALYSIS Impacts of HSR on the TT and on its reliability are identified with an observational before/after study on the 3 kilometers of weaving section completed by downstream sections. Furthermore, in 2003 Jacques Chirac, former president of France, launched an important campaign for road safety and against speeding. This speed limit campaign reduced the high speeds. We observed a reduction of the free-flow speed from 104 km/h in the year 2002 to 94 km/h in the year 2006 [START_REF] Cohen | A Cost-Benefit assessment of a dynamic managed lanes operation WCTR[END_REF]. Analyses are conducted on both the HSR and the speed limit campaign. Fortunately, the speed limit campaign was important only at off-peak period (when HSR was not opened). 
We can assume that, during peak hours, speeding was very limited in the "before" period, since the average speed was very low and therefore the speed limit campaign did not impact the periods when HSR would be opened. The reliability analyses in this paper were made for four different situations: day-open / closed and night open/closed. Global evaluation The HSR effect may be split in two components: • A direct effect on TT reduction and on TT variance reduction, • An indirect effect on the daily traffic distribution. Indeed, when comparing off-peak and peak hours before and after HSR implementation a shift of some traffic from day off-peak hours (HSR closed) to peak hours (HSR open) was observed. Day traffic increased by 2% at peak hours, and decreased by 5% at off-peak hours. This shift is most likely related to the improvement of traffic conditions when HSR was opened: the supply during peak hours allows the passage of more vehicles. Also a part of the demand might shift from off peak to peak hours; this part corresponds to drivers who were constrained, before the HSR implementation, to drive during off-peak hours in order to avoid very congested peakhour traffic. This shift implies a reduction of TT and its variance at off-peak hours. Without this indirect effect, the TT reduction during peak hours as well as the TT increase during off-peak hours would have been greater. However there is no point in trying to distinguish the part of each component in the TT and its variance because the drivers experienced the global result of these two components. We can see from Figure 3 that average TT increased for all periods except for the dayopen periods. During the day periods, the decrease of TT was due to the positive impact of HSR; the increase, when HSR is closed can be due to two different factors: • HSR closed periods are generally during off-peak periods; and the speed limit enforcement due to the speed limit campaign led to increased TTs, • there were some peak hours in 2006 where HSR was unavailable -for instance it was under maintenance in August 2006. The average TT increase during night periods when HSR was closed can be easily attributed to the speed limit campaign, which induced the average TT increase for the few HSR nightly opened periods. These periods were partly congested, and HSR opening then led to a decreased TT. But this congestion cleared up quickly as the demand is low at night and the speed limit campaign let to an increase in the TT as compared to the "before" period -the final result being an increase of average TT. Buffer times and buffer indexes As one can see on the left-hand side of Figure 3, BT decreased for all periods. PT decreased except for the day periods when HSR was closed, where it was nearly stationary. This difference between the PT and BT evolution for the day-closed periods is due to the increase in TT average and not in any decrease in TT 95 ; this is less favorable for drivers, but still remains an increase in reliability. FIGURE 3 Impacts of HSR and the speed limit campaign on Buffer indexes Remark On the right-hand side of Figure 3, we can see that PTI decreased sharply in 2006 for the four situations (day_open/closed and night_open/closed) differently from the PT which remains stable for the day-closed periods and decreases slightly for the night-open periods. The decrease in PTI is due to the rise in the free-flow TT. This rise is only due to the speed limit campaign and isn't influenced by traffic conditions (congestion or fluid). 
We can therefore conclude that: . The strong decrease of BT in 2006 at night, when HSR was closed, is most likely due to the speed limit enforcement campaign which increased the mean TT, inducing more homogeneous speeds in 2006 than in 2002, therefore an improvement in TT reliability. When comparing the situations in 2002 and 2006, the evolution of PTI (which decreased) is misleading for users because the PT did not always decrease. Comparing the evolution of BT and BI, we can see that both have the same evolution between 2002 and 2006. This is because the average TT depends also on the amount of congestion. The average in the denominator of the BI formula, cannot inverse the trend of the numerator, it can only accentuate or reduce the trend. The BI decrease is less important than the BT decrease for the day-open periods (BI 2002 = 54%, BI 2006 = 50%; BT 2002 = 86,3 s and BT 2006 = 68,4 s). This is because in 2002, the average speed corresponded to congested traffic conditions only for the day_open periods where average TT was equal to 160 seconds, meaning a speed of 67.5 km/h. Reliability can be defined either in time by the BT, the extra time which must be added to the average or in percentage by the BI. In this example these indicators are not equivalent since the average TT varies from one period to another, but both remain valid reliability indicators. Bhouri & Aron Tardy trip and probabilistic indicators Tardy trip measures indicate unreliability impacts using the amount of late trips. A Misery Index (MI) calculates the relative distance between the mean of the 20% highest TT (mean of TT>TT 80 ) and the mean TT of all travelers. Figure 4 shows the evolution of the Misery Index. It shows that it decreased significantly for all the periods. The decrease when HSR is closed is due to the increase of mean TT due to the speed limit campaign. This means that TT reliability is noticeably improved but not only due to the HSR effect. However, for the day-open period, mean TT decreases as shown on figure 3. Therefore reliability improvement is only due to the HSR effect. FIGURE 4 Impacts of HSR and speed limit campaign on the Misery Index and the Probabilistic Indicator The probabilistic indicator gives a different point of view. We can see on Figure 4 that the probability indicator decreases very slightly for the day_open periods (around 2%). It increases for other periods. This slight decrease comes from a decrease of the median; therefore the value (1.2 x median) corresponds to a shorter TT, more frequently exceeded. The number of travelers having high TT values was not any fewer in 2006 than in 2002, but the mean of these high TT was lower in 2006 (MI passed from 54% to 41%). Skew and width indicators Van Lint et al. [START_REF] Van Lint | TT unreliability on freeways: Why measures based on variance tell only half the story[END_REF] presented λ Skew (equation 3) and λ Var (equation 4) as a robust measure for skew and width of TT. They argued that: • During congestion, unreliability of TT is predominantly proportional to λ Var . This is not refuted here: the value λ Var =0.77 in 2002 can be considered as large, whereas the value λ Var =0.54 in 2006 is much lower, while congestion decreased from 2002 to 2006. • In transient periods (beginning of congestion and end of congestion), unreliability is predominantly proportional to λ Skew . 
However we cannot have this interpretation of λ Skew here, since we have calculated λ Skew for all opened HSR periods, which include transient periods, congested and non-congested periods. We say that for this large set of periods, the interpretation of λ Skew is difficult, since the λ Skew numerator and denominator depend on the location of TT 50 related to the congestion. Different situations may occur. In 2002, in day_open periods, TT 50 =155,9 seconds was in congestion (speed=69,3 km/h), whereas in 2006, TT 50 =124,9 seconds (speed=86,5 km/h) was no longer congested. In 2002, TT 50 was high because more than half of the periods were congested. It implied a large λ Skew denominator (TT 50 -TT 10 )= 67.7s, and a relatively low λ Skew nominator despite of congestion (TT 90 -TT 50 )=52.4s. Both reasons lead to a relatively low λ Skew value (0.77). FIGURE 5 Evolution of the width and skewness indexes CONCLUSIONS Reliability is a new dimension for assessing traffic operations and is as important as the traditional factors such as road capacity, safety, equipment and maintenance costs, etc. This paper presents the TT reliability assessment of a HSR field test from a French motorway and discusses the effectiveness of some key performance indicators. Field tests provide large amounts of data which are necessary for any assessment. The first concern is the quality of data. Bhouri & Aron In this field test, TT is estimated from speeds which are measured by inductive loop sensors. The use of six-minute average speeds instead of individual speeds (not available) tends to hide parts of the TT variability, therefore of its reliability, but nevertheless reliability indicators based on six-minute data remain meaningful. Data analysis shows the accuracy of data, although some outlier speeds were identified and considered as missing data. The missing data for the "before" period was replaced on the basis of an historical method. For the "after" period, missing data were replaced with a flow/occupancy method. In order to distinguish between the HSR effects and other concomitant aspects, traffic analyses were performed with regard to day and night periods and open and closed HSR. The results show a positive effect of HSR on TT reliability. In addition to the reliability assessment of the HSR, the paper discusses the ability of different indicators known to accurately report the TT reliability improvement. Results show that lower PT increases driver satisfaction and a smaller BT implies greater reliability, even if the PT does not decrease. Comparisons between PTI from different years may be misleading to travelers. In this field test example, reduction in PTI was due to the increase in free flow travel time and not to a decrease of the PT. Increase in free flow time is due to a greater respect of the motorway speed limit imposed by the speed limit campaign. Further to these classical indicators, the paper discusses the robustness of λ Var and λ Skew indicators to measure respectively the width and the skew of TT distribution. It shows the effectiveness of the width indicator and its robustness to indicate both reliability and congestion. Results from this HSR French experiment show however that the λ Skew indicator is not always suitable for the reliability assessment. Indeed, two factors impact traffic in this experiment: on the one hand the HSR implementation, and on the other hand the speed limit campaign, supported by the automatic speed control systems. 
The speed limit affects traffic only for non-congested periods and therefore when HSR is closed. As a consequence it affects the denominator of the λ Skew indicator which depends on this non-congested traffic. The use of this part of the TT distribution as a component of the λ Skew definition affects the quality of this indicator: values of λ Skew reveal more a lower TT median value rather than reliable traffic conditions. As λ Skew is not an effective indicator for reliability assessment, the combined indicator of width and skew, the Ulr indicator is also affected, and cannot therefore be considered as an effective indicator. In the future, the optimization of traffic operations should be developed with respect to, among other criteria, TT reliability in its various forms. ): 1 . 1 Statistical range methods 2. Buffer methods 3. Tardy trip measures 4. Probabilistic measures 1 -Statistical range methods, the main statistical indicators are the Standard deviation (STD) and the coefficient of variation (COV). They show the spread of the variability in TT. FIGURE 1 1 FIGURE 1 The weaving section A4-A86. History-based recovery: For the years 2000, 2001 and 2002, missing data for a given period and lane were substituted, when possible, by data of another year, for the same 6minute interval at approximately the same date and time (same month, same day of the week, same hour). The data from 2000-year and 2001-year were used to reconstitute the 2002 missing data and the same process was used to reconstitute data for years 2000 and 2001).• Occupancy-based recovery. In most cases of missing data, flow and occupancy seem correct whereas speed is missing. Therefore, for the "after" period (2006), a speedrecovery method based on a function of the flow/occupancy ratio was introduced. The breakdown of vehicles x kilometers when HSR is opened/closed shows an open part equal to 19% in the before periods (years 2000-2002) and 20.1% for the after period (year 2006). Note that in 2000, 2001 and 2002, HSR was not installed; the open periods are the periods corresponding to the 2006 periods where HSR was effectively opened. The correspondence between the periods is made on calendar principles. Bhouri & Aron 10 The slight increase in 2006 of the part of drivers traveling during rush hours (20,1% in 2006 against 19% in 2000-2002) corresponds to a shift in 2006 of some drivers from offpeak to peak hours (now less congested). Figure 2 2 Figure 2 Evolution between 2003 and 2006 of vehicles x kilometers (millions) on the HSR site, by year, according to the period in the day, the HSR status. Table 1 : Comparison between empirical 2006-year TTs , and occupancy-based speeds 1 Method Average percentile 50% 90% 95% Night_ closed empirical (**) occupancy 118.3 116.0 113.2 113.3 126.8 124.8 139.3 129.6 Night_open(*) empirical occupancy 125.7 125.2 116.6 116.9 148.4 150.5 185.0 192.0 Day_closed empirical occupancy 156.0 155.8 131.2 130.9 221.7 221.7 246.5 245.8 Day_ open empirical occupancy 137.9 137.0 125.0 124.3 182.4 180.8 206.1 205.1 TTs are in seconds (*) only a few (138) values (**) observations are disregarded when measured average speed is outside the range {5;150} km/h Bhouri & Aron TRB 2013 Annual MeetingPaper revised from original submittal. TRB 2013 Annual MeetingPaper revised from original submittal.Bhouri & Aron
34,728
[ "1278852" ]
[ "222120", "222120" ]
00908897
en
[ "info" ]
2024/03/04 23:41:46
2011
https://hal.science/hal-00908897v2/file/2012_grettia_bhouri_collaborative_agents_for_modeling_traffic_regulation_systems_P.pdf
Neila Bhouri email: [email protected] Flavien Balbo email: [email protected] Suzanne Pinson email: [email protected] Mohamed Tlig Neïla Bhouri Collaborative agents for modeling traffic regulation systems The development of surface public transportation networks is a major issue in terms of ecology, economy and society. Their quality in term of punctuality and passengers services (regularity between buses) should be improved. To do so, cities often use regulation systems at junctions that grant priority to buses. However, most of them hardly take into account both public transport vehicles such as buses and private vehicle traffic. This paper proposes a bimodal urban traffic control strategy based on a multi-agent model. The objective is to improve global traffic, to reduce bus delays and to improve bus regularity in congested areas of the network. In our approach, traffic regulation is obtained thanks to communication, collaboration and negotiation between heterogeneous agents. We tested our strategy on a complex network of nine junctions. The results of the simulation are presented. [START_REF] Balbo | Using intelligent agents for Transportation Regulation Support System design[END_REF] Intr oduction To improve route times of public surface transportation (bus, tramways, shuttles, etc.), cities often use regulation systems at junctions that grant priority to vehicles. These systems are referred to systems equipped with bus priority. The aim of these strategies is to increase the average speed of public transport vehicles as well as other vehicles that has to cross a junction. The use of these systems is efficient when traffic is light or when they need to improve a single congested bus route. However, reducing the time of bus journey, although very important for operating a route, is not the primary factor considered by public transport operators whose obligation is to provide passengers services e.g. keeping interval between buses. To take into account public transport vehicles specificity, TRSS (Transportation Regulation Support Systems) were developed. TRSS systems follow a micro-regulation based approach, i.e. an approach that models the behavior of each bus [START_REF] Balbo | Using intelligent agents for Transportation Regulation Support System design[END_REF], [START_REF] Balbo | Dynamic modeling of a disturbance in a multi-agent system for traffic regulation[END_REF], [START_REF] Cazenave | Monte-Carlo Bus Regulation[END_REF]. One of the weaknesses of these systems is that the private vehicle traffic flow is hardly taken into account. Another weakness is that traffic light management which is one of the key factors of traffic jams and bus delays, is not included in the TRSS systems Our objective is to build a traffic control strategy for bimodal traffic that is able to regulate both private vehicle traffic and public vehicle traffic. Classical control theory used to regulate bimodal traffic (public and private vehicles) is confronted to the modeling problem. Traffic flow can be modeled at a macroscopic or at a microscopic level. Microscopic modeling is timeconsuming, and it is therefore not well adapted to build real time control strategies for wide urban networks [START_REF] Papageorgiou | ITS and Traffic Management[END_REF]. Macroscopic modeling has been used in [START_REF] Bhouri | Constrained Optimal Control strategy for multimodal urban traffic network[END_REF], [START_REF] Bhouri | An intermodal traffic control strategy for private vehicle and public transport[END_REF]. 
However, macroscopic representation of buses does not allow more than an indirect consideration of the intervals. In these systems, the objective was to reduce the time spent in traffic jams so that buses respect their schedule. In [START_REF] Kachroudi | A multimodal traffic responsive strategy using particle swarm optimization[END_REF] a hybrid model was used macroscopic modeling for private vehicles and microscopic modeling for public transport. The complexity of the bimodal traffic regulation strategy shows the limits of these classical modeling approaches. Multi-Agent modeling can be a suitable answer to this complex regulation problem. We note that multiagent systems are increasingly present in the field of traffic regulation [START_REF] Balbo | Using intelligent agents for Transportation Regulation Support System design[END_REF] [2] [START_REF] Mailler | Solving distributed constraint optimization problems using cooperative mediation[END_REF] and [START_REF] Mizuno | Urban Traffic Signal Control Based on Distributed Constraint Satisfaction[END_REF]. The problem of traffic lights coordination on the thoroughfares of the route network has been solved in [START_REF] Oliveira | Using Cooperative Mediation to Coordinate Traffic Lights: a case study[END_REF] [9] [START_REF] France | A multiagent system for optimizing urban traffic[END_REF] and [START_REF] Roozemond | Using intelligent agents for pro-active, real-time urban intersection control[END_REF]. The regulation system presented in [START_REF] Vasirani | A marketinspired approach to reservation-based urban road traffic management[END_REF] is related to traffic assignment using negotiation between vehicles and junctions. We already developed a first prototype that shows promising results [START_REF] Bhouri | A Multi-Agent System to Regulate Urban Traffic: Private Vehicles and Public Transport[END_REF]. The second section focuses on traffic regulation systems and describes our model: the network model and the identification of the agents with a detailed description of agents, their attributes, their objectives, as well as communication and collaboration protocols. The third section provides the first results of the simulation tests carried out on the Jade platform. Finally, we conclude in the fourth section. Network modeling In our model, the urban network is represented by an oriented graph G= (I, A). The nodes {I} represent the junctions (or intersections) and the arcs {A} represent the lanes that connect the junctions. Two intersections can be connected by one or several arcs depending on the number of lanes on the thoroughfare. An arc a i corresponds to a lane. It is characterized by a set of static information (such as its length L i (in meters), its capacity C i (maximum number of vehicles on arc a i in private car unit, PCU), its saturation output D i which is the maximum exit output from the given arc (in PCU/second), and dynamic information i.e. the number of vehicles N i on the arc (in PCU), the state of the traffic lights at the extremity of the arc, green or red, if the light is green, then the vehicles present on the arc can depart. By private car unit (PCU), we mean that all vehicles on the arc are converted to their equivalent in private vehicles, for example a bus is 2.3 PCU depending on its length, a truck can be 2.3 or 4 PCU and so on. A junction is specified by the set E of arcs that enter it and S the set of arcs that leave from it. A junction is managed by a set of stages P. 
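The network description above (an oriented graph whose arcs carry a length, a capacity and a saturation output expressed in PCU, and whose junctions are defined by their entering arcs, leaving arcs and stages) maps directly onto a small data model. The sketch below is illustrative only: the class and field names are ours, not the paper's, and the example junction reuses the arc labels of Figure 1.

```python
from dataclasses import dataclass

@dataclass
class Arc:
    """One lane connecting two junctions."""
    arc_id: str
    length_m: float         # L_i, meters
    capacity_pcu: float     # C_i, maximum load in private car units (PCU)
    sat_flow_pcu_s: float   # D_i, maximum exit output (PCU/second)
    n_vehicles_pcu: float = 0.0   # N_i, current load (PCU)
    green: bool = False     # state of the light at the downstream end of the arc

@dataclass
class Stage:
    """A stage lists the arcs that receive the green light when it is active."""
    stage_id: str
    cleared_arcs: list

@dataclass
class Junction:
    junction_id: str
    entering: list          # set E
    leaving: list           # set S
    stages: list            # set P

# Example: the two-stage junction of Figure 1 (arc labels as in the paper).
j1 = Junction("J1",
              entering=["a1", "a3", "a5", "a8"],
              leaving=["a2", "a4", "a6", "a7"],
              stages=[Stage("P1", ["a1", "a3"]), Stage("P2", ["a5", "a8"])])
print(j1.stages[0])
```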
Each of the stages specifies the list of arcs for which the green light is awarded if the stage is active (see Figure 1). The network is used by a number of bus routes. Each route comprises the number of buses from the same origin and in the same direction, and that service a number of predefined commercial bus stops at regular time intervals. The time spent by a bus at a commercial stop is equal to the pre-set time for passengers to mount, plus additional time to regulate the interval, if required. Agent modeling In order to identify agents and design the MAS, we present an abstraction of the real system; for every entity of the real world is associated an agent in the virtual world to form a Multi-Agent System (MAS). Homogenous agents are called "agent-type". The developed MAS is made up of the following agenttypes: J unction Agent (JA): is the key agent of our architecture. It is in charge of controlling a junction with traffic lights, and of developing a traffic signal plan. The junction agent modifies the planning of the lights according to data sent by approaching buses. Stage Agent (SA): the traffic signal plan is elaborated thanks to the collaboration of the junction and stage agents. Each SA is expected to determine the optimal green light split to clear the waiting vehicles on the arcs concerned by the stage. Thus, whatever the complexity of the junction is (and its physical configuration), it is managed by a set of stage agents interacting with the junction agent in order to develop a plan of actions for the traffic lights. The number of stage agents is given a priori depending of the junction topology. Bus Agent (BA): represents a bus in the real world. It circulates from one arc to another, halts at commercial stops, halts at red lights and obeys the instructions of the bus route agent. The objective of each bus agent is to minimize the time spent at traffic lights (to minimize journey times). Bus Route Agent (BRA): bus agent only provides a local view of their environment and, in particular, only the journey covered by the BA. Thus, local optimization carried out by bus agents can have a negative impact on the route, notably on its regularity (i.e. the formation of bus queues). To tackle this problem, we propose an agent who has a global view of the route agents, and who can control and modify their behavior in order to guarantee an efficient and regular service. Descr iption of agent behavior Bus Agent (BA): In order to minimize the time spent at traffic lights the bus agent interacts with junction agents and its hierarchical superior agent (BRA). All the buses have to provide a regular service and avoid bus queues, in other words, the frequency of buses passing commercial stops must remain stable. To achieve this objective, the BA receives orders from the BRA (for example, stay at the stop for t seconds, if the bus is ahead with respect to the position of the preceding bus). The BA is composed of a data module, which represents its internal state, and a communication module, that enables exchange with other system agents. Behavior of a bus agent: Let t 0 be the entering time of the bus agent which behaves in the following way:  On entering arc i, the BA retrieves information from the arc (the number of vehicles that precede it, the length, capacity, and exit output of the arc). By using these data, the BA calculates a time-space request that is transmitted to the JA in order to prevent an eventual stop at the red light at the following junction. 
The JA then attempts to satisfy the demand (see junction agent below);  When approaching a stop, the BA informs the associated BRA. The bus route agent then calculates the duration of the regulation interval and its level of priority and sends it to the bus. Priority level is a function of the predecessor bus delay: the greater the delay, the higher the priority. The bus must wait during the passenger loading time, as well as the potential regulation time, before leaving the stop. Calculation of a green light request. This calculation is specified by the interval of time during which the green light is granted to the actual arc so that the bus can pass without stopping at the next junction. Let R be the requested interval: R = [t b , t e ], with t b and t e the beginning and ending times of the request interval respectively. The calculation of these times is carried out as follows: the bus enters the arc and finds N v vehicles ahead of it, the vehicles move to the traffic lights lane to wait for the green light thus forming a queue of length F (see Figure 2). In order to continue along its route, the queue of vehicles has to be dispersed before it arrives. The green light should thus be granted at the arc at the instant: t b = t o + T -T F with T be the time necessary for the bus to cover the distance Dis between the beginning of the arc and the end of the queue, and T F be the time necessary to disperse the queue F. This request interval R together with other information (id-number of the bus, its priority, the actual arc of the bus, the next arc to be traveled by the bus) are sent to the JA (at the next junction) who attempts to modify the plan for the lights to satisfy the request. The JA is the key agent of our architecture. The JA supervises the group of stage agents (SA) who collaborate together to establish a plan for traffic lights. This plan will, on one hand, maximize the capacity of the junction and, on the other hand, attempt to satisfy, as far as possible, the request interval of the bus. The JA is characterized by static and dynamic data. The static data represent the constraints that characterized the JA. It contains the maximum value of the traffic light cycle (120 seconds). For each cycle, there is an interval of lost time i.e. the period of orange or all red. The all red light is a period during which all the arcs from the same junction have a red light in order to clear the centre of the junction and thus prevent accidents. This fixed period, in conformity with the architecture of the junction, does not depend on the length of the cycle. It is fixed here to a two second period after each stage. It contains also the set of stages of the junction: P = {P 1 , ..., P m }. The set of stages represents the configuration of the junction (the permitted movements and turns). Determining the stages is a task executed offline by the traffic experts. They are two types of dynamic data, the first is related to the traffic signal plan: it specifies the order of the stages as well as duration of each stage. Between two successive stages, a two second period of all red is imposed. 
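The reservation interval R = [t_b, t_e] introduced above, with t_b = t_0 + T - T_F, can be computed as in the following sketch. The space occupied by one queued PCU (veh_len_m), the bus length and the way t_e is estimated are assumptions of ours; the paper only states that t_e is the time at which the rear of the bus leaves the arc.

```python
def green_light_request(t0, arc_len_m, n_ahead_pcu, sat_flow_pcu_s,
                        bus_speed_ms, veh_len_m=6.0, bus_len_m=12.0):
    """Requested green interval [t_b, t_e] for a bus entering its arc at t0.

    Sketch of the reservation described in the paper: the queue of
    n_ahead_pcu vehicles must be dispersed before the bus reaches it.
    veh_len_m (space taken by one queued PCU) and the t_e estimate are
    assumed values, not taken from the paper.
    """
    queue_len_m = n_ahead_pcu * veh_len_m            # length F of the queue
    dist_to_queue = max(arc_len_m - queue_len_m, 0.0)
    T = dist_to_queue / bus_speed_ms                 # time to reach the queue tail
    T_F = n_ahead_pcu / sat_flow_pcu_s               # time to disperse the queue
    t_b = t0 + max(T - T_F, 0.0)
    t_e = t0 + (arc_len_m + bus_len_m) / bus_speed_ms + T_F   # rear leaves the arc
    return t_b, t_e

print(green_light_request(t0=0.0, arc_len_m=300.0, n_ahead_pcu=10,
                          sat_flow_pcu_s=0.5, bus_speed_ms=8.0))
```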
The second type of dynamic data is the list of reservation requests received from the bus agents. Each request is specified as follows: R = (P_i, t_b, t_e, Priority), where P_i is the stage that will allow the passage of the given bus, t_b the time when the bus is expected to arrive at the traffic light, t_e the time when the rear of the bus leaves the arc, and finally Priority the level of bus priority defined by the bus route agent. At the end of each cycle, the JA triggers the process of calculating the traffic signal plan for the given cycle. This signal plan determines the duration of the green light and the ranking of each stage. When the JA receives a request from a bus, it records it in its database. The JA then decides to accept or to refuse this request at time t_b. The modification of a traffic signal plan following a priority request by a bus is done as follows: 1) extension or reduction of a stage (delay or advance), without exceeding the maximal duration of a stage; 2) introduction of a new stage into the plan. This is explained through the following example (see Figure 3), in which the junction has a plan with four stages and two antagonistic bus routes. In this example, in the initial traffic signal plan, the order of stages is P2 during 20 seconds, P1 and P3 during 30 seconds each, and finally P4 during 20 seconds. At time t1, the JA receives a first request R1 = (P2, t3, t4, 2), meaning that bus n°1 is asking for stage P2: it needs a green light at this stage during the interval [t3, t4] and it has a priority index of 2. The JA does not plan this request immediately; it waits until the start time t3. R1 is studied at time t3 (start time) rather than at time t1 (received time) because, during this delay, the JA may receive more reservations. In this example, at instant t2 the junction agent receives another reservation, R2, from bus n°2, which has a level of priority equal to 4 and requests the stage P4. As it is not possible to satisfy both requests, because they involve two different stages (P4 and P2) over two time intervals that overlap, the JA gives the stage to the bus with the highest priority index. In our example, R1 is refused because reservation R2 has the higher priority. This planning process is fundamental to regulating bus intervals. 1- The JA interacts with its stage agents (SA); let us call this set of SAs the collab_group. 2- Initialization of variables: C = CycleMax, where C is the size of the calculated cycle (in the example C = 120 s); t = 0. 3- The JA sends an inform message to the stage agents to notify them that the protocol to calculate the traffic signal plan has started. 4- The JA sends a request message to the stage agents asking them for the time necessary to clear all the vehicles from their stages, beginning at instant t. 5- Each stage agent i calculates its desired green light duration d_i and an index that measures the urgency I_i of the stage, and sends them to the manager (see below for the calculation of d_i and I_i). Calculation of the desired duration of the green light. The optimal duration of green light is computed by the following formula: $T = \max_{i=1,\dots,m} T_i$ with $T_i = w_i \,\frac{N_i}{D_i} + (1 - w_i)\,\frac{N_i \, L_i}{C_i \, V_i}$, where m is the number of entering arcs at this stage, T_i the time necessary to clear arc i, L_i its length (meters), V_i the average speed (meters/second), N_i the number of vehicles, D_i the saturation flow and C_i the capacity of arc i. The number of vehicles N_i and the capacity C_i are expressed in private car units (PCU). The weight w_i = N_i/C_i ∈ [0,1] indicates the degree of congestion of the arc: when the arc is congested, w_i = 1, and only the first part of the equation is used.
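A minimal rendition of the clearing-time formula above, in which T_i combines the saturated term N_i/D_i and the unsaturated term N_i·L_i/(C_i·V_i) weighted by the congestion degree w_i = N_i/C_i; the dictionary keys and the example figures are illustrative, not taken from the paper.

```python
def desired_green_duration(arcs):
    """Desired green time for a stage: the longest clearing time T_i over its
    entering arcs, with T_i = w_i*N_i/D_i + (1 - w_i)*N_i*L_i/(C_i*V_i)
    and w_i = N_i/C_i.  `arcs` is a list of dicts; the keys are our naming."""
    T = 0.0
    for a in arcs:
        w = min(a["N"] / a["C"], 1.0)        # congestion degree w_i
        t_clear = w * a["N"] / a["D"] + (1 - w) * a["N"] * a["L"] / (a["C"] * a["V"])
        T = max(T, t_clear)
    return T

stage_arcs = [
    {"N": 20.0, "C": 40.0, "D": 0.5, "L": 300.0, "V": 10.0},
    {"N": 25.0, "C": 40.0, "D": 0.5, "L": 250.0, "V": 10.0},
]
print(round(desired_green_duration(stage_arcs), 1), "seconds of green requested")
```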
Urgency index of a stage. To award priority to a bus, the urgency index of a stage j is defined so that the higher the index, the greater the urgency of the stage: $I_j = \sum_{i=0}^{m} \left( e^{b_i} + e^{w_i} \right)$, where w_i is the parameter indicating the degree of congestion of arc i, b_i the number of buses present on arc i, m the number of arcs entering via stage j, and e the Euler constant. Note that if there are several buses on arc i (b_i > 1), the term e^{b_i} is dominant and therefore gives priority to stages with buses; if b_i = 0, the degree of congestion is taken into account instead. Bus Route Agent (BRA). The role of the route agent is to supervise bus agents so as to prevent purely local regulation and the creation of bus queues. In other words, this agent can modify the behavior of bus agents in two different ways: 1) directly, by keeping those buses that are ahead of schedule compared to the preceding ones at the bus stop for a certain period of time; 2) indirectly, by modifying bus priorities. This agent has a global view of the route it operates on, and can therefore detect bus queues and react to prevent queue formation. Internal state of the route agent. The route agent encompasses the following data: 1) the set of arcs traveled by the buses of its route; 2) the set of stops on the route: for each stop, its position and the distance separating it from the next stop; 3) the set of buses on the route; 4) the frequency of buses introduced onto the route. For two consecutive stops A_{i-1} and A_i, the route agent maintains the journey time tt_j of the last bus. This helps to follow the bus journey and to determine whether a bus is ahead or late compared to the bus immediately preceding it. Behavior of the route agent. When a bus j reaches a stop, the time tt_j taken to cover the distance separating the two stops A_{i-1} and A_i is transmitted to the route agent. The route agent then compares tt_j to the time tt_{j-1} taken by the preceding bus and consequently decides whether the bus is ahead or late. The route agent computes the new priority of the bus agent as well as the length of time the bus should wait at the commercial stop if it is ahead [START_REF] Bhouri | A Multi-Agent System to Regulate Urban Traffic: Private Vehicles and Public Transport[END_REF]. Experimentation and results. To test our bimodal control strategy, we developed a Multi-Agent System prototype on the JADE platform (Java Agent Development Framework, jade.tilab.com). JADE offers Java middleware based on a peer-to-peer architecture with the overall aim of providing runtime support for agents. We tested the strategy on a network of nine intersections (Figure 6): • the distance between two adjacent junctions belongs to [200, 400] meters; • each section comprises one or two lanes; • the saturation flow, which is the maximum exit output of the arcs, is the same for each arc and equal to 0.5 vehicle/second; • at each entry onto the network, we installed a source that generates vehicles at a frequency F ∈ [4 s ... 10 s]; • some of the junctions have two stages while others have three; • two bus routes enter the network: for Bus Route 1, the frequency of the generated buses is 50 seconds, and for Bus Route 2, the frequency is 40 seconds. Knowing that bus priority can have a negative impact on global traffic conditions and hence on bus traffic itself, we tested our MAS strategy in two cases. In the first case, we use the MAS strategy as presented above.
In the second case, we inhibit the bus priority function. This last case will be called the " without priority" strategy. These delays correspond to the sum of time lost by all buses (resp. vehicles) during stops at traffic lights. Figure 7.a shows that, on the simulation period, the MAS strategy improves bus travel time (cumulated bus delays) of 85% compared to the Fixed Time Strategy (FTS) whereas the "Without priority" strategy improves buses traffic of only 76%. Furthermore, this last FTS strategy doesn't reduce private vehicles delays (see Figure 7.b). Figure 7.b shows interesting results: "without priority" strategy and MAS strategy give same cumulated delays for private vehicles. They both improve vehicles delays by 30%. Thanks to the MAS strategy, the average lost time by bus is equal to 23 seconds, when it is equal to 2.6 minutes with the Fixed Time strategy. Considering these two results, we can conclude that MAS strategy is the best one since it improves significantly buses traffic as well as private vehicles traffic, and, using bus priority, helps traffic regulation. Simulation r esults at junction level In this section, we study the MAS strategy at a microscopic level: the junction level. We choose J2 and J3 junctions (see Figure 6). Each of them has 3 phases. In J2, bus lines use arcs with different phases. This means that the two bus lines are conflicting; if two buses at the same time ask for priority, junction J2 will have to solve a conflicting situation as explained in Figure 3. In J3, bus lines are not conflicting although junction J3 has also 3 phases. Buses run during the same phases. Table 1: Average travel time (TT) between the two bus stops around the junctions J2 and J3 for BR1 and BR2. TT is expressed in seconds. Table 1 gives the average travel time for vehicles on BR1, first between bus stops A12 and A13, showing the impacts of the three strategies on junction J2 traffic, and then between stops A13 and A14, showing the situation on junction J3. For vehicles on BR2, travel time is measured between a23 and a24 and then between a22 and a23. One can notice that the MAS strategy improves bus travel time even for junction J3, where as explained before, buses can cross J3 on two stages among the three ones available and hence can cross quickly. We can also notice that MAS strategy succeed to suppress the delays on this junction, as if there is no congestion, the minimum time needed to travel the 300 meters separating the two bus stops a22 and a23 is 27 seconds. On junction J2, where buses can run only on one stage, benefits on travel time are more important. They are equal to 27% for BR1 and 28% for BR2. 9) gives bus delays for BR1 (respectively BR2) buses at junction J2 and J3. The X axis represents the bus numbers generated by the simulation. We can see on these figures that another advantage of MAS strategy is that it preserves regularity of travel time (delays stay nearly constant) across these junctions. Conclusion In this paper, we have developed a bimodal traffic control strategy based on a multi-agent system. Unlike other approaches, our model takes into account both public transport vehicles such as buses and private vehicle traffic and studies the regulation in a whole network. The objective of this research was to improve global traffic, to reduce bus delays and to improve bus regularity in congested areas (keeping regular interval between buses) of the network. 
In our approach, traffic regulation is obtained thanks to communication, collaboration and negotiation between heterogeneous agents at different levels of abstraction and at different level of granularity (microscopic vs macroscopic level). Firstly, we have shown that classical methods of traffic regulation present several weaknesses. Secondly, we have presented our multi-agent strategy that computes traffic signals plans based on the actual traffic situation and on priority given to buses. Thirdly, we have run a simulation prototype on the JADE platform. The results show that our MAS strategy with priority improves both buses travel time and buses regularity. Our results also show that this bimodal MAS strategy improves buses as well as private vehicles traffic and reduces bus delays. Further work needs to be done: a more realistic network should be defined in the simulation run and more validation and testing should be undertaken with the definition of several indicators. It would also be interesting to have more testing to find Pareto front and multi-criteria optimization in order to get equilibrium between public transport delays and private traffic delays 5 Refer ences Figure 1 : 1 Figure 1: Example of a junction with 4 arcs and two stages P={P1, P2}. P1 allows for the clearing of the arcs a1 and a3, because the entry flow a1 and a3 can leave the junction at the same green light period. Simi- Figure 2 : 2 Figure 2: Reservation of green light duration by a bus J unction Agent (J A): The JA is the key agent of our architecture. The JA supervises the group of stage Figure 3 : 3 Figure 3: Example of a traffic signal plan and its modification Calculation of a traffic signal plan. The plan is calculated through the collaboration of the junction agent (JA) and the Stage Agents (SAs). The JA plays the role of a manager in supervising the SAs that act as participants. The protocol follows by the junction agent JA is as follows: 6 - 6 Figure4). 7 - 7 JA selects P j , the most urgent stage; let d j be its duration. 8-JA sends an accept message to the stage agent in charge of operating this stage. 9-JA withdraws the corresponding stage agent from collab_group. 10-JA updates the variables C= C-d; t= t+ d j ; 11-If collab_group is not empty, the protocol returns to step 4. Figure 4 : 4 Figure 4: Collaboration protocol in conflict resolution Conflict resolution. When the sum of green light durations requested by stage agents exceeds the size of the accepted value of the cycle, the JA must restore this sum to the maximal value of the cycle. To achieve a Δt reduction, at minimum cost, the manager negotiates with the JA using a Contract Net Protocol. The Figure 5 : 5 Figure 5: Supervision of bus agents (BA) by the route agent (BRA). Figure 6 : 6 Figure 6: The simulated network 3.1 Simulation r esults at networ k level Figures 7 gives simulation results of the three strategies for very heavy traffic conditions.Figure 7.a shows the recorded delays for buses with the three control strategies and Figure 7.b shows the same kinds of curves for private vehicles. Figures 7 gives simulation results of the three strategies for very heavy traffic conditions. Figure7.a shows the recorded delays for buses with the three control strategies and Figure7.b shows the same kinds of curves for private vehicles. Figure 7 . 7 Figure 7.a: Buses cumulated delays Figure 8 . 
Figure 8.a: Buses travel time on BR1 from A12 to A13. Figure 8 (respectively Figure 9) gives bus delays for BR1 (respectively BR2) buses at junctions J2 and J3. The X axis represents the bus numbers generated by the simulation. We can see on these figures that another advantage of the MAS strategy is that it preserves the regularity of travel time (delays stay nearly constant) across these junctions.
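Taken together, the urgency index and the plan-building protocol (steps 1-11 above, with a reduction of the requested green times when they exceed the cycle) can be condensed into the sketch below. It is a single-process simplification: there is no real agent messaging, and a proportional cut stands in for the Contract-Net negotiation whose bid cost is the number of penalized buses.

```python
import math

def stage_urgency(arcs):
    """I_j = sum_i (e**b_i + e**w_i) over the arcs entering the stage,
    with b_i the number of buses on arc i and w_i = N_i/C_i its congestion."""
    return sum(math.exp(a["buses"]) + math.exp(min(a["N"] / a["C"], 1.0))
               for a in arcs)

def build_signal_plan(stages, cycle_max=120.0, all_red=2.0):
    """stages: list of dicts holding the desired green 'd' and the entering
    'arcs' of each stage.  Returns an ordered list of (stage_id, green_s).
    The proportional cut below replaces the Contract-Net negotiation."""
    pending = [dict(s, urgency=stage_urgency(s["arcs"])) for s in stages]
    budget = cycle_max - all_red * len(pending)      # lost time after each stage
    total = sum(s["d"] for s in pending)
    if total > budget:                               # conflict: reduce durations
        for s in pending:
            s["d"] *= budget / total
    plan = []
    while pending:                                   # steps 7-11: most urgent first
        best = max(pending, key=lambda s: s["urgency"])
        plan.append((best["id"], round(best["d"], 1)))
        pending.remove(best)
    return plan

print(build_signal_plan([
    {"id": "P1", "d": 45.0, "arcs": [{"buses": 0, "N": 30.0, "C": 40.0}]},
    {"id": "P2", "d": 60.0, "arcs": [{"buses": 1, "N": 20.0, "C": 40.0}]},
    {"id": "P3", "d": 35.0, "arcs": [{"buses": 0, "N": 10.0, "C": 40.0}]},
]))
```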
28,547
[ "1278852" ]
[ "144000", "144000", "989", "144000" ]
00908083
en
[ "info" ]
2024/03/04 23:41:46
2011
https://hal.science/hal-00908083v2/file/PAAMS_BBP_mars_2011.pdf
Neïla Bhour email: [email protected] Flavien Balbo email: [email protected] Suzanne Pinson email: [email protected] Towar ds Ur ban Tr affic Regulation using a Multi-Agent System Abstr act: This paper proposes a bimodal urban traffic control strategy based on a multi-agent model. We call bimodal traffic a traffic which takes into account private vehicles and public transport vehicles such as buses. The objective of this strategy is to improve global traffic and reduce the time spent by buses in traffic jams so that buses cope with their schedule. Reducing bus delays is done by studying time length of traffic lights, giving priority to buses, more precisely to buses running late. Regulation is obtained thanks to communication, collaboration and negotiation between the agents of the system. The implementation has been done using the JADE platform. We have tested our strategy on a small network of six junctions. The first results of the simulation are given. They show that our MAS control strategy improves both bus traffic and private vehicle traffic, decreases bus delays and improve its regularity compared to a classical strategy called fixedtime control. INTRODUCTION To improve route times of public surface transportation (bus, tramways, shuttles, etc.), cities often use regulation systems at junctions that grant priority to vehicles. These systems are referred as systems equipped with bus priority. The aim of these strategies is to increase the average speed of all vehicles as well as public transport vehicles needed to cross a junction. The use of these systems is efficient when traffic is light or when they are used to improve a single congested bus route. However, reducing the time of bus journey, although very important for operating a route, is not the primary factor considered by public transport operators whose obligation is to provide the passengers services i.e. keeping interval between buses. In order to take into account the public transport vehicles specificity, TRSS (Transportation Regulation Support Systems) were developed. TRSS systems follow a micro-regulation based approach i.e. modeling the behavior of each bus [START_REF] Balbo | Using intelligent agents for Transportation Regulation Support System design[END_REF], [START_REF] Cazenave | Monte-Carlo Bus Regulation[END_REF]. One of the weaknesses of these systems is that the private vehicle traffic flow is hardly taken into account. If it is taken into account, this is only as an external parameter that modify the route times of the buses. Another weakness is that traffic light management, which is one of the key factors of traffic jam and bus delays, is not included in the TRSS systems That's why our objective is to build a traffic control strategy for the bi-modal traffic, able to regulate both the private vehicles traffic and the public transport. Classical control theory to regulate the bi-mode traffic (public transport and private vehicles) is confronted with the modeling problem. Traffic flow can be modeled in a macroscopic or microscopic level. Microscopic modeling is timeconsuming, and it is therefore not well adapted to build real time control strategies for wide urban networks. Macroscopic modeling has been used in by [START_REF] Bhouri | An intermodal traffic control strategy for private vehicle and public trans-port[END_REF] and [START_REF] Bhouri | Constrained Optimal Control strategy for multimodal urban traffic network[END_REF]. 
However, the macroscopic representation of buses does not allow more than an indirect consideration of the intervals. In these systems the problematic was to reduce the time spent in traffic jams so that buses cope with their bus schedule. In [START_REF] Kachroudi | A multimodal traffic responsive strategy using particle swarm optimization[END_REF] a hybrid model was used : macroscopic modeling for private vehicles and microscopic for the public transport The complexity of the strategy shows the limits of these classic modeling approaches to build a bimodal traffic regulation strategy. Multi-Agent modeling can be a suitable answer to this scaling problem. We can notice that multi-agent systems are increasingly present in the field of traffic regulation [START_REF] Bazzan | Opportunities for multiagent systems and multiagent reinforcement learning in traffic control[END_REF], [START_REF] Mailler | Solving distributed constraint optimization problems using cooperative mediation[END_REF]. The problem of traffic lights coordination on the thoroughfares of the route network has been solved in [START_REF] Oliveira | Using Cooperative Mediation to Coordinate Traffic Lights[END_REF], [START_REF] Roozemond | using intelligent agents for pro-active, real-time urban intersection control[END_REF]. We already developed a first prototype which shows promising results [START_REF] Bhouri | A Multi-Agent System to Regulate Urban Traffic: Private Vehicles and Public Transport[END_REF]. The second section focuses on traffic regulation systems and describes our model: the network model and the identification of the agents with a detailed description of agents, their attributes, their objectives, as well as communication and collaboration protocols. The third section provides the first results of the simulation tests carried on the Jade platform. Finally, we conclude in the fifth section. Network modeling In our model, the urban network is represented by an oriented graph G= (I, A). The nodes {I} represent the junctions (or intersections) and the arcs {A} represent the lanes that connect the junctions. Two intersections can be connected by one or several arcs depending on the number of lanes on the thoroughfare. An arc corresponds to a lane. It is characterized by a set of static information (such its length, its capacity, its saturation output which is the maximum output of exits from the given arc) and dynamic information (.the number of vehicles on the arc, the state of the traffic lights at the extremity of the arc: green or red. If the light is green then the vehicles present on the arc can depart). A junction is specified by the set of the arcs that enter it E and the set of the arcs that leave it S. A junction is managed by a set of stages P. Each of the stages specifies the list of arcs for which the green light is awarded if the stage is active (see figure 1). The network is used by a number of bus routes. Each route comprises the number of buses of the same origin and in the same direction, and which services a number of predefined commercial bus stops at regular time intervals. The time spent by a bus at a commercial stop will be equal to the pre-set time for passengers to mount, plus additional time to regulate the interval, if required. Fig. 1. Example of a junction with 4 arcs and two stages P={P1, P2}. P1 allows for the clearing of the arcs a1 and a3, because the entry flow a1 and a3 can leave the junction at the same green light period. Similarly, P2 clears arcs a5 and a8. 
The entries and exits of the junction are respectively E={a1, a3, a5, a8} and S={a2, a4, a7, a6}. Agent modeling In order to identify agents and design the MAS we represent an abstraction of the real system; for every entity of the real world is associated an agent in the virtual world to form a Multi-Agent System (MAS). Homogenous agents are called "agent-type". The developed MAS is made up of the following agent-types: J unction Agent (JA): is the key agent of our architecture. It is in charge of controlling a junction with traffic lights, and of developing a traffic signal plan. The junction agent modifies the planning of the lights according to data sent by approaching buses. Stage Agent (SA): the traffic signal plan is elaborated thanks to the collaboration of the junction and stage agents. Each SA is expected to determine the optimal green light split to clear the waiting vehicles on the arcs concerned by the stage. Thus, whatever the complexity of the junction is (and its physical configuration), it is managed by a set of stage agents interacting with the junction agent in order to develop a plan of actions for the traffic lights. Bus Agent (BA): represents a bus in the real world. It circulates from one arc to another, halts at commercial stops, halts at red lights and obeys the instructions of the bus route agent. The objective of each bus agent is to minimize the time spent at traffic lights (i.e. to minimize journey times). Bus Route Agent (BRA): the bus agent only provides a local view of their environment and, in particular, only the journey covered by the BA. Thus, local optimization carried out by bus agents can have a negative impact on the route, notably on its regularity (i.e. the formation of bus queues). To tackle this problem, we propose an agent who has a global view of the route agents, and who can control and modify their behavior in order to guarantee an efficient and regular service. Description of agent behavior Bus Agent (BA): In order to minimize the time spent at traffic lights the bus agent interacts with junction agents and its hierarchical superior agent (BRA). All the buses have to provide a regular service and avoid bus queues, in other words, the frequency of buses passing commercial stops must remain stable. To achieve this objective, the BA receives orders from the BRA (for example, stay at the stop for t seconds, if the bus is ahead with respect to the position of the preceding bus). The BA is composed of a data module, which represents its internal state, and a communication module, which enables exchange with other system agents. Behavior of a bus agent: Let t 0 be the entering time of the bus agent which behaves in the following way:  On entering arc i, the BA retrieves information from the arc (the number of vehicles that precede it, the length, capacity, and exit output of the arc). By using this data, the BA calculates a time-space request, which is transmitted to the JA in order to prevent an eventual stop at the red light at the following junction. The JA then attempts to satisfy the demand (see junction agent below);  When approaching a stop, the BA informs the associated BRA. The bus route agent then calculates the duration of the regulation interval and its level of priority and sends it to the bus. The bus must wait during the passenger loading time, as well as the potential regulation time, before leaving the stop. Calculation of a green light request. 
This calculation is specified by the interval of time during which the green light is granted to the actual arc so that the bus can pass without stopping at the next junction. Let R be the requested interval: R = [t b , t e ], with t b and t e be the beginning and ending times of the request interval respectively. The calculation of these times is carried out as follows: the bus enters the arc and finds N v vehicles ahead of it, the vehicles move to the traffic lights lane to wait for the green light thus forming a queue of length F In order to continue along its route, the queue of vehicles has to be dispersed before it arrives. The green light should thus be granted at the arc at the instant: t b = t o + T -T F with T be the time necessary for the bus to cover the distance between the beginning of the arc and the end of the queue, and T F be the time necessary to disperse the queue F. This request interval R together with other information (number of bus, its priority, the actual arc of the bus, the next arc to be traveled by the bus) are sent to the JA (at the next junction) who attempts to modify the plan for the lights to satisfy the request. J unction Agent (J A): The JA is the key agent of our architecture. The JA supervises the group of stage agents (SA), who collaborate together to establish a plan for traffic lights, which will, on one hand, maximize the capacity of the junction and, on the other hand, attempt to satisfy, as far as possible, the request interval of the bus. The JA is characterized by static and dynamic data. The static data represents the constraints which characterized the JA. It contains the maximum value of the traffic light cycle (120 seconds). For each cycle, there is an interval of lost time i.e. the period of orange or all red. The all red light is a period during which all the arcs from the same junction have a red light in order to clear the centre of the junction and thus prevent accidents. This fixed period, in conformity with the architecture of the junction, does not depend on the length of the cycle, it is fixed here to a two second period after each stage. It contains also the set of stages of the junction: P = {P 1 , ..., P m }. The set of stages represents the configuration of the junction (the permitted movements and turns). Determining the stages is a task executed offline by the traffic experts. They are two types of dynamic data, the first is related to the traffic signal plan: it specifies the order of the stages as well as duration of each stage. The second is related to the list of received request data from the bus agents: each request is specified as follows: R = (P i , t b , t e , Priority), where P i is the stage that will allow the passage of the given bus, t b the time when the bus expects to arrive at the traffic light, t e , the time when the rear of the bus leaves the arc, and finally 'Priority' is the level of bus priority defined by the bus route agent. At the end of each cycle, the JA triggers the process of calculating the traffic signal plan for the given cycle. This plan determines the duration of the green light and the ranking of each stage. When the JA receives a request, it records it in the database. The JA decides to accept or to refuse this request at time t b . The modification of a traffic signal plan following a priority request by a bus is as follows: 1) Extension of a stage (delay or advance), without exceeding the maximal duration of a stage; 2) Introducing a new stage into the plan. 
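When several pending reservations R = (P_i, t_b, t_e, Priority) overlap in time but ask for different stages, only one of them can be served; in the 2012 companion version of this work the junction agent keeps the request with the highest priority. A minimal sketch of such an arbitration follows; the tuple layout and the tie-breaking rule are our own choices.

```python
from typing import NamedTuple

class Reservation(NamedTuple):
    stage: str
    t_b: float       # requested start of green
    t_e: float       # requested end of green
    priority: int

def overlaps(r1, r2):
    return r1.t_b < r2.t_e and r2.t_b < r1.t_e

def arbitrate(pending):
    """Keep, among mutually overlapping requests for different stages, only the
    highest-priority one; ties are broken by earliest start (our choice)."""
    kept = []
    for r in sorted(pending, key=lambda r: (-r.priority, r.t_b)):
        if all(not overlaps(r, k) or k.stage == r.stage for k in kept):
            kept.append(r)
    return kept

r1 = Reservation("P2", t_b=30.0, t_e=45.0, priority=2)
r2 = Reservation("P4", t_b=35.0, t_e=50.0, priority=4)
print(arbitrate([r1, r2]))       # only the priority-4 request survives
```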
Calculation of a traffic signal plan. The plan is calculated thanks to the collaboration of the junction agent (JA) and the Stage Agents (SAs). The JA plays the role of a manager in supervising the SAs, which act as participants. The JA begins by forming a group of collaborators called collab_group including the list of stage agents that needs to be managed. JA initializes the variables: C = CycleMax, and t=0. Variable C controls the size of the calculated cycle. JA sends a message to the stage agents to inform them of the protocol initiated to calculate the traffic light plan. JA sends a message request to the agents of the col-lab_group asking them for the time necessary to clear all the vehicles from their stages, beginning at instant t. Every agent, i, of the collab_group calculates its desired green light duration d i and an index that measures the urgency I i of the stage, and sends them to the manager. When the manager receives all the responses, the sum d of durations is calculated. If d > C then the manager has to solve a conflict (i.e. the size of the cycle exceeds the maximum). Conflict is solved when d previously calculated becomes less or equal to C: d ≤ C. The manager selects the most urgent stage, that is P j , its duration is d j . It sends an accept message to the stage agent in charge of operating this stage; It withdraws the corresponding stage agent of collab_group; It updates the variables C=C-d j , t=t+d j ; finally JA sends request as long as collab_group is not empty. Conflict resolution. When the sum of green light durations requested by stage agents exceeds the size of the accepted value of the cycle, the manager must restore this sum to the maximal value of the cycle. To achieve a Δt reduction, the manager negotiates with the JAs using a Contract net Protocol. The cost of the offer is the number of buses penalized if the stage agent reduces its duration of Δt. Stage Agent (SA): This agent has a collection of both static and dynamic data that represents its internal state. The Static data are related to the list of entry arcs, the set of arcs authorized to clear if the stage is active (or green) Dynamic data are related to 1) the state of the stage: active or inactive; 2) the duration of green light attributed to the stage; 3) the starting time of stage execution. 'Active' means that the traffic lights controlling the arcs concerned by this stage are green. The vehicles are therefore authorized to depart. Behavior of the stage agent. The SA participates to the calculation of the traffic signal plan, and is in charge of fixing the optimal duration of green light for the given stage. When the stage agent is asked about the desired duration of green light by the junction agent, this duration d i and an index I i that measures the urgency of the stage are computed and transmitted to the junction agent. If the stage agent receives confirmation from the junction agent, the stage agent stops the process. If the stage agent receives a cfp (call for propose) with a cost c, it computes an offer and sends it to the junction agent. Calculation of the desired duration of the green light The optimal duration of green light is computed by the following formula: { } i m i T T ,..., 1 max = = , i i i i i i V * C L * N ). 
1 ( D N i i i w w T - + = with m is the number of entering arcs at this stage, T i the time necessary to clear arc i, L i the length (meter), V i the average speed (meter/second), N i the number of vehicles, D i is the saturation flow and C i the capacity of the arc i. The number of vehicles N i and the capacity Ci are expressed in private car unit (pcu) which means that all vehicles on the arc are converted to their equivalent on private vehicles, for example a bus is 2.3 pcu, depending on its length a track can be 2 3 or 4 pcu, etc. D i is a traffic flow and hence expressed in pcu/second. w i =N i /C i ∈[0,1] is a parameter which indicates the degree of congestion of the arc. When arc is congested, w i =1, which means that only the first part of the equation is used. Urgency index of a stage. In order to award priority to a bus, the urgency index of a stage j is defined by the fact that the higher the index, the greater the urgency of the stage: Bus Route Agent (BRA).The role of the route agent is to supervise bus agents so as to prevent a local level regulation and the creation of bus queues. In other words, this agent can modify the behavior of bus agents in two different ways: 1) Directly: by keeping those buses, which are ahead in the plan compared to the preceding ones, at the bus stop for a certain period of time; 2) Indirectly: by modifying bus priorities. This agent has a global view of the route it operates on, and can therefore detect bus queues and react to prevent queue formation Internal state of the route agent. The route agent encompasses the following data: 1) the set of arcs traveled by the bus on its route; 2) The set of stops on the route: for each stop, its position, and the distance separating it from the next stop; 3) The set of buses on the route; 4) The frequency of buses introduced onto the route. For two consecutive stops A i and A j , the route agent maintains the journey time d i,j of the last bus. This helps to follow the bus journey and to calculate whether the bus is ahead or late compared to the bus immediately preceding it. Behavior of the route agent. When a bus agent moves to a stop, the time t taken to cover the distance L i,i-1 which separates the two stops A i and A i-1 , is transmitted to the route agent. The route agent then compares t to the time (d i,j ) taken by the preceding bus and consequently decides whether the bus is ahead or late. The route agent computes the new priority of the bus agent as well as the length of time the bus should wait at the commercial stop if it is ahead [START_REF] Bhouri | A Multi-Agent System to Regulate Urban Traffic: Private Vehicles and Public Transport[END_REF]. Results In order to test our bimodal control strategy, we have developed a Multi-Agent System prototype on the JADE 1 platform (Java Agent Development Framework). JADE offers Java middleware based on a peer-to-peer architecture with the overall aim to provide a runtime support for agents (JADE, 2009). We have tested the strategy on a small network of six intersections (figure 7):  The distance between two adjacent junctions belongs to [200,400] meters.  Each section comprises one or two lanes.  The saturation flow, which is the maximum exit output of the arcs, is identical for each arc and equal to 0.5 vehicles/second.  At each entry onto the network we have installed a source that generates vehicles at a frequency F∈ [4 s ... 10 s].  Some of the junctions have two stages and the others have three stages.  Two buses enter the network. 
For Bus 1, the frequency of the generated buses is 80 seconds and 180 seconds for Bus 2. We have compared the developed MAS strategy to a fixed time strategy with 30 seconds for each stage. We have run the simulation with these two strategies and for half-hour simulation time. Without priority With priority Bus number Fig. 8. Buses travel time with and without bus priority Figure 8, depicts buses travel time : the higher curve shows buses travel time between the stops BR1_A1 and BR1_A2 when buses do not request priority from the junction J1; the lower curve show buses travel time between the two bus stops J2 J1 BR1_A2 and BR1_A3 when buses are asking for priority at junction J2. We can notice buses travel time improves and becomes very regular when bus priority is taken into account. Figure 9 gives results of the two strategies for very heavy traffic conditions. These delays correspond to the sum of time lost by all buses (resp. vehicles) at stops on the traffic lights. As shown on figure 9, the MAS strategy improves both traffic of buses and traffic of private vehicles. As we can see, there is a decrease of 38% on lost time spent by buses on traffic light; for the private vehicles, we got a decrease of about 51%. CONCLUSION In this paper, we have developed a bimodal traffic control strategy based on a multi-agent system. It takes into account two transportation modes: public transportation i.e. buses and private vehicles. The originality of this strategy is the application of the new information and agent technologies, the entities representing the urban network can communicate among themselves and negotiate in order to solve traffic regulating problems. First, we have shown that classical methods of control systems of traffic regulation present several weaknesses: at a macroscopic level, they do not take into account mixed traffic and does not allow for the regulation of intervals between buses Furthermore computations at a microscopic level are time-consuming specially for regulating large networks. In a second part, we have presented the multi-agent strategy that computes traffic signals plans based on the actual traffic situation and on the priority needed by the buses. The priority is given to those buses that do not deteriorate the intervals between the vehicles on the same route. In the third part, we have run a simulation prototype on the JADE platform. Comparison between buses travel time with and without bus priority shows the capacity of the priority method we developed to improve both the travel time and the regularity of buses. Results also show that this bimodal MAS strategy improves conditions of global traffic and reduces bus delays. More work should be done: a more realistic network should be defined in the simulation run and more validation and more testing should be undertaken with the definition of several indicators. i the parameter indicating the degree of congestion of arc I, b i is the number of buses present on arc I, m is the number of arcs entering via stage j and e is the Euler constant in our example. Fig. 7. The simulated network Figure 9 . 9 Figure 9 gives results of the two strategies for very heavy traffic conditions.Figure 9.a shows recorded delays for buses with the two control strategies and figure 9.b shows the same kinds of curves for private vehicles. Fig. 9 . 9 Fig. 9.a. Buses cumulated delays Fig. 9.b Private vehicles cumulated delays
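The route agent's regulation rule (compare a bus's stop-to-stop travel time with that of its predecessor, hold it at the stop if it runs ahead, raise its priority if it runs late) can be sketched as below. The papers do not give the exact mapping from lateness to priority levels nor the exact hold duration, so both are assumed, illustrative choices.

```python
def regulate_bus(tt_current, tt_previous, base_priority=2, max_priority=5):
    """Return (hold_time_s, new_priority) for a bus arriving at a stop.

    tt_current / tt_previous: travel times of this bus and of the bus ahead of
    it over the same inter-stop section.  Sketch only: the hold duration and
    the lateness-to-priority mapping are assumed, illustrative rules."""
    advance = tt_previous - tt_current      # > 0: bus is ahead of its predecessor
    if advance > 0:
        return advance, base_priority       # hold at the stop to restore the interval
    lateness = -advance
    priority = min(max_priority, base_priority + int(lateness // 30))
    return 0.0, priority

print(regulate_bus(tt_current=95.0, tt_previous=120.0))   # ahead: hold 25 s
print(regulate_bus(tt_current=150.0, tt_previous=110.0))  # late: higher priority
```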
24,368
[ "1278852" ]
[ "144000", "989", "989" ]
01473376
en
[ "spi" ]
2024/03/04 23:41:46
2008
https://hal.science/hal-01473376/file/2008_gretia_kachroudi_multimodal_model_urban_traffic_control_policy_P.pdf
S Kachroudi email: [email protected] A multimodal Keywords: Traffic control, Predictive model, Multimodal urban network come A multimodal model for an urban traffic control policy Sofiane KACHROUDI * , , Neila BHOURI * INTRODUCTION In last few decades, major improvements were realized to deal with the complex problem of the traffic control in urban areas. The main way to regulate the urban traffic is via traffic lights. To quote only the best known, TUC, PRODYN, UTOPIA, SCOOTS, CRONOS are strategies that acts on traffic lights using optimization methods to regulate the traffic of private vehicle. However most of these strategies regulate the traffic of public transport vehicle using only rules-based techniques. In other terms, these strategies do not consider the public transport as a full-fledged mode as the private vehicle mode. In addition, some strategies like PRODYN have a local point of view that is applicable only for single junctions. Considering the public transport vehicle as an urban mode in the optimization process is necessary for a best traffic regulation in a whole urban network. This is the aim of our work. To optimize or to control a traffic process, it is obvious that one have to dispose of a mathematical model of this complex process. The fact that these two models would be used for an automatic traffic control impose to have simple models that do not require great time calculations. The private vehicle model is based essentially on the model developed for TUC strategy. It is a macroscopic model. In the opposite, the public transport vehicle mode model is semi-macroscopic. The goal of the paper is to depict a multimodal model of urban traffic used as predictive model for a traffic control strategy. The paper is organised as follows. Firstly, a description of urban network is given. Secondly, we introduce the general architecture of the traffic control based on the model predictive control (MPC). The detailled description of the model is given in the next section. For the private vehicle mode, the model is based essentially on the store-and-forward model. For the public transport mode, we present an innovative model based on mean behavior of public transport vehicles. The next section concerns the simulation tests. The results show that the model of public transport mode is consistent and worthy to be implemented for a real time traffic control policy. In the last section, conclusions and future works are given. SYSTEM DESCRIPTION : MULTIMODAL URBAN ROAD NETWORK A multimodal urban road network is an urban road network crossed by at least two transportation modes. For this study, we restrict to the case of two modes : private vehicles and public transport vehicles (buses). From now, we will talk about buses or public transport vehicles to assign the public transport mode. An urban road network comprises junctions (intersections), approaches (links) that links between two successive junctions and stations in which buses have to stop. A junction is formed by approaches that lead to a common cross area. The urban road network is crossed by public transport vehicles like buses or shuttle. A route is formed by succession of junctions and stations, in which buses have to stop for a time called the dwell time(Fig. 1). The dwell time for each line at any station is considered fixed and known. The traffic at a junction is divided into streams. A stream is formed by all vehicles that cross the junction from the same approach. 
Two streams are compatible if they can cross safely the junction simultaneously, they are conflicting otherwise. A junction is called signal controlled when the streams are controlled by a traffic light. A traffic light is described by five main variables [START_REF] Papageorgiou | Traffic control[END_REF]): • Lost time : The time inserted, between two consecutive stages, in order to avoid interferences between two conflicting streams. • Cycle time : It is the duration of signal cycle. The signal cycle is formed by succession of stages and lost times (Fig. 2). • Split : It is the relative green duration of each stage, as a proportion of the cycle time. • Offset : It is the time difference between cycles for successive junctions. To improve the traffic conditions via traffic lights, there are four possible actions : cycle time specification [START_REF] Allsop | Optimization of timings of traffic signals[END_REF]), Stage specification [START_REF] Scemama | La conception de plan de feux : une modélisation par la programmation sous contraintes[END_REF] and [START_REF] Heydecker | Calculation of signal settings to minimize delay at a junction[END_REF]), offset specification [START_REF] Stamatiadis | Multiband-96 : A program for Variable-Bandwidth Progression Optimization of Multiarterial Traffic Networks[END_REF]) and split specification according to the demand of the involved streams [START_REF] Diakaki | A multivariable regulator approach to traffic-responsive network-wide signal control[END_REF], [START_REF] Taranto | UTOPIA[END_REF], [START_REF] Barrière | Decentralization vs Hierarchy in Optimal Traffic control[END_REF], [START_REF] Dotoli | A signal timing plan formulation for urban traffic control[END_REF] and [START_REF] Bhouri | Régulation du trafic urbain multimodal avec priorité pour les transports en commun[END_REF]). This present work focus only on Split. Cycle time, stage specification and offset are considered known and fixed for the future developments. TRAFFIC CONTROL The automatic control theory gives many tools and methods to deal with this challenging problem (Papageorgiou [1983]). The Model Predictive Control (MPC) is one of the most effective methods for Optimal control policy. Its main goal is to optimize, over an open loop time sequence of controls, the process response using a model to forecast the future process behavior over a prediction horizon [START_REF] Rao | Moving Horizon Strategies for the Constrained Monitoring and Control of Nonlinear Discrete-Time Systems[END_REF] and Garcia and al. [2000]). It is particularly well fitted for getting of an optimal traffic conditions because of the complexity of the traffic process on a multimodal urban road network. At the beginning of each cycle time, specific traffic measures are gathered from the real Urban Road Network. According to this measures, the automatic control policy(MPC for our case) determines the suitable actions to take for a better service to the road user as well in private vehicle or in public transport vehicle. Despite that the private vehicles and public transport vehicles share the same roads, these two modes have different features. The most important differences are : • There are much more private vehicles than public transport vehicles. • A public transport vehicle route is known in opposition to private vehicle. These two last points lead up to consider : • The number of vehicles in each approach as the state variable of private vehicle mode. 
• The positions of vehicle in the network as the state variable of public transport mode. From these measures, the Optimal Control policy determines the actions to take in order to minimize the number of private vehicles in each link and to minimize the difference between the current position of public transport vehicle and a pre-specified position. The decision-making variable or control variable is the green duration of each stage of all the junction of the network. The control architecture is based on model predictive control (Fig. 3). A General MPC Policy comprises two blocs : • Modeling bloc : The model is used to predict the behavior of outputs of a dynamical system with respect to changes in the process inputs. The process is the multimodal traffic in an urban road network that is a discrete dynamical system. The inputs are • Optimization bloc : This bloc is used to minimize the number of private vehicle on each link and the difference between the current position of public transport vehicle and a pre-specified position. At time t, the current process state is measured and a cost minimizing control strategy is computed for a time horizon T = N • t in the future: [t, t + T ]. Only the first step of the control strategy is applied to the real process, then the process state is measured again and the calculations are repeated starting from the now current state, yielding a new control and new predicted state path (Fig. 4). The prediction horizon keeps being shifted forward (Garcia and al. [2000] and Mayne and Michalska [2000]). The optimization bloc is iterative and based on metaheuristic because of : • The complex nature of traffic that leads to a nonlinear analytical model for the private vehicle. • The non existence of an accurate analytical model for the circulation of public transport vehicles that leads to an hybrid model. • The relative easiness of implementation of a metaheuristic methods. We will not give further explanations for the optimization bloc because it is not the goal of this paper, which is the model description. MODEL DESCRIPTION The process, to be modeled, is the circulation of both private vehicles and public transport vehicles in an urban road network. From the traffic state at current cycle time and the green duration of each stage of all junctions in the current cycle time, the model predicts the traffic state at the next cycle time. The traffic state is made of : • The positions of all public transport vehicles in the network. • The number of private vehicles in each link of the network. Before evolving mathematical details of the model, we present notations and hypothesis that lead to the final model. Notations and Hypothesis Notations : Hypothesis : Hypothesis 1. The free bus speed V b i in link i is smaller than free private vehicle speed. Indices i : link index Hypothesis 2. For a cycle time C, if the green duration of an approach i is G i (k) then for a given time tr, the green duration allocated to the approach i, during the time tr, is tr C × G i (k). Hypothesis 3. The inflow of private vehicles, during a cycle time, into a given link are divided up uniformly along the link. For example : for a link of length L and into it N v private vehicles drive in, the number of private vehicles existing in a section of length l is l L × N v. Hypothesis 4. If a bus reach a traffic light line, which its green duration is G i (k), at tr seconds before the end of the current cycle, the bus will be stopped for a time of 1 2 × tr × (1 -Gi(k) C ). 
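The receding-horizon scheme described above (measure the state, optimise the splits over a prediction horizon with the predictive model, apply only the first control, shift the horizon) can be sketched as follows. The optimiser is a placeholder random search rather than the metaheuristic of the paper, and for brevity the same split vector is applied over the whole horizon, whereas the strategy optimises an open-loop sequence of controls.

```python
import random

def random_search(cost, n_stages, cycle, n_iter=200):
    """Placeholder optimiser: sample split vectors that sum to the cycle."""
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        raw = [random.random() + 1e-6 for _ in range(n_stages)]
        greens = [cycle * x / sum(raw) for x in raw]
        c = cost(greens)
        if c < best_cost:
            best, best_cost = greens, c
    return best

def mpc_step(state, predict, stage_cost, n_stages, horizon=3, cycle=120.0):
    """One control step: pick the splits minimising the predicted cost over
    `horizon` cycles; only this first control would be applied in closed loop."""
    def cost(greens):
        x, total = state, 0.0
        for _ in range(horizon):
            x = predict(x, greens)          # predictive model of the traffic state
            total += stage_cost(x)
        return total
    return random_search(cost, n_stages, cycle)

# Toy demo: two links, the first fills faster, so it should get more green.
toy_predict = lambda x, g: [max(x[0] + 8.0 - 0.1 * g[0], 0.0),
                            max(x[1] + 3.0 - 0.1 * g[1], 0.0)]
print(mpc_step([30.0, 10.0], toy_predict, lambda x: 2 * x[0] + x[1], n_stages=2))
```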
private vehicle Mode Modeling For the mathematical model of both private and public transport vehicles, the urban road network is represented as a direct graph with links i ∈ I and junctions j ∈ J. Each link i is controlled by a traffic light with green duration G i (k). The private vehicle Mode Model is based essentially on the model developed by [START_REF] Diakaki | A multivariable regulator approach to traffic-responsive network-wide signal control[END_REF] for The strategy TUC. However an important improvement was introduced to take into account the case of undersaturated traffic conditions as introduced by [START_REF] Dotoli | A signal timing plan formulation for urban traffic control[END_REF]. For a link i connecting two junctions j1 and j2 (Fig. 5), the dynamics of this link is expressed by the conservation equation : X i (k + 1) = X i (k) + C • (q i (k) -u i (k)) (1) Where X i (k) is the number of private vehicles within link i at cycle time k, q i (k) and u i (k) are respectively the inflow q i (k) = w∈Inj1 τ w,i • u w (k) where τ w,i is the turning rate from link w toward link i and In j1 is the set of incoming links for junction j1. In the TUC strategy, the calculation of the outflow of link i is given by using the store-and-forward model [START_REF] Gazis | Optimum control of a system of oversaturated intersections[END_REF] and [START_REF] Gazis | The oversaturated intersection[END_REF])leading to this equation of outflow of link i : u i (k) = S i • G i (k) C where S i is the saturation flow of link i. The store-andforward model is fitted to the case of oversaturated traffic conditions. In order to take into account undersaturated traffic conditions, the final expression of outflow of link i is given by : u i (k) = min(S i • G i (k) C , X i (k) C ) Introducing all the above in (1), the final dynamics of link i is expressed by : X i (k + 1) = X i (k) +C • [ w∈Ij1 τ w,i • min(S w • G w (k) C , X w (k) C )] -min(S i • G i (k) C , X i (k) C )) (2) public transport vehicle Mode Modeling The Urban road network is crossed by many bus lines and it is possible to have many buses of the same line crossing simultaneously the network. The dynamics of bus n of the bus line m is represented by the position P m,n (k) in the network, referenced to the start point of the bus line. Over the cycle time C, or control interval, the bus may stride, at maximum, the distance of C •V b where V b is the free speed of the bus in urban area. Along this distance, the bus may cross over i m,n k traffic lights which their green duration are spliced into the set G m,n k and over j m,n k stations which their dwell time are spliced into the set St m,n k (Fig. 6). Strictly speaking, the dynamics of buses is given by the general equation : the sense that the interaction between the public transport vehicles and private vehicles is treated by a macroscopic way. The bus, from its position P m,n (k) at the beginning of cycle k, will cross over, one by one, stations or traffic lights until the end of the current cycle k. After passing over station or traffic light, the new position P n and the remaining time tr n , before the end of the cycle, are reevaluated. Depending on whether it's a station or traffic light, the re-evaluation of the position and the remaining time is carried out by the function g st or the function g light. The algorithm of function F is given in the figure 7. 
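Since that figure is not reproduced in the text, the following Python sketch reconstructs the loop of function F from the description above; the free ride after the last station or light and the exact signatures of g_st and g_light are guesses of ours, the authoritative definitions being figures 7 and 9.

def F_bus_update(P_k, C, events, g_st, g_light, V_b):
    """One-cycle update of a bus position (the loop of Fig. 7, reconstructed).

    events : stations and traffic lights ahead of the bus, in order of position,
             each given as ("station", dwell_time, position) or
             ("light", green_duration, n_private_ahead, position)
    """
    P, tr = P_k, C                      # full cycle time remaining at the start
    for ev in events:
        if tr <= 0.0:
            break
        if ev[0] == "station":
            _, dwell, pos = ev
            P, tr = g_st(P, tr, dwell, pos)
        else:
            _, G_i, Nb_i, pos = ev
            P, tr = g_light(P, tr, G_i, Nb_i, pos)
    # assumption: if time is left after the last event, the bus rides freely
    return P + max(tr, 0.0) * V_b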
The function g st takes as input variables, the past position P p , the past time remaining tr p and the dwell time of the bus in the station j. It computes the new position P n of the bus and the new time remaining tr n .Obviously, the g st function requires, for the calculation, the free bus speed V b and the position of the station Ls m,j . The figure 8 may help to understand the simple algorithm of this function given in figure 9. The function g light takes as input variables, the past position P p , the past time remaining tr p , the green duration of traffic light i and the number of private vehicle N b i ahead the bus in the link. It computes the new position P n of the bus and the new time remaining tr n .Obviously, the g light function requires, for the calculation, the free bus speed V b and the position of the traffic light position Lf m,i . (1) 1st case : Lf m,i -P p -a • N b i ≥ tr p • V b The bus has not the possibility to reach the private vehicle queue before the end of current cycle. In this case, P n and tr n are given by : P n = P p + tr p • V b tr n = 0 (3) (2) 2nd case : Lf m,i -P p -a • N b i ≤ tr p • V b The bus has the possibility to reach the private vehicle queue before the end of current cycle. In this case, the bus will reach the queue at time tr p + Y . Where Y = C • Lf m,i -P p -a • N b i C • V b -a • S i • G i (k) The details of calculation of Y and the proof that Y ≥ 0 are given in Appendix A. In this case, we have to distinguish between the case in which the bus reaches the queue before the traffic light line and the case in which the bus have the opportunity to reach the queue after the traffic light line. . Here, there are also two possibilities : • tr p • V b ≤ Lf m,i -P p and thus P n and tr n are given by : . According to the hypothesis (4), the bus have to be stopped for a time of 1 2 × (tr p - lP n = P p + tr p • V b tr n = 0 (4) • tr p • V b ≥ Lf m, Lfm,i-Pp V b ) × (1 -Gi(k) C ) and when leaving the link, the bus have the time of (tr p - Lfm,i-Pp V b )× Gi(k) C , before the end of current cycle, to ride in the downstream network. Finally, we have : P n = Lf m,i tr n = 1 2 • (tr p - Lf m,i -P p V b ) •(1 + G i (k) C ) (5) (b) Y • V b ≤ Lf m,i -P p : The bus reaches the queue before the traffic light line. According to the hypothesis (2), the number of private vehicles ahead the bus is N b i -Si•Y •Gi(k) C . The remaining time, before the end of current cycle time, is tr p -Y in which (tr p -Y ) × Gi(k) C is green light (Hypothesis (2)). Again we have to distinguish two cases : (i) N b i ≥ S i • G i (k) • tr p C The queue can not disappear in this cycle. There are two possibilities depending on whether tr p × V b is higher or lower than Lf m,i - P p -a × (N b i - Si•Gi(k)•trp C ). For the first case, we have : P n = P p + tr p • V b tr n = 0 (6) And the second case, we have : P n = Lf m,i -a • (N b i - S i • G i (k) • tr p C ) tr n = 0 (7) (ii) N b i ≤ S i • G i (k) • tr p C The queue disappears in this cycle and the bus reaches the traffic light line. 
P n and tr n are given by : P n = Lf m,i tr n = 1 2 • (tr p -Y ) • (1 + G i (k) C ) - N b i S i + Y • G i (k) C (8) In order to sum up the public transport modeling, we consider : (1) Conditions : A : Lf m,i -P p -a • N b i ≥ tr p • V b B : N b i ≥ Si•Gi(k)•trp C C : tr p • V b ≤ Lf m,i -P p -a • (N b i - Si•Gi(k)•trp C ) D : G i (k) ≥ C•N b i •V b Si•(Lfm,i-Pp) E : tr p • V b ≤ Lf m,i -P p (2) Equations : EQ 1 : P n = P p + tr p • V b and tr n = 0 EQ 2 : P n = Lf m,i -a•(N b i - Si•Gi(k)•trp C ) and tr n = 0 EQ 3 : P n = Lf m,i et tr n = 1 2 • (tr p -Y ) • (1 + Gi(k) C ) - N b i Si + Y •Gi(k) C EQ 4 : P n = Lf m,i et tr n = 1 2 • (tr p - Lfm,i-Pp V b ) • (1 + Gi(k) C ) With these last considerations, the algorithm is given by : if (A | (A × D × E) | (A × B × C × D)) then EQ1; if (A × B × C × D) then EQ2; if (A × B × D) then EQ3; if (A × D × E) then EQ4; Where | refers to "or", × to "and" and A to the negation of A. SIMULATION RESULTS Function g light The main contribution of this work is the model developed for the public transport vehicle. This model is based especially on the way in which the public transport vehicle rides over a traffic light. This way was described in the g light function. In order to verify the consistency of this important function, a simulation test was carried out. This test is illustrated by the figure 11. The test was carried out with varying the number of private vehicle ahead the bus N b between 0 and 40 and Firstly, one can see clearly that fewer private vehicles are ahead the bus higher the new position of the bus is and vice-versa. This remark is simply logical because : higher the traffic is congested more difficult is the progress of vehicles especially buses. Secondly, one can see that : higher green duration is allocated to the link crossed by the bus, higher is the new position of the bus and vice-versa. Finally, the curve shows two important points : (1) For N b = 40veh, the downstream of the bus is completely congested, and for G = 0s, no green light, the new position P n = 0m synonym that the bus did not move. (2) For G = 80s = C, no red light, and for N b ≤ 32veh, the bus reaches its maximum distance of 400m. These all remarks show that the function g light is globally coherent with the real behavior of a bus crossing over a traffic light in urban area. It remains to verify the consistency of the whole model in an urban network. Network simulation The consistency of the private vehicle model was widely proved in the simulation test and real-life implementation of TUC strategy [START_REF] Diakaki | A multivariable regulator approach to traffic-responsive network-wide signal control[END_REF], [START_REF] Bhouri | Régulation du trafic urbain multimodal avec priorité pour les transports en commun[END_REF] and [START_REF] Kachroudi | Communication interprocessus entre Dynasim et Scilab pour l'évaluation d'une stratégie de régulation du trafic urbain multimodal[END_REF]). In the opposite, the public transport vehicle model is recently developed. For these reasons, only the public transport vehicle model will have a particular attention. The simulation of the whole model is carried out using the network illustrated in the figure 13. The network comprises 16 junctions, 11 Entrances and 49 links. It is crossed by two bus lines. The first pulls in by the entrance 3, cross the junctions 4, 3, 7, 11, 10, 9, 13 before leaving the network. 
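Returning to the consistency check of the g_light function described above, the test can be scripted as follows; this is a sketch in which g_light(P_p, tr_p, G_i, Nb_i, Lf) is assumed to implement equations (3)-(8) with the physical constants V_b, a, S_i and C fixed inside it, and the value Lf = 300 m is illustrative.

def g_light_consistency_test(g_light, Lf=300.0, C=80.0):
    """Vary the queue length Nb in [0, 40] vehicles and the green duration G in
    [0, 80] s, starting from P_p = 0 with a full cycle remaining, and record the
    new bus position P_n."""
    results = {}
    for Nb in range(0, 41, 5):
        for G in range(0, 81, 10):
            P_n, _tr_n = g_light(0.0, C, float(G), Nb, Lf)
            results[(Nb, G)] = P_n
    return results

# Behaviour reported in the paper: P_n decreases as Nb grows and increases with G;
# P_n = 0 m for (Nb, G) = (40, 0), and P_n reaches the maximum distance of 400 m
# (i.e. C * V_b with V_b = 5 m/s) for G = C and Nb <= 32.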
Its transit frequency is The simulations are performed in order to show the behavior of the public transport mode model according to the traffic demand and the state of traffic light of junctions crossed by the bus lines. The simulations are divided into two groups depending on whether we act on the green durations of junctions crossed by bus line 1 or bus line 2. Group 1 corresponds to the bus line 1 and Group 2 corresponds to the bus line 2. The horizon of simulations is 100 cycles. As the traffic demand fluctuates around a mean value, the simulation results presented here are a mean value of all public transport vehicle that crossed the network over the simulation horizon. Each group contains four scenarios. (1) ordinary demand in all and ordinary green duration allocated to links crossed by the bus. (2) ordinary demand in all and low green duration allocated to links crossed by the bus. (3) high demand in all and ordinary green duration allocated to links crossed by the bus. (4) high demand in all and low green duration allocated to links crossed by the bus. Group 1 The goal is to see the progress of the bus line 1 in the network for different scenarios of demand and green durations of traffic light crossed by the bus. The results are The examination of the curve shows these points : (1) For a same traffic demand (scenarios (1, 2) and (3, 4)), the allocated green duration have a great impact on the progress of bus line 1. For scenario 1 and at the end of the 4 th cycle, the bus is at the position of about 1025m while for the scenario 2, the position is about 470m. (2) For a same green duration (scenarios (1, 3) and (2, 4)), a higher traffic demand penalizes more the progess of the bus in the network. For the scenario 1, the bus reaches the position of 1025m while for scenario 3, the bus can not go beyond 830m. For the line 2, the bus position is given, for the 4 scenarios, in the figure 15. Contrary to the bus line 1, the traffic demand has small impact on the progress of bus line 2 (scenarios (1, 3) and (2, 4)). This can be explained by the fact that even with a high demand, the traffic remains smooth along the route of bus line 2. The comparison between scenarios (1, 2) and even between scenarios (3, 4) shows that penalizing the bus line 1 by allocating lower green durations penalize also the progress of bus line 2. This is due to the fact that bus line 2 and bus line 1 have in common a portion of their routes. Group 2 The simulations are exactly the same as for the group 1 but now with the bus line 2. The results are in the figure 16 which underlines the position of the bus of line 2 in the network. For the line 1, the bus position is given, for the 4 scenarios, in the figure 17. The analysis of the curve of figure 16 confirms that the traffic demand have a small impact on the progress of bus line 2 (scenarios (1, 3) and scenarios (2, 4)). It confirms also the great impact of the green durations on the the progress of the bus. The simulation tests carried out in an urban network composed of 16 junctions show that the public transport vehicle model is consistent with the real behavior of public transport vehicles. It confirm that the whole model (private and public transport vehicles) can be used as a predictor for a traffic control strategy because of the low calculation time. The multimodal urban traffic model will be used, in future, as a predictor for a Model Predictive Control policy. 
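Since the model is meant to serve as the predictor inside such a policy, the receding-horizon loop of the TRAFFIC CONTROL section can be sketched as follows; measure, optimize, apply_control and predict are placeholders for, respectively, the data gathering at the start of each cycle, the metaheuristic optimization bloc, the actuation of the green splits, and the model described above.

def mpc_step(state, optimize, predict, horizon_N):
    # Optimize the green durations over N future cycles using the model as
    # predictor, but keep only the first cycle of the plan.
    plan = optimize(state, predict, horizon_N)
    return plan[0]

def run_mpc(measure, apply_control, optimize, predict, horizon_N, n_cycles):
    for k in range(n_cycles):
        state = measure()                    # X_i(k) and bus positions P_{m,n}(k)
        G_k = mpc_step(state, optimize, predict, horizon_N)
        apply_control(G_k)                   # green splits for the coming cycle
        # the horizon is then shifted forward and the loop repeats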
This control policy has the challenging aim of regulating urban traffic for both modes: private vehicles and public transport vehicles.
Fig. 1. Multimodal urban road network.
Fig. 3. Control architecture of the traffic process.
Fig. 4. Traffic regulator based on Model Predictive Control; its inputs are the green durations of each stage of all junctions in the network, and its outputs are the positions of the public transport vehicles and the number of private vehicles on each link.
Fig. 5. An urban road link.
Fig. 6. Bus route and outflow of link i over the period [k·C, (k+1)·C].
Fig. 7. Algorithm of function F.
Fig. 9. Algorithm of function g_st.
Fig. 11. Simulation of function g_light.
Fig. 12. Bus position for green durations G between 0 and 80 s; the new position P_n of the bus is given in the figure.
Fig. 13. The example network.
Fig. 14. Comparison between the scenarios for bus line 1 in simulation group 1.
Fig. 16. Comparison between the scenarios for bus line 2 in simulation group 2.
The dynamics of the buses is given by the general equation P_{m,n}(k+1) = F(P_{m,n}(k), G^{m,n}_k, St^{m,n}_k, Nb^{m,n}_k), where F is the function to determine and Nb^{m,n}_k is the vector formed by the numbers of private vehicles ahead of bus n of line m in all the links that may be crossed by the bus over the period [k·C, (k+1)·C]; this vector is calculated using the private vehicle model and Hypothesis 3. The model is called semi-macroscopic in the sense that the interaction between the public transport vehicles and the private vehicles is treated in a macroscopic way.
In case (a) of the g_light analysis, Y·V_b ≥ Lf_{m,i} − P_p, the bus has the opportunity to reach the queue beyond the traffic light line: it rides at its free speed V_b up to the light line, which it reaches at time C − tr_p + (Lf_{m,i} − P_p)/V_b, so that the remaining time before the end of the current cycle is tr_p − (Lf_{m,i} − P_p)/V_b.
In summary, the paper has presented a predictive multimodal model of urban traffic involving two modes: a private vehicle mode and a public transport vehicle mode. The private vehicle model is based on the model used in the TUC strategy, whose efficiency has been proved by many authors. The public transport vehicle model is recent and innovative. The simulation results focus only on the public transport mode and show that the model is consistent and efficient. This model reconciles accuracy and simplicity.
Notation. Indices: l, junction index; j, bus station index; m, bus line index; n, bus index for a given bus line; k, discrete time index. Sets: I, set of links; L, set of junctions; J, set of bus stations; In_j, set of incoming links for junction j; M, set of bus lines; N_m, set of buses of bus line m. General variables: C, cycle time for all junctions; a, mean effective private vehicle length; V_b, maximum free bus speed. Link variables: X_i(k), number of private vehicles in link i at cycle k; G_i(k), total green duration of approach i at cycle k; S_i, saturation flow of link i; q_i(k), inflow of link i at cycle k; u_i(k), outflow of link i at cycle k; τ_{w,i}, turning rate from link w toward link i; Lf_{m,i}, position of the light line of link i referenced to the start of bus line m. Public transport variables: T_j, dwell time of a bus in station j; P_{m,n}(k), position of bus n of line m at the beginning of cycle k; i^{m,n}_k, number of links (traffic lights) that may be crossed by bus n of line m during cycle k; j^{m,n}_k, number of bus stations that may be crossed by bus n of line m during cycle k; tr_n, tr_p, new and past remaining times before the end of the present cycle; Ls_{m,j}, position of bus station j referenced to the start of bus line m; Nb^i_{m,n}(k), number of private vehicles ahead of bus n of line m in link i at cycle k; G^{m,n}_k, set of green durations of the traffic lights that may be crossed by bus n of line m over cycle k; St^{m,n}_k, set of dwell times in the stations that may be crossed by bus n of line m over cycle k; Nb^{m,n}_k, set of the numbers of private vehicles ahead of bus n of line m in all the links that may be crossed over cycle k.
Appendix A. This appendix aims to calculate the time Y at which the bus reaches the queue of private vehicles. The bus speed, known and fixed, is V_b, and the speed at which the queue dissolves is V_f. According to the store-and-forward model, the number of private vehicles exiting the link over cycle k is S_i·G_i(k), so the distance that disappears from the queue over the cycle duration is a·S_i·G_i(k). Y has to satisfy the simple equation obtained by equating the position of the bus with that of the tail of the queue, and consequently Y ≥ 0.
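For completeness, here is a reconstruction of that computation (our own reading, consistent with the expression for Y quoted in the g_light discussion). After a further time t the bus is at P_p + V_b·t, while, by Hypothesis 2 and the store-and-forward outflow, the tail of the queue sits at Lf_{m,i} − a(Nb_i − S_i G_i(k) t / C). Equating the two positions at t = Y gives

$$ P_p + V_b\,Y \;=\; Lf_{m,i} - a\left(Nb_i - \frac{S_i\,G_i(k)}{C}\,Y\right) \quad\Longrightarrow\quad Y \;=\; \frac{C\,\bigl(Lf_{m,i}-P_p-a\,Nb_i\bigr)}{C\,V_b - a\,S_i\,G_i(k)} . $$

The numerator is nonnegative because the tail of the queue lies downstream of the bus, and the sign of the denominator amounts to comparing V_b with the queue-dissolution speed V_f = a·S_i·G_i(k)/C, which appears to be the comparison the appendix alludes to.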
28,709
[ "1278852" ]
[ "81038", "81038" ]
01473659
en
[ "spi" ]
2024/03/04 23:41:46
2006
https://hal.science/hal-01473659/file/2006_gretia_farhi_bhouri_lolito_regulation_bimodal_traffic_P.pdf
N Farhi N Bhouri P Lotito REGULATION OF THE BIMODAL TRAFFIC Keywords: Traffic Modelling, Bimodal Traffic, Automatic, Traffic Assignment, Public Transportaion . INTRODUCTION Real time traffic-responsive signal control (TRC) has the ability to improve traffic operations in urban areas when compared to traditional fixedtime control. The steadily growth of the mobility in urban areas offset however this improvement and roads are often congested. In recent years, an increasing awareness has been observed around the world for the potential contribution that public transport may have in the amelioration of the overall traffic conditions. For this reason, many measurements encouraging the use of the public transport means are achieved, among them the public transport priority (Diakaki et al.). The TRC systems have been developed with consideration of the unimodal traffic. They have been extended later to give the priority at traffic lights to the public transport vehicles (PTV). These strategies take into account the PTV in a local treatment and not as a mode of traffic at the same level of the particular vehicles (PV). Nevertheless, we can quote the work of [START_REF] Neila | An intermodal traffic control strategy for private vehicle and public tranport[END_REF] which consider two modes of traffic. In this paper, we present a method to control traffic lights of a transportation system with two modes : PV and PTV. We want to control the traffic lights in order to free the roads used by the public transport vehicles. We consider a network of roads with a traffic light controlling the output of each road. On this network, bus lines are given. The states are the car and the bus numbers in the roads, and the controls are the car outflows of the roads obtained by choosing the green/red phasing of the traffic lights. We design a regulator maintaining the system around an ideal trajectory. This ideal trajectory is obtained by solving first a flow assignment giving the PV Wardrop equilibrium, from which we deduce the car numbers in the roads. Then by reducing the car numbers at the time and roads when/where are the PTV. The traffic assignment is obtained using the toolbox CiudadSim1 of Scilab2 .The regulator is obtained by solving a Linear Quadratic Problem. We show the numerical results obtained on an academic example. We check, on this example, the controler robustness by introducing disturbances on the car numbers in the roads and on bus timetables. We compare on the same example this methodology with the one given by [START_REF] Neila | An intermodal traffic control strategy for private vehicle and public tranport[END_REF],where a LQ regulator is also computed but instead of regulating around an ideal trajectory, the standard quadratic criterium is modified in such a way to penalize the traffic on the roads used by the bus (the contoller thus obtained being modified to become admissible). We will call this method Modified LQ regulator (MLQR). MODELING A traffic road network is represented by a graph. The nodes of this graph correspond to the crossroads and the arcs to the roads or to inputs or outputs of cars in the system. The PTV follow fixed lines which are given by paths in the network. Let us take the academic example given 1. It shows a graph corresponding to a small traffic network. This network has two inputs : (13) and ( 14), and two outputs : (15) and ( 16). The PTV follow only one line using the roads (1), (2) then (3), with a stop on the road (2). 
The traffic light on each crossroads is periodic and we use this period as the discetization time step. Dynamics of the PV The number of PV in a given road which is the state of the system is updated by adding the number of vehicles entering the road, and deducting the number of vehicles leaving it. More precisely, we denote: Then we have x a k+1 = x a k + a ∈Ic 1 b a a u a k + e a k -u a k This equation can be written like this x k+1 = x k + Bu k + e k (1) where B will be called the routing matrix. Dynamics of the PTV We consider stops on each bus line. A line is represented by a path and a word indicating the time spent in the roads. For example, the single line of the network on figure 1 is represented by the path (1 2 3) and by the word 121 which refers to a stop of 1 unit of time on the arc 2 and to 1 unit of travel time on each road. Let's denote: y ha k : the number of PTV of the line h on the road a at time k τ : the estimated time spent on a road where there is a stop (In the following , τ will be equal to 2, and we will suppose that on all the other roads, the bus spends one unit of time). a, a : two successive arcs of a line. The PTV dynamics can be written: y ha k+1 =    y ha k-τ , τ ∈ N * if there is a stop on a y ha k otherwise. (2) How to obtain the ideal trajectory We compute an ideal distribution of PV on the roads, around which a regulator will be designed in the next section. One way of calculating it, is first to solve a traffic assignment problem which determine the car numbers in the roads and the output flows in the absence of priority given to PTV. Then, the ideal distribution of cars on a road used by PTV at a given time is obtained by reducing the number of PV on this road at this time. The flow assignment problem consists of determining the flows f on the arcs a ∈ A, knowing that journey times t a of arcs a are function of the flows f a . A network equilibrium gives us these flows. Wardrop Equilibrium. Each user minimizes its time spent in the network. Therefore at Wardrop equilibrium (user equilibrium), for each pair of nodes (p, q) of traffic demand from p towards q, the time spent on all the used routes from p to q are the same, and they are less than the time spent on any unused route from p to q. Let's note : • D: The set of pairs of traffic demand : origindestination • R pq : The set of paths from the origin p to the destination q • R a : The set of all the paths which include the arc a • d pq : The demand from p to q • f r : The flow on a path r ∈ R pq • f a : The flow on the arc a • t r : The journey time on a path r ∈ R pq • t * pq : The shortest time of journey from p to q • t a (f a ) : The journey time on the arc a, function of the flow f a on this arc A vector f whose components f r represent the flows on the paths r is a Wradrop Equilibrium if: f r (t r -t * pq ) = 0, t r -t * pq ≥ 0, f r ≥ 0 t * ≥ 0, r∈Rpq f r = d pq , ∀r ∈ R pq , ∀p, q ∈ D The variational formulation of the equilibrium :              min f a f a 0 t a (s)ds, f a = r∈R a f r r∈Rpq f r = d pq , f ≥ 0 (3) The problem (3) being static, we consider several stationnary regimes S = {S 1 , • • • , S m } and we solve an assignment problem for each regime. The S i are time sections : S 1 = {0, • • • , i 1 }, S 2 = {i 1 + 1, • • • , i 2 }, • • • , S m = {i m-1 + 1, • • • , T }. The numerical solving of these problems (3) are done thanks to the CiudadSim toolbox of Scilab. 
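As an illustration, the objective minimized by such an assignment solver, namely the arc-wise integral appearing in (3), can be evaluated as follows; the BPR-type travel time function is an assumption of ours (the paper only requires t_a to depend on f_a), and all names are illustrative.

import numpy as np

def link_time(f, t0, c, alpha=0.15, beta=4.0):
    # Assumed BPR-like travel time t_a(f_a); t0 is the free-flow time, c the capacity.
    return t0 * (1.0 + alpha * (f / c) ** beta)

def beckmann_objective(f, t0, c, alpha=0.15, beta=4.0):
    """Sum over arcs of the integral of t_a from 0 to f_a, i.e. the objective of (3)."""
    # closed-form integral of the BPR form on each arc
    return np.sum(t0 * (f + alpha * c / (beta + 1.0) * (f / c) ** (beta + 1.0)))

A Wardrop equilibrium is a feasible path-flow vector whose arc flows minimize this objective, which is what the CiudadSim assignment routines compute for each time section.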
We obtain : f * S : the optimal solution of the problem (3) on a time section S ∈ S; t a S : the journey time on the arc a ∈ A during the time section S. Resolution of the linear quadratic problem To determine a regulator stabilizing the system around the ideal trajectory we solve a Linear Quadratic (LQ) problem whose dynamics is given by the equation ( 1), and whose criterion is the quadratic distance to the ideal trajectory. The problem to solve is :              min u T -1 k=0 {(x k -x k (y)) Q(x k -x k (y)) +(u k -u k ) R(u k -u k )}. x k+1 = x k + Bx k + e k (4) where : • Q and R are weighting matrices that we take diagonal Q = λ 1 I 1 , R = λ 2 I 2 , (λ 1 , λ 2 ) ∈ R 2 + and I 1 and I 2 are the corresponding identity matrices, • the ideal flow ū is given by the assignment problem : ūk = f * S , ∀S ∈ S et ∀k ∈ S • the car numbers in the roads x are given using the Little formula : x a k = f * a S • t a S (f * a S ), ∀a ∈ A, ∀S ∈ S et ∀k ∈ S • the ideal car numbers in the roads are obtained mainly by dividing these quantities by the number of PTV on the road at this epoch. xk (y) = β 1 + h y h k • x k , β : a positive parameter We solve the LQ problem (4) by integrating the corresponding Riccati equation. Then the global feedback is : u k -ūk = K k (x k -xk ) + L k , where K k and L k are the gains deduced from the Riccati equation. NUMERICAL SOLVING Let us take the example of the figure (1) which represents a network of 16 roads including 2 entries and 2 exits, one bus line (1, 2, 3) with a stop on road 2 and 4 traffic demand pairs : {1 → 3, 1 → 4, 2 → 3, 2 → 4}. The 4 demands, given in Table 1, are supposed to be the same, and equal to 100 vehicles/time-unit for all the time sections. To check if the obtained PV traffic is reduced on the roads where are the PTV, we take three time sections with different time-tables given by Table 2. We point out that this assignment does not take into account the PTV flows. We solve the regulator problem (4) and we simulate the closed loop system (with the regulator) in different cases. The results are shown in Figure 5. On this figure we have a table of 3 lines and 3 columns. Each line of this table corresponds to one of the three arcs 1, 2, and 10, and each column corresponds to one simulation. Column 1 corresponds to simulation without disturbance, column 2 corresponds to simulation with a disturbance on the number of PV in a road (x disturbance), and column 3 corresponds to simulation with a disturbance on the PTV timetable (y disturbance). On these curves, the x-axis represents time, and the y-axis represents number of vehicles. On each of the 9 subfigures, there are two trajectories : the car numbers trajectory on the top, and the bus numbers one on the bottom with an adapted scaling. On column 1 of the table, we can see that the car traffic follows perfectly the bus one on the roads 1 and 2. The road 10 is not used by the bus but it is influenced by roads 1 and 2 (see the graph of the Figure 1). To measure the robustness of the regulator, we introduce two kinds of disturbances : one on car numbers and the other one on the bus numbers. Firstly, we disturb the car numbers by applying the following dynamics : x k+1 = (1 + w)x k + Bu k + e k where w → N (0, 1/2). This disturbance is about 50% of the PV numbers . We represent the results on the column 2 of Figure 5.As we can see on these curves, the number of (PV) still follows that of (PTV), that is to say that they remain complementary. 
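A minimal sketch of this disturbed simulation step, with the closed-loop control u_k left abstract (the regulator gains K_k and L_k come from the Riccati integration mentioned above); reading N(0, 1/2) as a standard deviation of 0.5 is our interpretation.

import numpy as np

rng = np.random.default_rng(0)

def pv_step_disturbed(x, u, e, B, sigma=0.5):
    """Disturbed private-vehicle dynamics used in the robustness test:
    x_{k+1} = (1 + w) x_k + B u_k + e_k with w drawn from N(0, 1/2)."""
    w = rng.normal(0.0, sigma)
    return (1.0 + w) * x + B @ u + e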
Secondly, we disturb the bus numbers by replacing y h k by (1 + w)y h k during the simulation. This disturbance is about 50% of the PTV number. The results of the simulation are given on the column 3 of Figure 5. The control remains correct. COMPARISON WITH THE MODIFIED LQ REGULATOR (MLQR) In this section, we try to compare our work with that done by [START_REF] Neila | An intermodal traffic control strategy for private vehicle and public tranport[END_REF]. Their formulation consists of solving the following problem:      min u +∞ k=0 [x k Q 1 x k + x k Q 2 y k + u k Ru k ] x k+1 = x k + Bu k (5) The dynamics of the problem 5 is the same as that of the problem 4. However, the criteria of the two problems differ. The advantage of the criterion of the problem 5 is that we don't need to compute an ideal trajectory. So we can avoid resolving the flow assignment problem. But its drawback is that it gives a non admissible policy ((x, u) is admissible if x ≥ 0 and 0 ≤ u ≤ u + , u + given). Therefore we have to derive an admissible solution from the MLQR solution. For that let us define the two matrices B + and B -as follows : Since it is supposed that there is one control by road, the matrix B -is a permutation matrix and its inverse transforms the positive cone to the positive cone then it is easy to obtain an admissible feedback from a not admissible one. At time k, taking as road inflows the outflows at time k -1, the outflows at time k are obtained by inverting the matrix B -. More precisely, let's denote u k the control given by the MLQR feedback at the moment k, u k give the control to be applied : B + ij = max{0, B ij }∀i, j B - ij = max{0, -B ij }∀i, j x 0 ≥ 0 given v 0 = 0        x k = x k-1 + v k-1 -B -u k-1 v k = B + u k-1 u k = the feedback result at the moment k u k = max{0, min{u k , u + , (B -) -1 (x k + v k )}} On Figure 6, we give the results of a simulation without disturbance. The Table of this Figure contains two lines, the first correponds to the formulation presented in this paper, and the second corresponds to the MLQR formulation. Each column of this table corresponds to one of the three arcs 1, 2 and 10. Surprisingly, as we can see, the MLQR formulation gives the same kind of qualitative stationary regime but have larger initial excursions. Let us remark that in MLQR method, we have not to solve the traffic assignment problem which needs a lot of non reliable data on the system. Indeed in general we don't know the origine destination demands necessary to compute the Wardrop equilibrium. CONCLUSION We have presented here a way to manage the traffic in an urban network with two modes (cars and buses). Our method is based on the traffic light control of the car flows in order that buses be in time. Let's remark that we have only studied the simple light control phase case : on each road there is an independent traffic light which controls the access to the crossroad. If this light is green, the PV being on this road can choose any other road to leave. With this assumption the routing matrix B is full rank. This guarantees the commandability of the system. However, in practice, we often need to use more complicated phases. In this simple case, the linear quadratic problem around an ideal trajectory gives a robust control. It needs the solution of a traffic assignment problem for which it is difficult to obtain reliable data. An alternative approach avoid this difficulty and seems to obtain the same qualitative results. 
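A small sketch of this admissibility projection and of the associated bookkeeping, written directly from the formulas above (numpy is used for the linear algebra; names are illustrative).

import numpy as np

def admissible_control(u_raw, u_max, x, v, B_minus):
    """Clip the raw MLQR feedback u'_k to [0, u_max] and to the number of vehicles
    actually available on each road, (B^-)^{-1}(x_k + v_k)."""
    available = np.linalg.solve(B_minus, x + v)   # B^- is a permutation matrix
    return np.maximum(0.0, np.minimum(np.minimum(u_raw, u_max), available))

def bookkeeping_step(x, v, u_applied, B_plus, B_minus):
    """Road inflows at cycle k are the outflows applied at cycle k-1:
    x_k = x_{k-1} + v_{k-1} - B^- u_{k-1},  v_k = B^+ u_{k-1}."""
    return x + v - B_minus @ u_applied, B_plus @ u_applied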
The academic example used here is encouraging but too simple to draw conclusions about the robustness of this approach. Studies on more realistic networks would be useful.
Fig. 1. The example network.
Fig. 2. Dynamics of the PV.
Fig. 3. Dynamics of the PTV.
Figure 4 gives the PV flow obtained by the Wardrop equilibrium; the thickness of an arc in this figure is proportional to the flow on the road. Fig. 4. The flow assignment.
Fig. 5. Results of simulations: without disturbance, with a disturbance of the number of PV (x disturbance), and with a disturbance of the number of PTV (y disturbance).
Fig. 6. Comparison with the MLQR formulation (simulation without disturbance).
Table 1. PV Flows.
  Time section   S1         S2         S3
  Time           0 to 20    21 to 70   71 to 100
  Orig.-Dest.    all pairs  all pairs  all pairs
  PV Flow        100        100        100
Table 2. PTV Timetables.
  Time section   S1         S2         S3
  Time           0 to 20    21 to 70   71 to 100
  PTV Flow       1          1/3        1/2
1 http://www-rocq.inria.fr/metalau/ciudadsim
2 http://scilabsoft.inria.fr
15,452
[ "1278852" ]
[ "2428", "81038", "81038" ]
01473694
en
[ "math" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01473694/file/CLX-2.pdf
Feng Cheng email: [email protected] Wei-Xi Li email: [email protected] And Chao-Jiang Xu VANISHING VISCOSITY LIMIT OF NAVIER-STOKES EQUATIONS IN GEVREY CLASS Keywords: 2010 Mathematics Subject Classification. 35M30, 76D03, 76D05 Gevrey class, Incompressible Navier Stokes equation, Vanishing viscosity limit In this paper we consider the inviscid limit for the periodic solutions to Navier-Stokes equation in the framework of Gevrey class. It is shown that the lifespan for the solutions to Navier-Stokes equation is independent of viscosity, and that the solutions of the Navier-Stokes equation converge to that of Euler equation in Gevrey class as the viscosity tends to zero. Moreover the convergence rate in Gevrey class is presented. Introduction The Navier-Stokes equations for incompressible viscous flow in T 3 = (-π, π) 3 read        ∂u ν ∂t -ν∆u ν + (u ν • ∇)u ν + ∇p ν = 0, ∇ • u ν = 0, u ν | t=0 = a, (1.1) where u ν (t, x) = (u ν 1 , u ν 2 , u ν 3 )(t, x) is the unknown velocity vector function at point x ∈ T 3 and time t, p ν (t, x) is the unknown scalar pressure function, ν > 0 is the kenematic viscosity, a(x) = (a 1 , a 2 , a 3 )(x) is the given initial data. If the viscosity ν = 0, the equations (1.1) become the Euler equations for ideal flow with the same given initial data a,        ∂u ∂t + (u • ∇)u + ∇p = 0, ∇ • u = 0, u| t=0 = a, (1.2) where we denote the unknown vector velocity function to be u(t, x) = (u 1 , u 2 , u 3 )(t, x) and the unknown scalar pressure function to be p(t, x). The existence and uniqueness of solutions to (1.1) and (1.2) in Sobolev space H r (R 3 ) for r > 3/2 + 1, on a maximal time interval [0, T * ) is classical in [START_REF] Bourguignon | Remarks on the Euler equation[END_REF][START_REF] Kato | Nonstationary flows of viscous and ideal fluids in R 3[END_REF][START_REF] Temam | On the Euler equations of incompressible perfect fluids[END_REF]. There are abundant studies on the analyticities of solutions to (1.1) and (1.2) in various methods, for reference in [START_REF] Bardos | Domaine d'analycité des solutions de l'équation d'Euler dans un ouvert de R n[END_REF][START_REF] Biswas | Gevrey regularity of solutions to the 3-D Navier-Stokes equations with weighted ℓp initial data[END_REF][START_REF] Chae | Remarks on the regularity of weak solutions of the Navier-Stokes equations[END_REF][START_REF] Grujić | Space analyticity for the Navier-Stokes and related equations with initial data in L p[END_REF][START_REF] Sammartino | Zero viscosity limit for analytic solutions of the Navier-Stokes equations on a half-space, I. Existence for Euler and Prandtl equations[END_REF]. The Gevrey regularity of solutions to Navier-Stokes equations was started by Foias and Temam in their work [START_REF] Foias | Gevrey class regularity for the solutions of the Navier-Stokes equations[END_REF], in which the authors developed a way to prove the Gevrey class regularity by characterizing the decay of their Fourier coefficients. 
And later [START_REF] Kukavica | On the dissipative scale for the Navier-Stokes equation[END_REF][START_REF] Kukavica | On the radius of analyticity of solutions to the three-dimensional Euler equations[END_REF][START_REF] Kukavica | On the analyticity and Gevrey-class regularity up to the boundary for the Euler equations[END_REF][START_REF] Kukavica | The domain of analyticity of solutions to the three-dimensional Euler equations in a half space[END_REF][START_REF] Levermore | Analyticity of solutions for a generalized Euler equation[END_REF] developed this method to study the Gevrey class regularity of Euler equations in various conditions. The subject of inviscid limits of solutions to Navier-Stokes equations has a long history and there is a vast literature on it, investigating this problem in various functional settings, cf. [START_REF] Kato | Remarks on zero viscosity limit for nonstationary Navier-Stokes flows with boundary[END_REF][START_REF] Temam | Navier-Stokes Equations: Theory and Numerical Analysis[END_REF] and references therein. Briefly, convergence of smooth solutions in R n or torus is well developed (cf. [START_REF] Kato | Nonstationary flows of viscous and ideal fluids in R 3[END_REF][START_REF] Swann | The convergence with vanishing viscosity of nonstationary Navier-Stokes flow to ideal flow in R 3[END_REF] for instance). Much less is known about convergence in a domain with boundaries. In fact the vanishing viscosity limit for the incompressible Navier-Stokes equations, in the case where there exist physical boundaries, is still a challenging problem due to the appearance of the Prandtl boundary layer which is caused by the classical no-slip boundary condition. So far the rigorous verification of the Prandtl boundary layer theory was achieved only for some specific settings, cf. [START_REF] Alexandre | Well-posedness of The Prandtl Equation in Sobolev Spaces[END_REF][START_REF] Weinan | Blow up of solutions of the unsteady Prandtl's equation[END_REF][START_REF] Gerard-Varet | Well-posedness for the Prandtl system without analyticity or monotonicity[END_REF][START_REF] Guo | A note on the Prandtl boundary layers[END_REF][START_REF] Li | Gevrey Class Smoothing Effect for the Prandtl Equation[END_REF][START_REF] Oleinik | Mathematical Models in Boundary Layers Theory[END_REF][START_REF] Xin | On the global existence of solutions to the Prandtl's system[END_REF] for instance, not to mention the convergence to Prandtl's equation and Euler equations. Several partial results on the inviscid limits, in the case of half-space, were given in [START_REF] Sammartino | Zero viscosity limit for analytic solutions of the Navier-Stokes equations on a half-space, I. Existence for Euler and Prandtl equations[END_REF] by imposing analyticity on the initial data, and in [START_REF] Maekawa | On the inviscid limit problem of the vorticity equations for viscous incompressible flows in the half-plane[END_REF] for vorticity admitting compact support which is away from the boundary. On the other hand, the Prandtl boundary layer equation is ill-posed in Sobolev space for many case (see [START_REF] Weinan | Blow up of solutions of the unsteady Prandtl's equation[END_REF][START_REF] Gerard-Varet | On the ill-posedness of the Prandtl equation[END_REF][START_REF] Liu | Ill-posedness of the Prandtl equations in Sobolev spaces around a shear flow with general decay[END_REF]), while the Sobolev space is the suitable function space for the energy theory of fluid mechanic. 
Since the verification of the Prandtl boundary layer theory meet the major obstacle in the setting of the Sobolev space, it will be interesting to expect the vanishing viscosity limit for the incompressible Navier-Stokes equations in the setting of Gevery space as sub-space of Sobolev space, see a series of works in this direction [START_REF] Gerard-Varet | Well-posedness for the Prandtl system without analyticity or monotonicity[END_REF][START_REF] Li | Gevrey Class Smoothing Effect for the Prandtl Equation[END_REF][START_REF] Li | Well-posedness in Gevrey space for the Prandtl equations with non-degenerate critical points[END_REF]. In fact, Gevrey space is an intermediate space between the space of analytic functions and the Sobolev space. On one hand, Gevrey functions enjoy similar properties as analytic functions, and on the other hand, there are nontrivial Gevrey functions having compact support, which is different from analytic functions. As a preliminary attempt, in this work we study the vanishing viscosity limit of the solution of Navier Stokes equation to the solution of Euler equation in Gevrey space. Here we will concentrate on the torus, we hope this may give insights on the case when the domain has boundaries, which is a much more challenging problem. We introduce the functions spaces as follows. We usually suppress the vector symbol for functions when no ambiguity arise. Let L 2 (T 3 ) be the vector function space L 2 (T 3 ) = u = k∈Z 3 ûj e i k•x ; ûk = û-k , û0 = 0, j • ûj = 0, u 2 L 2 = k∈Z 3 |û k | 2 < ∞ , where ûk is the k th order Fourier coefficient of u, i = √ -1. The condition j • ûj = 0 means ∇•u = 0 in the weak sense, so it is the standard L 2 space with the divergence free condition. Let H r (T 3 ) be the vector periodic Sobolev space : for r ≥ 1, H r (T 3 ) = u = k∈Z 3 ûj e i k•x ; ûk = û-k , û0 = 0, j • ûj = 0 u 2 H r = k∈Z 3 (1 + |k| 2 ) r |û k | 2 < ∞ . Here the condition j • ûj = 0 means ∇ • u = 0, so it is the standard Sobolev space H r with the divergence free condition. Denote (•, •) the L 2 inner product of two vector functions. Let us define the fractional differential operator Λ = (-∆) 1/2 and the exponential operator e τ Λ 1/s as follows, Λu = j∈Z 3 |j| ûj e ij•x , e τ Λ 1/s u = j∈Z 3 e τ |j| 1/s ûj e ij•x . The vector Gevrey space G s r,τ for s ≥ 1, τ > 0, r ∈ R is G s r,τ (T 3 ) =    u ∈ H r (T 3 ); u 2 G s r,τ = j∈Z 3 |j| 2r e 2τ |j| 1/s |û j | 2 < ∞    , where the condition j • ûj = 0 means ∇ • u = 0, so it is sub-space of the Sobolev space H r (T 3 ). The following theorem is the main result of this paper. Theorem 1.1. Let r > 9 2 , τ 0 > 0, s ≥ 1. Assume that the initial data a ∈ G s r,τ0 (T 3 ), then there exists ν 0 > 0 and T > 0, τ (t) > 0 is a decreasing function such that, for any 0 < ν ≤ ν 0 , the Navier-Stokes equations (1.1) admit the solutions u ν ∈ L ∞ ([0, T ]; G s r,τ ( • ) (T 3 )); p ν ∈ L ∞ ([0, T ]; G s r+1,τ ( • ) (T 3 )), and the Euler equations (1.2) admit the solution u ∈ L ∞ ([0, T ]; G s r,τ ( • ) (T 3 )); p ∈ L ∞ ([0, T ]; G s r+1,τ ( • ) (T 3 )), Furthermore, we have the following convergence estimates : for any 0 < t ≤ T u ν (t, •) -u(t, •) G s r-1,τ (t) ≤ C √ ν, p ν (t, •) -p(t, •) G s r,τ (t) ≤ C √ ν, (1.3) where C is a constant depending on r, s, a and T . Remark 1.1. The uniform lifespan is 0 < T < T * where T * is the maximal lifespan of H r solutions. 
The uniform (with respect to ν) Gevrey radius τ (t) of the solution is τ (t) = 1 e C1t 1 τ0 + C2 C1 (e C1t -1) (1.4) where C 1 , C 2 are constants depending on r, s, a, T . Remark 1.2. Comparaison with the known works about Gevery regularity of Navier-Stokes equations and Euler equations [START_REF] Bardos | Domaine d'analycité des solutions de l'équation d'Euler dans un ouvert de R n[END_REF][START_REF] Biswas | Gevrey regularity of solutions to the 3-D Navier-Stokes equations with weighted ℓp initial data[END_REF][START_REF] Foias | Gevrey class regularity for the solutions of the Navier-Stokes equations[END_REF][START_REF] Kukavica | On the radius of analyticity of solutions to the three-dimensional Euler equations[END_REF][START_REF] Kukavica | On the analyticity and Gevrey-class regularity up to the boundary for the Euler equations[END_REF][START_REF] Kukavica | The domain of analyticity of solutions to the three-dimensional Euler equations in a half space[END_REF], the additional difficulties of this work is the uniform estimate of Gevery norm with respect to viscosity coefficients, and the estimate (1.3) with limit rates √ ν. The paper is organized as follows. In section 2, we will give the known results and preliminary lemmas. Section 3 consists of a priori estimate and the existence of the solutions in Gevrey space. The convergence in Gevrey space will be given in section 4. Premilinary lemmas We first recall the following classical result of Kato in [START_REF] Kato | Nonstationary flows of viscous and ideal fluids in R 3[END_REF]. Theorem 2.1. Let a ∈ H m (T 3 ) for m ≥ 3, then the following holds. [START_REF] Alexandre | Well-posedness of The Prandtl Equation in Sobolev Spaces[END_REF].There exists T > 0 depending on a H m but not on ν, such that (1.1) has a unique solution u ν ∈ C([0, T ], H m (T 3 )) . Furthermore, {u ν } is bounded in C([0, T ], H m (T 3 )) for all ν > 0. (2).For each t ∈ [0, T ], u(t) = lim ν→0 u ν (t) exists strongly in H m-1 (T 3 ) and weakly in H m (T 3 ), uniformly in t. u is the unique solution to (1.2) satisfying u ∈ C([0, T ], H m (T 3 )). Remark 2.1. The time T in Theorem 2.1 is actually depending on m and a H m , specifically T < 1 C m a H m , where C m is a constant depending on m. In fact, the constant C m was created by using the Leibniz formula and Sobolev embedding inequality when estimating the nonlinear term. So, if the initial data a ∈ G s r,τ0 (T Cm a H m has a positive lower bound. In this paper, we will pay many attention to the uniform lifespan T > 0 that depends on r, a H r . Remark 2.2. Compared with the known results Theorem 2.1, the additional difficulty arises on the estimate of the convecting term in Gevrey class setting. We need to use the decaying property of the radius of Gevrey class regularity to cancel the growth of the convecting term. We will use the following inequality, for any j, k ∈ Z 3 \{0}, we have |k -j| ≤ 2 |j| |k| . The proof is a simple result of triangle inequality which we omit the details here. And we will give two Lemmas which will be used in the proof of Theorem 1.1. Lemma 2.2. Given two real numbers ξ, η ≥ 1 and s ≥ 1, then the following inequality holds ξ 1 s -η 1 s ≤ C |ξ -η| |ξ| 1-1 s + |η| 1-1 s (2.1) where C is a positive constant depending only on s. Proof. The case for s = 1 is trivial. Let us consider the case for s > 1. Without loss of generality, we may assume ξ > η. Then (2.1) is equivalent with (ξ 1 s -η 1 s )(ξ 1-1 s + η 1-1 s ) ξ -η ≤ C. 
Then it suffices to show that ( η ξ ) 1-1 s -( η ξ ) 1 s 1 -η ξ ≤ C. By Theorem 42 in [START_REF] Hardy | Inequalities[END_REF], it can be obtained for fixed s > 1 ( η ξ ) 1-1 s -( η ξ ) 1 s 1 -η ξ ≤ max 1 - 2 s , 2 s -1 ≤ C Then the lemma 2.2 is proved. With the use of Lemma 2.2, we have the following estimate about the nonlinear term. Lemma 2.3. Let r > 9 2 , s ≥ 1 and τ > 0 is a constant. Then for any v ∈ G s r+1,τ (T 3 ), the following estimate holds, Λ r e τ Λ 1/s (v • ∇v), Λ r e τ Λ 1/s v ≤ C v H r v 2 G s r,τ + C v 2 H r v G s r,τ + Cτ u H r + Cτ 2 ( u H r + u G s r,τ ) u 2 G s r+ 1 2s ,τ , (2.2) where C is a constant depending only on r and s. Proof. By the definition of the vector function space G s r+1,τ (T 3 ), we have v = j∈Z 3 vj e ij•x and v0 = 0. Using Fourier series convolution property, one have v • ∇v = i k∈Z 3 j∈Z 3 [v j • (k -j)]v k-j e ik•x . Applying the operator Λ r e τ Λ 1/s on v • ∇v, one have Λ r e τ Λ 1/s (v • ∇v) = i k∈Z 3 j∈Z 3 [v j • (k -j)]v k-j |k| r e τ |k| 1/s e ik•x . And Λ r e τ Λ 1/s v = ℓ∈Z 3 |ℓ| r e τ |ℓ| 1/s vℓ e iℓ•x . Now we take the L 2 inner product of Λ r e τ Λ 1/s (v • ∇v) with Λ r e τ Λ 1/s v over T 3 . The orthogonality of the exponentials in L 2 implies Λ r e τ Λ 1/s (v • ∇v), Λ r e τ Λ 1/s v = i(2π) 3 k∈Z 3 j∈Z 3 [v j • (k -j)] (v k-j • v-k ) |k| 2r e 2τ |k| 1/s . The cancellation property of the convecting term implies 0 = v • ∇Λ r e τ Λ 1/s v, Λ r e τ Λ 1/s v = i(2π) 3 k∈Z 3 j∈Z 3 [v j • (k -j)] |k -j| r e τ |k-j| 1/s (v k-j • v-k ) |k| r e τ |k| 1/s . Then we have Λ r e τ Λ 1/s (v • ∇v), Λ r e τ Λ 1/s v = Λ r e τ Λ 1/s (v • ∇v) -v • ∇Λ r e τ Λ 1/s v, Λ r e τ Λ 1/s v = T 1 + T 2 , where T 1 = i(2π) 3 k∈Z 3 j∈Z 3 (|k| r -|k -j| r )e τ |k-j| 1/s [v j • (k -j)] (v k-j • v-k ) |k| r e τ |k| 1/s , and T 2 = i(2π) 3 k∈Z 3 j∈Z 3 |k| r (e τ |k| 1/s -e τ |k-j| 1/s ) [v j • (k -j)] (v k-j • v-k ) |k| r e τ |k| 1/s . Before we come to the estimate of T 1 and T 2 , we recall the following mean value theorem, for ∀ξ, η ∈ R + , there exists a constant 0 ≤ θ, θ ′ ≤ 1 such that ξ r -η r = r(ξ -η) (θξ + (1 -θ)η) r-1 -ξ r-1 + r(ξ -η)η r-1 = r(r -1)θ(ξ -η) 2 [θ ′ (θξ + (1 -θ)η) + (1 -θ ′ )η] r-2 + r(ξ -η)η r-1 . Then there exists a constant C depending only on r such that ||k| r -|k -j| r | ≤ C |j| 2 (|j| r-2 + |k -j| r-2 ) + C |j| |k -j| r-1 . From the inequality e ξ ≤ e + ξ 2 e ξ that holds for all ξ ∈ R, we can bounded the exponential e τ |k-j| 1/s by e + τ 2 |k -j| 2/s e τ |k-j| 1/s . Then T 1 can be bounded by |T 1 | ≤ C k∈Z 3 j∈Z 3 (|j| r + |j| 2 |k -j| r-2 ) |v j | |k -j| |v k-j | (e + τ 2 |k -j| 2/s e τ |k-j| 1/s ) × |v -k | |k| r e τ |k| 1/s + C k∈Z 3 j∈Z 3 |j| |v j | |k -j| r e τ |k-j| 1/s |v k-j | |v -k | |k| r e τ |k| 1/s = T 11 + T 12 + T 13 + T 14 + T 15 . With application of discrete Hölder inequality and Minkowski inequality, one can obtain the following estimates. For example, we give the details for T 11 , and the rest can be estimated in the same way, T 11 = eC k∈Z 3 j∈Z 3 |j| r |v j | |k -j| |v k-j | |k| r e τ |k| 1/s |v -k | ≤ C v G s r,τ    k∈Z 3   j∈Z 3 |j| r |v j | |k -j| |v k-j |   2    1/2 = C v G s r,τ   k∈Z 3 ℓ∈Z 3 |k -ℓ| r |v k-ℓ | |ℓ| |v ℓ | 2   1/2 ≤ C v G s r,τ k∈Z 3 |k -ℓ| 2r |v k-ℓ | 2 1/2 ℓ∈Z 3 |ℓ| (1 + |ℓ| 2 ) r/2 (1 + |ℓ| 2 ) r/2 |v ℓ | ≤ C v 2 H r v G s r,τ ℓ∈Z 3 |ℓ| 2 (1 + |ℓ| 2 ) r 1/2 ≤ C v 2 H r v G s r,τ , where C is a constant depending on r, e and for r > 9/2, the summation in the above ℓ∈Z 3 |ℓ| 2 (1+|ℓ| 2 ) r 1/2 is bounded by some constant depending on r. 
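Indeed, comparing with the integral of |x|²(1+|x|²)^{-r} over R³ (a standard check, added here for the reader), the series

$$ \sum_{\ell\in\mathbb{Z}^3\setminus\{0\}} \frac{|\ell|^{2}}{(1+|\ell|^{2})^{r}} $$

converges as soon as 2r − 2 > 3, that is r > 5/2, which holds under the standing assumption r > 9/2.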
Similarly with T 11 , we have T 12 = eC k∈Z 3 j∈Z 3 |j| 2 |v j | |k -j| r-1 |v k-j | |k| r e τ |k| 1/s |v -k | ≤ C v 2 H r v G s r,τ , and T 13 = Cτ 2 k∈Z 3 j∈Z 3 |j| r |v j | |k -j| 1+2/s |v k-j | |k| r e τ |k| 1/s |v -k | ≤ Cτ 2 v H r v 2 G s r,τ . Note that v0 = 0, s ≥ 1 in the summation, and |k -j| 1 2s ≤ C |k| 1 2s |j| 1 2s , we can similarly have T 14 = Cτ 2 k∈Z 3 j∈Z 3 |j| 2 |v j | |k -j| r-1+2/s e τ |k-j| 1/s |v k-j | |k| r e τ |k| 1/s |v -k | ≤ Cτ 2 k∈Z 3 j∈Z 3 |j| 2+ 1 2s |v j | |k -j| r+ 1 2s e τ |k-j| 1/s |v k-j | |k| r+ 1 2s e τ |k| 1/s |v -k | ≤ Cτ 2 v H r v 2 G s r+ 1 2s ,τ , and T 15 = C k∈Z 3 j∈Z 3 |j| |v j | |k -j| r e τ |k-j| 1/s |v k-j | |k| r e τ |k| 1/s |v -k | ≤ C v H r v 2 G s r,τ . Noting that v G s r,τ ≤ v G s r+ 1 2s ,τ , then T 13 ≤ T 14 . Thus we obtain |T 1 | ≤ C v 2 H r v G s r,τ + C v H r v 2 G s r,τ + Cτ 2 v H r v 2 G s r+ 1 2s ,τ . As for T 2 , we have T 2 = i(2π) 3 k∈Z 3 j∈Z 3 |k| r (e τ |k| 1/s -e τ |k-j| 1/s )[v j • (k -j)](v k-j • v-k ) |k| r e τ |k| 1/s = i(2π) 3 k∈Z 3 j∈Z 3 |k| r e τ |k-j| 1/s e τ (|k| 1/s -|k-j| 1/s ) -1 [v j • (k -j)] × (v k-j • v-k ) |k| r e τ |k| 1/s . We note that the inequality e ξ -1 ≤ |ξ| e |ξ| holds for ξ ∈ R. Then e τ (|k| 1/s -|k-j| 1/s ) -1 ≤ τ |k| 1/s -|k -j| 1/s e τ ||k| 1/s -|k-j| 1/s | . Since s ≥ 1, we have |k| 1/s -|k -j| 1/s ≤ |j| 1/s . Then we actually have e τ (|k| 1/s -|k-j| 1/s ) -1 ≤ τ |k| 1/s -|k -j| 1/s e τ |j| 1/s . By Lemma 2.2, we have |k| 1/s -|k -j| 1/s ≤ C |j| 1 |k| 1-1/s + |k -j| 1-1/s . Then T 2 can be bounded by the inequality |T 2 | ≤ C k∈Z 3 j∈Z 3 |k| r e τ |k-j| 1/s e τ (|k| 1/s -|k-j| 1/s ) -1 |k -j| |v j | |v k-j | × |v -k | |k| r e τ |k| 1/s ≤ C k∈Z 3 j∈Z 3 |k| r-1 2s e τ |k-j| 1/s τ |k| 1/s -|k -j| 1/s e τ |j| 1/s |k -j| |v j | |v k-j | × |v -k | |k| r+ 1 2s e τ |k| 1/s ≤ Cτ k∈Z 3 j∈Z 3 (|j| r-1 2s + |k -j| r-1 2s )e τ |k-j| 1/s |j| |k -j| |k| 1-1/s + |k -j| 1-1/s × e τ |j| 1/s |û j | |v k-j | |v -k | |k| r+ 1 2s e τ |k| 1/s ≤ T 21 + T 22 , where T 21 = Cτ k∈Z 3 j∈Z 3 |j| r+ 1 2s e τ |j| 1/s |v j | |k -j| (1 + τ |k -j| 1/s e τ |k-j| 1/s ) × |v k-j | |k| r+ 1 2s e τ |k| 1/s |v -k | , T 22 = Cτ k∈Z 3 j∈Z 3 |j| (1 + τ |j| 1/s e τ |j| 1/s ) |v j | |k -j| r+ 1 2s e τ |k-j| 1/s × |v k-j | |k| r+ 1 2s e τ |k| 1/s |v -k | . We have used the inequality |k| 1-1/s + |k -j| 1-1/s ≥ |j| 1-1/s and e ξ ≤ 1 + ξe ξ for ξ ∈ R + in the estimation of T 21 . With application of Hölder inequality and Minkowiski inequality, we have for T 21 , |T 21 | ≤ Cτ v H r v 2 G s r+ 1 2s ,τ + Cτ 2 v G s r,τ v 2 G s r+ 1 2s ,τ . Symmetrically, one has a same bound for T 22 , then for T 2 , |T 2 | ≤ Cτ v H r v 2 G s r+ 1 2s ,τ + Cτ 2 v G s r,τ v 2 G s r+ 1 2s ,τ . Then we obtain Λ r e τ Λ 1/s (v • ∇v), Λ r e τ Λ 1/s v ≤ |T 1 | + |T 2 | ≤ C v 2 H r v G s r,τ + C v H r v 2 G s r,τ + Cτ 2 v G s r,τ v 2 G s r+ 1 2s ,τ + Cτ (1 + τ ) v H r v 2 G s r+ 1 2s ,τ , which finishes the proof of Lemma 2.3. Uniform existence of solutions In this section, we will first show the existence of Gevrey class solutions u ν to Navier Stokes equations (1.1). And the existence of Gevrey class solution u to Euler equations (1.2) can be obtained similarly. The method of the proof are based on Galerkin approximation. Before that, we first consider the following equivalent equation for Navier-Stokes equation, d dt u ν + νAu ν + P(u ν • ∇u ν ) = 0, u ν | t=0 = Pa. 
(3.1) where A = -P∆ is the well-known Stokes operator and P is the Leray projector which maps a vector function v into its divergence free part v 1 , such that v = v 1 +∇q and ∇ • v 1 = 0, q is a scalar function and (v 1 , ∇q) = 0. Similarly for Euler equation, we have the following equivalent form, d dt u ν + P(u ν • ∇u ν ) = 0, u ν | t=0 = Pa. (3.2) We then recall some properties of the Stokes operator A, which are known in [Chapter 4 in [START_REF] Constantin | Navier-Stokes Equations[END_REF]]. Proposition 3.1. The Stokes operator A is symmetric and selfadjoint, moreover, the inverse of the Stokes operator, A -1 , is a compact operator in L 2 . The Hilbert theorem implies there exists a sequence of positive numbers λ j and an orthonormal basis w j of L 2 , which satisfies Aw j = λ j w j , 0 < λ 1 < . . . < λ j ≤ λ j+1 ≤ . . . , lim j→∞ λ j = ∞. Moreover, in the case of T 3 = (-π, π) 3 , the sequence of eigenvector functions w , j s and eigenvalues λ , j s are the sequences of functions w k,j and numbers λ k,j , w k,j (x) = e j - k j k |k| 2 e ik•x , λ k,j = |k| 2 , where k = (k 1 , k 2 , k 3 ) ∈ Z 3 , k = 0, j = 1, 2, 3 and {e j } j=1,2,3 are the canonical basis in R 3 . So we know that each w j are not only in L 2 , but also in G s r,τ for ∀r > 0. Now we will show that there exists a solution to equation (3.1) for a ∈ G s r,τ with r > 9/2, s ≥ 1, and τ (t) > 0 is a differentiable decreasing function of t. To this end, we first prove a priori estimate in the following Proposition. Proposition 3.2. Let r > 9/2, s ≥ 1, a ∈ G s r,τ0 and τ (t) > 0 is a differentiable decreasing function of t defined on [0, T ] with τ (0) = τ 0 > 0, where 0 < T < T * and T * is the maximal time of H r solution to (3.1) with respect to the initial data a. Let u ν (t, x) ∈ L ∞ ([0, T ]; G s r,τ (•) (T 3 )) ∩ L 2 ([0, T ]; G s r+1,τ (•) (T 3 )) be the solution to (3.1), then the following a priori estimates holds, u ν (t, •) G s r,τ (t) ≤ G T , 0 < t ≤ T, ν t 0 u ν (s, •) 2 G s r+1,τ (s) ds ≤ M T , 0 < t ≤ T . With the same assumptions as above, let u(t, x) ∈ L ∞ ([0, T ]; G s r+1,τ (•) (T 3 )) be the solution to (3.2), we also have u(t, •) G s r,τ (t) ≤ G T , 0 < t ≤ T, Furthermore the uniform radius τ (t) is given by τ (t) = 1 e C1t 1 τ0 + C2 C1 (e C1t -1) , where C, C 1 , C 2 , G T , M T are constants depending on a, r, s, T . Proof. Applying Λ r e τ Λ 1/s on both sides of (3.1) and taking the L 2 inner product of both sides with Λ r e τ Λ 1/s u ν , one has 1 2 d dt u ν (t, •) 2 G s r,τ (t) + ν u ν (t, •) 2 G s r+1,τ (t) = τ ′ (t) u ν (t, •) 2 G s r+ 1 2s ,τ -Λ r e τ Λ 1/s (u ν • ∇u ν ), Λ r e τ Λ 1/s u ν , (3.3) where we use the fact that P commutes with Λ r e τ Λ 1/s and P is symmetric. Now we consider the right hand side of (3.3). By (2.2) in Lemma 2.3, we have from (3.3), for convenience, we sometimes suppress the dependence of u ν and τ in t, 1 2 d dt u ν 2 G s r,τ + ν u ν 2 G s r+1,τ ≤ C u ν H r u ν 2 G s r,τ + C u ν 2 H r u ν G s r,τ + (τ ′ + Cτ u ν H r + Cτ 2 u ν H r + Cτ 2 u ν G s r,τ ) u ν 2 G s r+ 1 2s ,τ , (3.4) where C is a constant depending on r, s. Now if the radius of Gevrey class τ (t) is smooth and decreaseing fast enough such that the following inequality holds, τ ′ + Cτ u ν H r + Cτ 2 u ν H r + Cτ 2 u ν G s r,τ ≤ 0. (3.5) Then (3.4) implies 1 2 d dt u ν 2 G s r,τ + ν u ν 2 G s r+1,τ ≤ C u ν H r u ν 2 G s r,τ + C u ν 2 H r u ν G s r,τ . 
(3.6) As ν > 0, it can be obtained directly from (3.6), d dt u ν G s r,τ ≤ C u ν H r u ν G s r,τ + C u ν 2 H r , (3.7) By Grownwall's inequality in (3.7), we have for 0 < t < T * , u ν (t) G s r,τ (t) ≤ g(t) a G s r,τ 0 + t 0 C g(s) -1 u ν (s) 2 H r ds A(t), (3.8) where g(t) = e t 0 C u ν (s,•) H r ds and T * is the maximal time interval of H r solution. It has been known that T * is independent of ν. Moreover, it follows the a priori estimate for H r solution for Navier Stokes equation in [START_REF] Majda | Vorticity and incompressible flow[END_REF], d dt u ν (t, •) 2 H r ≤ C u ν (t, •) H r u ν (t, •) 2 H r , 0 < t < T * , (3.9) where C is a constant depending on r. Then, on one side we have u ν (t) 2 H r ≤ C g(t) a 2 H r , 0 < t < T * . (3.10) And on the other side, let 0 < T < T * , then for 0 < t < T , u ν (t, •) H r ≤ a H r 1 -Ct a H r ≤ a H r 1 -CT a H r C T . (3.11) With (3.9),(3.10) and (3.11), we have u ν (t, •) G s r,τ (t) ≤ A(t) = e t 0 C u ν (s) H r ds a G s r,τ 0 + t 0 C g(s) -1 u ν (s) 2 H r ds ≤ e CCT t a G s r,τ 0 + C a 2 H r t ≤ e CCT T a G s r,τ 0 + C a 2 H r T G T , 0 ≤ t ≤ T. (3.12) In fact a sufficient condition for (3.5) to hold is τ ′ (t) + Cτ (t)C T + Cτ 2 (t)C T + Cτ 2 (t)G T = 0. (3.13) Then solving the ordinary differential equation (3.13), τ (t) = 1 e CCT t 1 τ0 + CCT +CGT CCT (e CCT t -1) . (3.14) We can obtain (1.4) by arranging the constants in (3.14), τ (t) = 1 e C1t 1 τ0 + C2 C1 (e C1t -1) , (3.15) where C 1 , C 2 are constants depending on r, s, T, a. Then (3.15) proves (1.4) in Remark 1.1. Integrating (3.6) form 0 to t, we have, for 0 < t < T , ν t 0 u ν (s, •) 2 G s r+1,τ (s) ds ≤ M T , 0 < t < T < T * , (3.16) where M T depends on T, a, r, s, τ 0 . It should be noted that all of the above estimates are independent of ν, so we let ν = 0 in (3.4), and proceed exactly as above, then we can obtain similar results for the a priori estimate for solution u to equation (3.2). With the estimates in Proposition 3.2, we can implement the standard Faedo-Fourier-Galerkin approximation as in [START_REF] Lions | Quelques méthodes de rèsolution des problémes aux limites non linèaires[END_REF][START_REF] Temam | Navier-Stokes equations and Nonlinear Functional Analysis[END_REF] to prove the existence of such u ν and u in the function space of Gevrey class s G s r,τ . Theorem 3.1. There exists a unique solution u ν to (3.1) such that u ν ∈ L ∞ ([0, T ], G s r,τ ( • ) ). Similarly there exists a unique solution u to (3.2) such that u ∈ L ∞ ([0, T ], G s r,τ ( • ) ). Proof. The method of proof of existence is based on Galerkin approximations and the priori estimate in Proposition 3.2. For a fixed positive integer n, we will look for a sequence of functions u ν n (t, •) with n ∈ N in the form u ν n (t, x) = n j=1 α ν j,n (t)ω j , where {ω j } ∞ j=1 are the orthonormal basis in Proposition 3.1. Let W n be the space spanned by {w 1 , w 2 , • • • , w n }, and χ n is the orthogonal projector in L 2 into W n . The approximating equation is as follows,    d dt u ν n + νAu ν n + χ n P(u ν n • ∇u ν n ) = 0, u ν n | t=0 = χ n a (3.17) Taking the L 2 inner product with w j , j = 1, 2, . . . , n, then the equation system (3.17) is equivalent with the following ordinary differential equation system,      d dt α ν j,n (t) + νλ j α ν j,n + k,ℓ b(ω k , ω ℓ , ω j )α ν k,n α ν ℓ,n = 0, j = 1, 2, . . . , n, α ν j,n (0) = (a, ω j ) , (3.18) where b(ω k , ω ℓ , ω j ) = (ω k • ∇ω ℓ , ω j ) satisfying b(ω k , ω ℓ , ω j ) = -b(ω k , ω j , ω ℓ ) . 
By the standard ordinary differential equation theory, there exists a solution to (3.18) local in time interval [0, T n ). In order to show that T n can be extended to T , we multiply with α ν j,n (t) on both sides of (3.18) and take sum over 1 ≤ j ≤ n. We have 1 2 d dt   n j=1 α ν j,n (t) 2   + νλ j j α ν j,n (t) 2 = 0, (3.19) because j,k,ℓ b(ω k , ω ℓ , ω j )α ν k,n α ν ℓ,n α ν j,n = 0. Moreover, from (3.19), we have u ν n (t, •) L 2 = n j=1 α ν j,n (t) 2 ≤ n j=1 α ν j,n (0) 2 ≤ a 2 L 2 , ∀t > 0 Then we have for every T n , it can be extended to arbitrary large, so it can be extended to T . And we also obtain u ν n remains bounded in L ∞ [0, T ]; L 2 , ∀n. Moreover, we obtain a solution u ν n (t, x) = n j=1 α ν j,n ( t)w j (x) to (3.17) and we know that u ν n (t, x) ∈ G s r+1,τ (t) for 0 < t < T because it is only finite sum of w j for fixed n. We then want to obtain the uniform Gevrey class norm bound for u ν n . To this end, we first apply Λ r e τ Λ 1/s on both sides of (3.17) and then take the L 2 inner product with Λ r e τ Λ 1/s to obtain 1 2 d dt u ν n (t, •) 2 G s r,τ (t) + ν u ν n (t, •) 2 G s r+1,τ = τ ′ (t) u ν n (t, •) 2 G s r+ 1 2s ,τ -Λ r e τ Λ 1/s χ n P(u ν n • ∇u ν n ), Λ r e τ (t)Λ 1/s u ν n . We note that the operator χ n and P commute with Λ r e τ Λ 1/s , and they are symmetric, then Λ r e τ Λ 1/s χ n P(u ν n • ∇u ν n ), Λ r e τ (t)Λ 1/s u ν n = Λ r e τ Λ 1/s (u ν n • ∇u ν n ), Λ r e τ (t)Λ 1/s u ν n . With the arguments in Proposition 3.2, we have, u n ν (t, •) G s r,τ (t) ≤ G T , 0 < t ≤ T < T * , ∀n. Proof. To study the pressure p ν , the existence is obvious results from standard elliptic equation theory. We consider the regularity of p ν (t, x). First we apply the operator Λ r e τ Λ 1/s on both sides of (3.23) and then take L 2 inner product with Λ r e τ Λ 1/s p ν to obtain -∆p ν , Λ 2r e 2τ Λ 1/s p ν = ∇ • (u ν • ∇u ν ), Λ 2r e 2τ Λ 1/s p ν . (3.24) Here if we can write p ν (t, x) = j∈Z 3 pν j e ij•x , then the left side of (3.24) is -∆p ν , Λ 2r e 2τ Λ 1/s p ν = j∈Z 3 |j| 2r+2 e 2τ |j| 1/s pν j 2 = p ν 2 G s r+1,τ . The right hand side of (3.24) can be bounded by ∇ • (u ν • ∇u ν ), Λ 2r e 2τ Λ 1/s p ν = (2π) 3 k∈Z 3 j∈Z 3 |k| r-1 ûν j • (k -j) (k • ûν k-j )p ν -k |k| r+1 e 2τ |k| 1/s = (2π) 3 k∈Z 3 j∈Z 3 |k| r-1 ûν j • (k -j) (j • ûν k-j )p ν -k |k| r+1 e 2τ |k| 1/s ≤ C p ν G s r+1,τ ×      k∈Z 3   j∈Z 3 (|j| r-1 + |k -j| r-1 ) |j| |k -j| ûν j e τ |j| 1/s e τ |k| 1/s ûν k-j   2      1/2 ≤ C u ν 2 G s r,τ p ν G s r+1,τ , where C is a constant depending on r. Then from above estimate, we obtain Then using the same arguments as above, one can obtain the same results for p. p ν G s r+1,τ ≤ C u ν 2 G s r,τ . From (3.8), we obtain p ν (t) G s r+1,τ ≤ CA(t) 2 ≤ CG 2 T , t ∈ [0, T ]. Convergence of solutions in Gevrey space In the previous Section we have proved the existence of solutions to the Navier-Stokes equation and Euler equation in Gevrey class space. In this Section we will show the vanishing viscosity limit of Navier-Stokes equation in Gevrey class space. Moreover, we can obtain the converging rate with respect to ν. Theorem 4.1. Let u ν , p ν and u, p are the solutions we have obtained in the previous Section, where u ν , u ∈ L ∞ ([0, T ], G s r,τ ( • ) ), p ν , p ∈ L ∞ ([0, T ], G s r+1,τ ( • ) ). Then the following estimates hold, u ν (t, •) -u(t, •) G s r-1,τ (t) ≤ C √ ν, p ν (t, •) -p(t, •) G s r,τ (t) ≤ C √ ν, (4.1) for any 0 < t ≤ T , where C is a constant depending on r, s, a, T . Proof. 
Let us first consider the new equation for w = (u ν -u) and p = p ν -p,        ∂w ∂t -ν∆u ν + w • ∇u ν + u • ∇w + ∇p = 0, ∇ • w = 0, w| t=0 = 0. (4.2) Then we apply the operator Λ r-1 e τ Λ 1/s on both sides of (4.2) and take the L 2 inner product with Λ (r-1) e τ Λ 1/s w on both sides to obtain, 1 2 d dt w(t) 2 G s r-1,τ = ν Λ (r-1) e τ Λ 1/s ∆u ν , Λ (r-1) e τ Λ 1/s w + τ ′ w 2 G s r-1+ 1 2s ,τ -Λ (r-1) e τ Λ 1/s (w • ∇u ν ), Λ (r-1) e τ Λ 1/s w -Λ (r-1) e τ Λ 1/s (u • ∇w), Λ (r-1) e τ Λ 1/s w , (4.3) where the term Λ (r-1) e τ Λ 1/s ∇p, Λ (r-1) e τ Λ 1/s w disappeares since w = u ν -u is divergence free. It remains to estimate the right hand side of (4.3), for convenience, we denote I 1 = ν Λ (r-1) e τ Λ 1/s ∆u ν , Λ (r-1) e τ Λ 1/s w , I 2 = Λ (r-1) e τ Λ 1/s (w • ∇u ν ), Λ (r-1) e τ Λ 1/s w , I 3 = Λ (r-1) e τ Λ 1/s (u • ∇w), Λ (r-1) e τ Λ 1/s w . Using the discrete Hölder inequality, one can obtain |I 1 | = ν Λ (r-1) e τ Λ 1/s ∆u ν , Λ (r-1) e τ Λ 1/s w = ν -(2π) 3 k∈Z 3 |k| 2r e 2τ |k| 1/s (û ν k • ŵ-k ) ≤ ν(2π) 3 k∈Z 3 |k| r+1 e τ |k| 1/s |û ν k | |k| r-1 e τ |k| 1/s | ŵ-k | ≤ Cν u ν G s r+1,τ w G s r-1,τ . (4.4) Then we have |I 1 | ≤ Cν u ν G s r+1,τ w G s r-1,τ . As for I 2 , we first write it into the sum of their Fourier coefficients, I 2 = i(2π) 3 k∈Z 3 j∈Z 3 [ ŵj • (k -j)] (û ν k-j • ŵ-k ) |k| 2(r-1) e 2τ |k| 1/s . Since r > 9/2, there exists a constant C such that |k| r-1 ≤ C |j| r-1 + |k -j| r-1 , and s ≥ 1 implies e τ |k| 1/s ≤ e τ |j| 1/s e τ |k-j| 1/s . Thus I 2 can be bounded by |I 2 | ≤ C k∈Z 3 j∈Z 3 (|j| r-1 + |k -j| r-1 ) | ŵj | |k -j| ûν k-j e τ |j| 1/s e τ |k-j| 1/s × | ŵ-k | |k| r-1 e τ |k| 1/s . Then by discrete Hölder inequality and Minkowski inequality, we obtain |I 2 | ≤ C u ν G s r,τ w 2 G s r-1,τ . (4.5) As for I 3 , where I 3 = i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k| 2(r-1) e 2τ |k| 1/s . Here again the cancellation property implies 0 = u • ∇Λ r-1 e τ Λ 1/s w, Λ r-1 e τ Λ 1/s w = i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k -j| r-1 e τ |k-j| 1/s |k| r-1 e τ |k| 1/s . Then we have I 3 = i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k| 2(r-1) e 2τ |k| 1/s -i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k -j| r-1 e τ |k-j| 1/s |k| (r-1) e τ |k| 1/s = R 1 + R 2 , where we denote R 1 = i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k| r-1 -|k -j| r-1 e τ |k| 1/s × |k| r-1 e τ |k| 1/s , and R 2 = i(2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)]( ŵk-j • ŵ-k ) |k -j| r-1 e τ |k| 1/s -e τ |k-j| 1/s × |k| r-1 e τ |k| 1/s . Here we used a different strategy in the split of |k| r-1 e τ |k| 1/s -|k -j| r-1 e τ |k-j| 1/s as in Lemma 2.3 to estimate R 1 and R 2 . With use of the following mean value theorem, there exists a constant θ ∈ (0, 1) such that |k| r-1 -|k -j| r-1 = (r -1)(|k| -|k -j|) θ |k| + (1 -θ) |k -j| r-2 ≤ C |j| (|k| r-2 + |k -j| r-2 ). Then by discrete Hölder inequality and Minkowski inequality, we have |R 1 | ≤ C k∈Z 3 j∈Z 3 |û j | |k -j| | ŵk-j | | ŵ-k | |j| (|j| r-2 + |k -j| r-2 ) × e τ |j| 1/s e τ |k-j| 1/s |k| r-1 e τ |k| 1/s ≤ C u G s r,τ w 2 G s r-1,τ . As for R 2 , we use the inequality e ξ -1 ≤ |ξ| e |ξ| for ∀ξ ∈ R and Lemma 2.2, |R 2 | ≤ C k∈Z 3 j∈Z 3 |û j | | ŵk-j | | ŵ-k | |k -j| r e τ |k-j| 1/s e τ (|k| 1/s -|k-j| 1/s ) -1 × |k| r-1 e τ |k| 1/s ≤ Cτ k∈Z 3 j∈Z 3 |û j | | ŵk-j | | ŵ-k | |k -j| r-1 e τ |k-j| 1/s e τ |j| 1/s |j| |k -j| |k| 1-1 s + |k -j| 1-1 s × |k| r-1 e τ |k| 1/s . 
For here we use the inequality |k -j| ≤ 2 |k| |j| for k, j = 0, then we have |k -j| |k| 1-1 s + |k -j| 1-1 s ≤ |k -j| 1/s ≤ C |k -j| 1 2s |k| 1 2s |j| 1 2s , where C is a constant depending on s. Thus R 2 can be bounded as follows, |R 2 | ≤ Cτ k∈Z 3 j∈Z 3 |û j | | ŵk-j | | ŵ-k | |k -j| r-1+ 1 2s e τ |k-j| 1/s e τ |j| 1/s |j| 1+ 1 2s × |k| r-1+ 1 2s e τ |k| 1/s ≤ Cτ k∈Z 3 j∈Z 3 |û j | | ŵk-j | | ŵ-k | |k -j| r-1+ 1 2s e τ |k-j| 1/s (1 + τ |j| 1/s e τ |j| 1/s ) × |j| 1+ 1 2s |k| r-1+ 1 2s e τ |k| 1/s ≤ Cτ u H r w 2 G s r-1+ 1 2s ,τ + Cτ 2 u G s r,τ w 2 G s r-1+ 1 2s ,τ , where we use the inequality e ξ ≤ 1 + ξe ξ for ∀ξ ∈ R + with respect to e τ |j| 1/s and also the discrete Hölder inequality and Minkowski inequality in the above inequality. Then we have , 0 < t ≤ T. This proves the first estimate of (4.1) by arranging the constant. Then we want to estimate p ν (t) -p(t) in the norm of G s r,τ . To do so, we first take the divergence of both sides of (4.2) to obtain the following elliptic equation, -∆p = ∇ • (w • ∇u ν ) + ∇ • (u • ∇w). |I 3 | ≤ C u G s r,τ w 2 G s r-1,τ + Cτ u H r w 2 G s r-1+ 1 2s ,τ + Cτ 2 u G s r,τ w 2 G s r-1+ 1 2s ,τ . ( 4 (4.10) Then we first apply the operator Λ r-1 e τ Λ 1/s on both sides of (4.10) and then take the L 2 inner product with Λ (r-1) e τ Λ 1/s p on both sides to obtain, (2π) 3 p 2 G s r,τ = Λ r-1 e τ Λ 1/s ∇ • (w • ∇u ν ), Λ (r-1) e τ Λ 1/s p + Λ r-1 e τ Λ 1/s ∇ • (u • ∇w), Λ (r-1) e τ Λ 1/s p = i 2 (2π) 3 It remains to estimate P 1 and P 2 in (4.11). For simplicity we only compute P 1 , since P 2 can be estimated in the same way. For the pressure p(t, x) of Euler equation (1.2), one can first obtain the following elliptic equation, -∆p = ∇ • (u • ∇u). (4. 8 ) 2 T t 1 / 2 e 8212 Since w(0) = 0 and Grownwall's inequality, (4.8) impliesw(t, •) G s r-1,τ (t) ≤ e CGT t t 0 ν u ν (s, •) G s r+1,τ (s) ds, 0 < t ≤ T.Recalling from (3.16) we have for 0 < t ≤ T ,t 0 ν u ν (s, •) 2 G s r+1,τ (s) ds ≤ M T , t ∈ (0, T ].With Hölder inequality, we havet 0 ν u ν (s, •) G s r+1,τ (s) CGT t , 0 < t ≤ T.(4.9) k∈Z 3 j∈Z 3 [+ i 2 (2π) 3 k∈Z 3 j∈Z 3 [= P 1 + P 2 ,P 1 = i 2 (2π) 3 k∈Z 3 j∈Z 3 [ 3 k∈Z 3 j∈Z 3 [ 323312123333 ŵj • (k -j)](k • ûν k-j ) |k| 2(r-1) e 2τ |k| 1/s p-k û j • (k -j)](k • ŵk-j ) |k| 2(r-1) e 2τ |k| 1/s p-k ŵj • (k -j)](k • ûν k-j ) |k| 2(r-1) e 2τ |k| 1/s p-k , P 2 = i 2 (2π) û j • (k -j)](k • ŵk-j ) |k| 2(r-1) e 2τ |k| 1/s p-k . |P 1 | 3 k∈Z 3 j∈Z 3 [≤ (2π) 3 k∈Z 3 j∈Z 3 C 2 T t 1 / 2 G 13333212 = (2π) ŵj • (k -j)](j • ûν k-j ) |k| 2(r-1) e 2τ |k| 1/s p-k (|j| r-2 + |k -j| r-2 )e τ |j| 1/s e τ |k-j| 1/s |k -j| × |j| | ŵj | ûν k-j |k| r e τ |k| 1/s p-k ≤ C w G s rthe fact k • ûν k-j = j • ûν k-jfrom the divergence free condition. And, similarly, we can obtain|P 2 | ≤ (2π) 3 k∈Z 3 j∈Z 3 [û j • (k -j)](j • ŵk-j ) |k| 2(r-1) e 2τ |k| 1/s p-k ≤ C w G s r12) and (4.13) into (4.11), we obtainp(t, •) G s r,τ (t) ≤ CG T w(t, •) G s r-1,τ (t), 0 < t ≤ T. T e CGT t , 0 ≤ t ≤ T. This proves (4.1) by arranging the constants. Thus we have proven Theorem 1.1. 3 ), then we have a ∈ H m σ , ∀m, because there exists a constant C m,τ0,s such that a H m ≤ C m,τ0,s a G s . But we can't directly obtain an uniform bound for C m a H m by the Gevrey norm r,τ 0 of a G s when m is very large. Then we can't say that, if m goes to infinity, r,τ 0 1 Cτ u H r + Cτ 2 u G sBy the choice of τ in (3.13), and noting that (3.8),(3.9),(3.10),(3.11) also hold for Euler equation (1.2). 
Then choosing the appropriate constant C, one has τ ′ + Cτ u H r +Cτ 2 u H r +Cτ 2 u G s r,τ ≤ 0, then we can obtain from (4.7) and (3.12), d dt w(t, •) G s r-1,τ (t) ≤ ν u ν (t, •) G s r+1,τ (t) + CG T w(t, •) G s r-1,τ (t) .6) Substituting (4.4), (4.5) and (4.6) into (4.3), we obtain 1 2 d dt w 2 G s r-1,τ ≤ ν u ν G s r+1,τ w G s r-1,τ + C u G s r,τ r,τ w w 2 G s r-1,τ G s 2s r-1+ 1 2 ,τ . (4.7) + τ ′ + Acknowledgements. The research of the second author was supported by NSF of China(11422106) and Fok Ying Tung Education Foundation (151001), the research of the first author and the last author is supported partially by "The Fundamental Research Funds for Central Universities of China". Thus u ν n remains bounded in L ∞ ([0, T ]; G s r,τ ( • ) ). (3.20) In order to pass to the limit in the nonlinear term using a compactness theorem, we need to estimate on du ν n dt . From (3.17), we have By (3.20) and (3.21), noting that H r (T 3 ) is compactly embedded in L 2 (T 3 ) from Rellich-Kondrachov Compactness Theorem in [START_REF] Evans | Partial Differential Equations[END_REF], a compactness theorem in [START_REF] Lions | Quelques méthodes de rèsolution des problémes aux limites non linèaires[END_REF][START_REF] Temam | Navier-Stokes equations and Nonlinear Functional Analysis[END_REF] indicates the existence of the limit For Euler equations, one can take very similar approach to obtain the existence of solution in Gevrey class space and we omit the details here. Thus we prove Proposition 3.1. It remains to show that u ν is the solution of (1.1). In fact it can be obtained from (3.22) that So there exists a scalar function p ν such that where p ν is unique up to a constant, and p ν satisfies with periodic boundary condition. For the regularity of the pressure p ν (t, x), we have the following Proposition. Proposition 3.3. Let p ν satisfies (3.23) , then the following estimate holds, ≤ CG 2 T , 0 < t ≤ T. And for the pressure p(t, x) in (1.2), we also have T , 0 < t ≤ T , where C, T, G T are defined in Proposition 3.2. Chao-Jiang Xu, School of Mathematics and Statistics, Wuhan university 430072, Wuhan, P.R. China, and, Université de Rouen, CNRS UMR 6085, Laboratoire de Mathématiques, 76801 Saint-Etienne du Rouvray, France E-mail address: [email protected]
40,127
[ "1002347", "858992", "856943" ]
[ "486389", "486389", "91" ]
01474204
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474204/file/978-3-642-36796-0_10_Chapter.pdf
Pontus Johnson Johan Ullberg email: [email protected] Markus Buschle email: [email protected] Ulrik Franke email: [email protected] Khurram Shahzad email: [email protected] P 2 AMF: Predictive, Probabilistic Architecture Modeling Framework Keywords: probabilistic inference, system properties, prediction, Object Constraint Language, UML, class diagram, object diagram In the design phase of business and software system development, it is desirable to predict the properties of the system-to-be. Existing prediction systems do, however, not allow the modeler to express uncertainty with respect to the design of the considered system. In this paper, we propose a formalism, the Predictive, Probabilistic Architecture Modeling Framework (P 2 AMF), capable of advanced and probabilistically sound reasoning about architecture models given in the form of UML class and object diagrams. The proposed formalism is based on the Object Constraint Language (OCL). To OCL, P 2 AMF adds a probabilistic inference mechanism. The paper introduces P 2 AMF, describes its use for system property prediction and assessment, and proposes an algorithm for probabilistic inference. Introduction As an alternative to business and software service development by trial-anderror, it is desirable to predict the properties of envisioned services already in the early phases of the lifecycle. Such predictions may guide developers, allowing them to explore and compare design alternatives at a low cost. Business and software developers routinely argue for or against alternative design choices based on the expected impact of those choices on, e.g., the future system's efficiency, availability, security or functional capabilities. However, experience-based predictions made by individual developers have drawbacks in terms of transparency, consistency, cost and availability. Therefore, formal approaches to such predictions are highly desirable. In addition to prediction, system property analysis methods may be employed to assess properties of existing systems that are difficult to measure directly, e.g. in the case of information security. From an enterprise interoperability perspective, one common approach to the field is the use of various forms of architecture models [START_REF] Ullberg | A language for interoperability modeling and prediction[END_REF]. The abstraction in these models allows for quantitative reasoning about various issues. Incorporating the ability to perform quantitative analysis and prediction would further improve the reasoning. Most current system architecture frameworks, however, lack modeling languages that support interoperability analysis [START_REF] Chen | Architectures for enterprise integration and interoperability: Past, present and future[END_REF]. In this article, we present P 2 AMF, a framework for generic business and software system analysis. P 2 AMF is based on the Object Constraint Language (OCL), which is a formal language used to describe expressions on models in the Unified Modeling Language (UML) [START_REF]Object Management Group: Object Constraint Language[END_REF]. The most prominent difference between P 2 AMF and OCL is the probabilistic nature of P 2 AMF. P 2 AMF allows the user to capture uncertainties in both attribute values and model structure. OCL for system property predictions In business and software development, many system qualities are worth predicting. 
These include theoretically well-established non-functional properties such as performance [START_REF]Object Management Group: UML Profile for MARTE: Modeling and Analysis of Real-Time Embedded Systems[END_REF]. There are also properties where consensus on the theoretical base has yet to materialize, e.g. in the case of security [START_REF] Lodderstedt | SecureUML: A UML-Based Modeling Language for Model-Driven Security[END_REF], and interoperability [START_REF] Ullberg | A language for interoperability modeling and prediction[END_REF]. Finally, there are many functional capabilities and non-functional properties that are so specific to a certain context that the analysis approach needs to be tailored for each instance, e.g. the coverage of the dictionary in a word processor application or the acoustic faithfulness of instruments in a music production application. The multitude of potentially interesting analyses prompts the need for generic languages and frameworks for system property analysis. An additional justification for such formalisms is the integrated analysis of multiple properties that they enable. Multi-attribute analysis provides a base for structured system quality trade-off, and the trade-off between different properties is a key element in any design activity. To contain the analysis algorithms of multiple system properties, a framework needs to feature an appropriate and sufficiently flexible language. Many system property analysis approaches are based on logic, arithmetic operations and structural aspects of the system [START_REF] Hansson | A Logic for Reasoning about Time and Reliability[END_REF] [START_REF] Ritchey | Using model checking to analyze network vulnerabilities[END_REF]. The dominating notation for software modeling today is the Unified Modeling Language (UML) [START_REF]Object Management Group: OMG Unified Modeling Language (OMG UML)[END_REF]. Any generic framework for quality analysis therefore benefits from UML compatibility, allowing models to be shared between design and analysis. The Object Constraint Language (OCL) [START_REF]Object Management Group: Object Constraint Language[END_REF], satisfies these requirements. OCL incorporates predicate logic, arithmetics and set theory, making it sufficiently expressive to contain most system property analysis needs. As a part of UML, OCL is also highly interoperable. OCL was developed with normative purposes in mind, allowing the designer to constrain future implementation to conform not only to UML models, but also to OCL statements. However, OCL is also suitable for the descriptive (in particular predictive) purposes of system analysis, [START_REF] Lodderstedt | SecureUML: A UML-Based Modeling Language for Model-Driven Security[END_REF]. Still, one increasingly important characteristic of modern business and software systems is not captured by OCL: uncertainty. As the business and IT-systems grows older, our knowledge of them becomes less certain. There are several reasons for this development. Firstly, business and software systems are rapidly increasing in complexity; they are growing in size as well as in the complexity of the underlying technologies. Secondly, as systems and components grow older, so do the people who developed them, and finally they will no longer be available. Combined with the poor state of documentation that plagues many projects, this adds to our uncertainty. Thirdly, the use of externally developed and maintained software is increasing. 
To allow for explicit consideration of uncertainty in the analysis of nonfunctional properties, the framework presented in this paper, P 2 AMF, is capable of expressing and comprehensively treating uncertainty in UML models. In P 2 AMF, attributes are random variables. P 2 AMF also allows the explicit modeling of structural uncertainty, i.e. uncertainty regarding the existence of objects and links. Indeed, as opposed to comparable formalisms (cf. Section 4 on related work), P 2 AMF features probabilistic versions of logic, arithmetic and set operators, properly reflecting both structural uncertainty and the uncertainty of attribute values. This article unfolds as follows: In Section 2, P 2 AMF is described from the perspective of the user; in this section, the contribution of the article is provided in its most accessible form. The section also include references to some current applications, ranging from business aspects such as organizational structure to more IT related aspects. The most challenging part of the development of P 2 AMF was the extension of OCL to a probabilistic context. The proposed inference approach is presented in Section 3. In Section 4, related work is considered. Finally, in Section 5 conclusions are described. Introduction to P AMF In this section, P 2 AMF is described from the point of view of the user, i.e. an analyst evaluating a system property. In the first subsection, the differences between P 2 AMF and the UML-OCL duo are explained. Then, an example class diagram is introduced and subsequently instantiated. This is followed by a subsection where the object diagram attribute values are predicted. The final two subsections describe the expressiveness and some current applications of P 2 AMF. Differences between P 2 AMF and UML-OCL The Object Constraint Language (OCL) is a formal language used to describe expressions on UML models. These expressions typically specify invariant conditions that must hold for the system being modeled, or queries over objects described in a model [START_REF]Object Management Group: Object Constraint Language[END_REF]. From the user perspective, P 2 AMF has many similarities to UML-OCL; from a syntax perspective, every valid P 2 AMF statement is also a valid OCL statement. There are, however, also significant differences. The first and most important difference is that while OCL mainly is employed in the design phase to specify constraints on a future implementation, P 2 AMF is used to reason about existing or potential systems. P 2 AMF may be employed to predict the uptime of a system while OCL is used to pose requirements on the uptime of the same system. While OCL is normative, mandating what should be, P 2 AMF is descriptive and predictive, calculating what is or will be. A second difference between UML-OCL and P 2 AMF is the importance of the object diagram for P 2 AMF. As in standard UML, class diagrams with embedded expressions may be constructed that represent a whole class of systems. These diagrams may then be instantiated into object diagrams representing the actual systems considered. In P 2 AMF, however, the object diagrams become particularly significant as inference is performed on them. Furthermore, P 2 AMF takes uncertainty into consideration. In particular, two kinds of uncertainty are introduced. Firstly, attributes may be stochastic. For instance, when classes are instantiated, the initial values of their attributes may be expressed as probability distributions. 
As will be described later, the values may subsequently be individualized for each instance. Secondly, the existence of objects and links may be uncertain. It may, for instance, be the case that we no longer know whether a specific server is still in service or whether it has been retired. This is a case of object existence uncertainty. Such uncertainty is specified using an existence attribute that is mandatory for all classes. We may also be unsure of whether a server is still in the cluster servicing a specific application. This is an example of association uncertainty. Similarly, this is specified with an existence attribute on the association, implemented using association classes. The introduction of two mandatory existence attributes and the specification of attribute values by means of probability distributions thus constitute the only changes to OCL as perceived by the user. These modest changes, however, allow for a comprehensive probabilistic treatment of the affected class and object diagrams, including both attribute uncertainty and structural uncertainty, enabling proper probabilistic inference over OCL expressions. An example class diagram To illustrate the usage of P 2 AMF, consider the simple example of a cloud service. This is a case where the probabilistic nature of P 2 AMF is relevant; in cloud computing, the sheer complexity of the cloud mandate for an architecture, and architecture analysis, approach. Furthermore, there is a fundamental uncertainty about such things as the number of servers currently providing a given service, about the characteristics of these particular servers, etc. Nevertheless, these aspects influence the properties of the service at hand. From an interoperability perspective, properties such as response time and availability are important to consider. Although these are only a small part of aspects important for interoperability, they serve as an well-sized and self-contained example for illustrating P 2 AMF. The class diagram for the example is given in Fig. 1. It contains three classes: Service, Cloud and Server. In the present example, we assume that the service provider, in order to commit to a feasible service level agreement, would like to predict the future response time of the provided service. Thus, responseTime is an attribute of each of the three classes. Furthermore, every server can be up or down, thus prompting the attribute available. If a server is down, the time to repair is given by the attribute timeToRepair. Some of the attributes are given initial values while the rest are derived from other attributes. There is also a helper operation, min, returning the minimum of the provided values. Below, the model's P 2 AMF expressions are provided. Going from the bottom and up in the P 2 AMF expressions above, first consider the Boolean server existence attribute. The probability that a given server exists is given by a Bernoulli distribution of 97%. Since the running example concerns a future state, this probability distribution represents the belief that a server will in fact be installed as planned, and will be dependent on the modeler's or other expert's knowledge . Continuing to the attribute available, the distribution specifies a 95% probability that a given server is up and running at any given moment. 
For the attributes timeToRepair and responseTimeWA, normal distributions specify the expected time (in seconds) before a server is up and running again after a failure and the response time for the case of a server that has not failed respectively. So far, we have considered four attributes assigned initial probability distributions on the class level. They thus represent the whole population of considered servers. Later, as the class diagram is instantiated, these estimates can be updated with system-specific data. The top-most attribute of the Server class differs from the previously presented as it is derived. The derivation states that the response time of the server depends on whether it is available or not. If it is available, responseTimeWA gives the response time while timeToRepair returns the relevant value when the server is down. The Execution association connects the Server to the Cloud class. As there is uncertainty about whether a given server is connected to the Cloud, its existence attribute is assigned a probability of 70%. The Cloud class has two attributes: its existence, which is similar to the existence attribute of the server class except that we are certain that the Cloud exists; and a real attribute with an initial probability distribution specifying the expected response time of the networking infrastructure. The Provision association class connects Service to the Cloud. Its features are similar to the Execution association class. Finally, the class Service contains one derived attribute, responseTime, one operation, min, and the mandatory existence attribute. The service response time is given as a sum of the Cloud networking infrastructure response time on the one hand, and the minimum response time of the set of providing servers on the other. The min operation simply returns the minimum value of a set of values. The existence attribute is similar to those of the other classes. An example object diagram The class diagram captures the general type of system and the causal effects that such systems are subject to. In order to make specific predictions, however, object diagrams detailing actual system instances are required. Instantiation follows the same rules as in object orientation in general. Classes are instantiated into objects, associations into links, multiplicities must be respected, and attributes may be assigned values (in the case of P 2 AMF, either deterministic values or probability distributions). There is, however, one interesting and useful difference. In ordinary UML/OCL, values may not be assigned to derived attributes since those attributes are inferred from the derivation expression. Assignment of a different value than the one resulting from the derivation rule would lead to an inconsistent model. The probabilistic inference algorithm presented in Section 3, however, does allow the assignment of values to derived attributes, as long as attributes are assigned values within the ranges specified by the probability distributions, on the class level. The most useful consequence of this capability is the possibility to infer backwards in the causal chain. In our running example, we can therefore gain knowledge about the availability of the servers merely by observing the response time of the service. This capacity for backwards reasoning is not available in standard OCL/UML. As an example, consider a model where x = y + z. If x is assigned a value, OCL can tell us nothing of the value of y. P 2 AMF, however, can. 
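A minimal simulation makes this backward step concrete. The following Python sketch assumes priors y, z ~ Normal(4, 1), a derived attribute x = y + z and an observation x = 10; all numbers are invented for illustration, and the sketch is not the P 2 AMF inference mechanism described in Section 3, only a hint of how conditioning on evidence shifts beliefs about y.

import random

# Priors: y, z ~ Normal(4, 1); derived attribute x = y + z; observation: x = 10.
# Keeping only the samples that agree with the observation approximates the posterior of y.
kept = []
for _ in range(200000):
    y = random.gauss(4, 1)
    z = random.gauss(4, 1)
    x = y + z
    if abs(x - 10.0) < 0.1:   # sample is consistent with the observed value of x
        kept.append(y)
print(sum(kept) / len(kept))  # close to 5, although the prior mean of y was 4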
Therefore, in P 2 AMF, all information that is provided in the object diagram is used to improve the predictions of attribute values. Returning to the running example, consider the object diagram of Fig. 2. In this instance of the class diagram, the calculator -an instance of the Service class -uses theCloud, which is the single instance of the Cloud class. Three redundant Server instances are present in the Cloud, calcServA, calcServB and calcServC. We assume that the service provider estimates the attribute values as presented in Table 2.Note that attributes may be assigned either deterministic values, as theCloud.existence, or stochastic ones, as e.g. calcServC.timeToRepair. Some are not assigned any values at all. These will instead be inferred as part of the prediction. Again, note that unlike standard UML/OCL, any attribute may be assigned a value and any attribute may be unassigned; inference will still be possible. A modeler can therefore obtain predictions based on the current state of knowledge, however poor that knowledge is. Of course, high uncertainties in the object diagram will generally lead to high uncertainties in the predictions. Inference in the object diagram With support of a tool [START_REF] Ullberg | A tool for interoperability analysis of enterprise architecture models using pi-ocl[END_REF], the analyst can perform predictive inference on the object diagram described above with the click of a button. The details of the underlying algorithms are presented in Section 3. The results of the inference are new probability distributions assigned to the attributes. As these are typically nonparametric, they are most easily presented in the form of diagrams. We note that the most probable response time is 80ms. This is the sum of the most probable response times of theCloud and calcServB, as calcServB is the fastest server and it is probably available. However, there is a certain probability (24%) that calcServB is down (i.e. that available is false) or that it is not in service (that existence is false). In this case, calcServA will most probably (83%) be available, and the response time will increase to 130 ms on average. If calcServA also fails or if it is not in service, calcServC will provide a mean response time of 170 ms. Despite the tripled redundancy, there is a small probability (1.2%) that none of the servers are available. In that case, the response time depends on the installed server with the shortest time to repair, i.e. either calcServA or calcServC, with a mean of 1:40h (6000 s) each. Finally, although quite unlikely, there is the risk (0,3%) that none of the servers will exist as modeled; they could have been taken out of service or were perhaps never installed in the first place. In this case, the response time will be so high that the exact value no longer matters. As mentioned, backward inference is an important capability of probabilistic reasoning. As an example, suppose that when the system has been installed, an end user of the calculator service measures its response time to 130 ms. From this information, the prediction system automatically infers that both calcServA and calcServB must be either unavailable ( 90% probability) or non-existent (e.g. retired) ( 10% probability) while calcServC must be providing the service. This conclusion is reached automatically, but it can be understood intuitively as follows: Provided by redundant servers, the calculator service response time is given by the fastest available server. 
Since the measured service response time (taking the Cloud into account) is slower than those of calcServA and calcServB, they are surely down. Since the measured response time fits the probability distribution of calcServC when it is up and running, this must be the providing server. Expressiveness of P 2 AMF A set of expressive characteristics makes P 2 AMF particularly well suited for specifying predictive system property models. These include object orientation, support for first-order logic, arithmetics, set theory and support for expressing both class and instance level uncertainty, as described in this section. P 2 AMF operates on class and object diagrams. The object-oriented features of such diagrams may therefore be leveraged by the predictive systems in P 2 AMF. These features are well known and include class instantiation, inheritance, polymorphism, etc. Secondly, P 2 AMF is able to express first-order logical relations. The predictive benefits of predicate logic are undisputed and used as a base for many deductive formalisms [START_REF] Spivey | The Z notation: a reference manual[END_REF]. Furthermore, arithmetics, the oldest branch of mathematics, is used for prediction of properties ranging from hardware-related ones such as reliability [START_REF] Lyu | Handbook of Software Reliability Engineering[END_REF] to organizational and economic ones, e.g. efficiency [START_REF] Mason-Jones | Total cycle time compression and the agile supply chain[END_REF]. In order to efficiently make predictions on models such as the ones exemplified above, set theory is indispensable. The ability to speak of the number of components in a certain system, the qualities of a set of objects following a given navigation path in an object diagram, etc. are important for predictions on most systems with varying structure [START_REF] Spivey | The Z notation: a reference manual[END_REF]. As previously discussed, for many real-world systems and situations, perfect information is rare. On the contrary, the available information is often incomplete or otherwise uncertain [START_REF] Aier | A survival analysis of application life spans based on enterprise architecture models[END_REF]. In P 2 AMF, attributes of objects may be expressed by probability distributions. For many systems, not only the attribute values are associated with uncertainty, but also the system structure, e.g. does cloud service Z have double servers as the specification claims, or was one retired last month? The introduction of the existence attribute on classes and associations allows the specification of structural uncertainty in P 2 AMF. The object-oriented separation of theoretical prediction laws on the class level and the particulars about a specific system on the object level also pertains to the specification of uncertainty. The class-level modeler may need to express uncertainties about e.g. the strengths of attribute relations. For instance, to what extent a certain category of firewalls reduces the success rate of cyber attacks is rarely known precisely. Similarly as for the instance level, P 2 AMF allows for specification of attribute uncertainty as well as structural uncertainty on the class level. 
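The forward predictions discussed for the calculator example can be mimicked by a simple Monte Carlo simulation. The Python sketch below samples a simplified version of the example; the per-server parameters are placeholders chosen to loosely echo the probabilities and response times discussed above (they are not the actual values of Table 2), and the sketch is not the inference engine of the supporting tool.

import random

def sample_server(p_exist, p_avail, rt_mean, rt_sd, repair_mean, repair_sd):
    # Returns one response-time sample for a server, or None if the server does not exist.
    if random.random() > p_exist:
        return None
    if random.random() < p_avail:
        return random.gauss(rt_mean, rt_sd)      # server is up: working response time
    return random.gauss(repair_mean, repair_sd)  # server is down: time to repair dominates

def sample_service(servers):
    # Service response time = cloud response time + fastest existing server.
    cloud = random.gauss(0.05, 0.01)
    times = [t for t in (sample_server(*s) for s in servers) if t is not None]
    return cloud + min(times) if times else None

servers = [
    (0.97, 0.83, 0.08, 0.01, 6000, 900),  # roughly calcServA
    (0.90, 0.85, 0.03, 0.01, 9000, 900),  # roughly calcServB
    (0.97, 0.95, 0.12, 0.01, 6000, 900),  # roughly calcServC
]

samples = [sample_service(servers) for _ in range(100000)]
valid = [t for t in samples if t is not None]
up = [t for t in valid if t <= 1.0]
print("share of samples where no server exists:", 1 - len(valid) / len(samples))
print("share of samples slower than one second:", sum(1 for t in valid if t > 1.0) / len(samples))
print("mean response time when some server is up:", sum(up) / len(up))

Conditioning on an observed response time, as in the 130 ms example above, corresponds to keeping only those samples that match the observation before summarizing the remaining attributes.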
2.6 Applications of P 2 AMF P 2 AMF has been used in class diagrams predicting such diverse properties as interoperability [START_REF] Ullberg | A language for interoperability modeling and prediction[END_REF], availability [START_REF] Franke | An architecture framework for enterprise IT service availability analysis[END_REF], and the effects of changes to the organizational structure of an enterprise [START_REF] Gustafsson | Modeling the IT impact on organizational structure[END_REF]. It has also been used for multi-property analysis [START_REF] Närman | An integrated enterprise architecture framework for information systems quality analysis[END_REF]. These applications can be seen as evaluations of P 2 AMF, in particular of the expressiveness of the formalism, as well as examples of the wide variety of properties that can be evaluated using P 2 AMF. Furthermore, a software tool supporting modeling and prediction using P 2 AMF has been developed, see [START_REF]Object Management Group: Object Constraint Language[END_REF] for a description of an early version of this tool. Probabilistic Inference In this section, we explain how inference is performed in P 2 AMF models. A Monte Carlo approach is employed, where the probabilistic P 2 AMF object diagram is sampled to create a set of deterministic UML/OCL object diagrams. For each of these sample diagrams, standard OCL inference is performed, thus generating sample values for all model attributes. For each attribute, the sample set collected from all sampled OCL models is used to characterize the posterior distribution. Several Monte Carlo methods may be employed for probabilistic inference in P2 AMF models, including forward sampling, rejection sampling and Metropolis-Hastings sampling [START_REF] Walsh | Markov Chain Monte Carlo and Gibbs Sampling[END_REF]. Of these, rejection and Metropolis-Hastings sampling allow the specification of evidence on any attribute in the object models while forward sampling only allows evidence on root attributes 1 . In this section, we will only present rejection sampling as it is the simplest method that allows evidence on all attributes. Let O p denote a P 2 AMF object diagram, let X 1 , ..., X m be the set of Boolean existence attributes X in such a diagram and let Y 1 , ..., Y n be a topological ordering of the remaining attributes Y in the diagram. A topological ordering requires that causal parent attributes appear earlier in the sequence than their children = f Y r i (Pa Y r i ). The objective of the rejection sampling algorithm is to generate samples from the posterior probability distribution P (X, Y|e), where e = e X ∪ e Y denotes the evidence of existence attributes as well as the remaining attributes. The objective is thus to approximate the probability distributions of all attributes, given that we have observations on the actual values of some attributes, and prior probability distributions representing our beliefs about the values of all attributes prior to observing any evidence. The first step in the algorithm is to generate random samples from the existence attributes' probability distribution P (X): x[1], ..., x[M ]. For each sample, x[i], and based on the P 2 AMF object diagram O p , a reduced object diagram, N i ∈ N, containing only those objects and links whose existence attributes, X j , were assigned the value true, is created. Some object diagrams generated in this manner will not conform to the constraints of UML. 
In particular, object diagrams may appear where a link is connected to only one or even zero objects. Such samples are rejected. Other generated object diagrams will violate e.g. the multiplicity constraints of the class diagram. Such samples are also rejected. Finally, some OCL expressions are undefined for certain object diagrams, for instance a summation expression over an empty set of attributes. What remains is a set of traditional UML/OCL object diagrams, Ξ ⊂ N, whose structures vary but are syntactically correct, and whose attributes are not yet assigned values. In the second step, for each of the remaining object diagrams, Ξ_i, the probability distribution of the root attributes, P(Y^r), is sampled, thus producing the sample set y^r[1], ..., y^r[size(Ξ)]. If there is evidence on a root attribute, the sample is assigned the evidence value. Based on the samples of the root attributes, the OCL expressions are calculated in topological order for each remaining attribute in the object diagram, ȳ^r_i = f_{Ȳ^r_i}(Pa_{Ȳ^r_i}). The result is a set of deterministic UML/OCL object diagrams, Λ ⊂ Ξ, where in each diagram, all attributes are assigned values. The third step of the sampling algorithm rejects those object diagrams that contain attributes which do not conform to the evidence. The sampling process ensures that root attributes always do conform, but this is not the case for OCL-defined attributes. The final set of object diagrams, O ⊂ Λ, contains attribute samples from the posterior probability distribution P(X, Y|e). These samples may thus be used to approximate the posterior.
Here, the parents of Y_i, Pa_{Y_i}, are those attributes that are independent variables in the OCL definition of the child attribute, Y_i = f_{Y_i}(Pa_{Y_i}), where f_{Y_i} is the OCL expression defining Y_i. Y^r denotes the subset of Y consisting of the root attributes, Pa_{Y^r_i} = ∅, i.e. those that are defined by probability distributions rather than by OCL expressions, P(Y^r), while Ȳ^r = Y \ Y^r denotes the remaining attributes, which are defined by OCL statements rather than by probability distributions, Ȳ^r_i = f_{Ȳ^r_i}(Pa_{Ȳ^r_i}). Root attributes have no causal parents, and the possibility to order the attributes topologically requires the absence of cycles; acyclicity is therefore a requirement for P 2 AMF, just as for e.g. Bayesian networks.
The algorithm is outlined in pseudo code below.

for (int i = 1; i < M; i++) {
    x = sampleExistenceAttributes();                 // step 1: sample the existence attributes from P(X)
    D = reduceObjectDiagram(x);                      // remove objects and links whose existence is false
    if (!isSyntacticallyCorrect(D)) continue;        // reject diagrams that violate UML constraints
    y = sampleRootAttributes(D, evidence);           // step 2: sample P(Y^r), assigning evidenced values
    y = evaluateOclExpressions(D, y);                // evaluate OCL-defined attributes in topological order
    if (!conformsToEvidence(y, evidence)) continue;  // step 3: reject samples contradicting the evidence
    samples.add(x, y);                               // accepted sample from P(X, Y | e)
}
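For readers who prefer executable code, the following Python sketch mirrors the three steps for a deliberately tiny model with one uncertain existence attribute and one OCL-defined attribute. The helper structure and all names are illustrative only; it is not the implementation used in the P 2 AMF tool.

import random

def rejection_sample(n, sample_existence, sample_roots, evaluate, conforms):
    accepted = []
    for _ in range(n):
        x = sample_existence()               # step 1: sample the existence attributes from P(X)
        if x is None:                        # structurally invalid object diagram: reject
            continue
        y = evaluate(x, sample_roots(x))     # step 2: sample roots, then evaluate OCL-defined attributes
        if y is None or not conforms(x, y):  # step 3: reject samples that contradict the evidence
            continue
        accepted.append((x, y))
    return accepted

# Tiny model: a single server that exists with probability 0.7; if it exists, the service
# response time is Normal(0.1, 0.02), otherwise the response time is undefined.
# Evidence: a response time of about 0.1 s was measured, so the server must exist.
def sample_existence():
    return {"serverExists": random.random() < 0.7}
def sample_roots(x):
    return {"responseTimeWA": random.gauss(0.1, 0.02)}
def evaluate(x, y):
    y["responseTime"] = y["responseTimeWA"] if x["serverExists"] else None
    return y
def conforms(x, y):
    return y["responseTime"] is not None and abs(y["responseTime"] - 0.1) < 0.05

accepted = rejection_sample(50000, sample_existence, sample_roots, evaluate, conforms)
p_exists = sum(1 for x, _ in accepted if x["serverExists"]) / len(accepted)
print(p_exists)  # 1.0: every accepted sample has the server existing, as the evidence requires

Conditioning by rejection is simple but wasteful when the evidence is unlikely, which is one reason why Metropolis-Hastings sampling is mentioned above as an alternative.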
Related work

There are three categories of work that in different ways are similar to P 2 AMF. The first category includes variants of first-order probabilistic models. Among other proposals, these include Bayesian Logic (BLOG) [20] and Probabilistic Relational Models (PRM) [23]. These are similar to P 2 AMF in their use of object-based templates which may be instantiated into structures amenable to probabilistic inference. However, they do not consider how logic and arithmetic operators are affected by structural uncertainty. The second category of related work comprises query and constraint languages such as SQL [22] and OCL [3]. Similarly to P 2 AMF, these languages allow logical and arithmetic queries of object or entity models. They are, however, deterministic rather than probabilistic.
The third and most important category of related work is work on stochastic quality prediction for software architecture. These include MARTE [4], KLAPER [26] and the Palladio component model for model-driven performance prediction [27], where two of them have opted for UML or MOF based modeling formalisms. However, common to all of these contributions is their focus on the analysis of particular properties. P 2 AMF differs from these, as it does not propose specific analyses but rather provides a general language for expressing them. The closest match is probably the work by Ferrer et al. on multiple non-functional property evaluation [25], using the Dempster-Shafer approach to probabilistic reasoning. However, P 2 AMF is more general still, aiming to offer not just a toolbox but a unified language in which the best practice of e.g. reliability or performance modeling can be expressed. Within this third category, there are also generic frameworks for system quality analysis, such as ATAM [19]. These typically provide different support than P 2 AMF, and are not based on probabilistic foundations.

Conclusions

Prediction and assessment of the expected quality and behavior of business and software systems already in the design stage is a desirable capability. With the frequent re-configurations of services in a complex and uncertain environment, the need for such analyses to deal with uncertainty grows. In this paper, we have reported on a language and tool for probabilistic prediction and assessment of system properties. The formalism, P 2 AMF, supports automatic probabilistic reasoning based on set theory, first-order logic and algebra. With class and object diagrams as a base, P 2 AMF is compatible with UML. This paper has introduced P 2 AMF and exemplified it for a simple analysis case, pointed out other areas where P 2 AMF is being employed and described the algorithm for performing the required probabilistic inference.

Fig. 1. An example class diagram. The model's P 2 AMF expressions (cf. Section 2.2):
context Service::responseTime : Real
  derive: cloud.responseTime + min(cloud.server.responseTime)
context Service::min(values : Bag(Real)) : Real
  body: values->iterate(x : Real; acc : Real = maxVal | x.min(acc))
context Service::existence : Boolean
  init: Bernoulli(0.98)
context Provision::existence : Boolean
  init: Bernoulli(0.98)
context Cloud::responseTime : Real
  init: Normal(0.05, 0.01)
context Cloud::existence : Boolean
  init: Bernoulli(1.0)
context Execution::existence : Boolean
  init: Bernoulli(0.70)
context Server::responseTime : Real
  derive: if available then responseTimeWA else timeToRepair endif
context Server::responseTimeWA : Real
  init: Normal(0.1, 0.02)
context Server::timeToRepair : Real
  init: Normal(3600, 900)
context Server::available : Boolean
  init: Bernoulli(0.95)
context Server::existence : Boolean
  init: Bernoulli(0.97)
Fig. 2. Instantiation of the example class diagram.
Fig. 3. The probability distribution of the most interesting attribute, calculator.responseTime.
Table 2. Attributes are assigned either probability distributions or deterministic values in the object diagram.
35,446
[ "1002367", "1002368", "1002369", "1002370", "1002371" ]
[ "366312", "366312", "366312", "366312", "304294", "366312" ]
01474205
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474205/file/978-3-642-36796-0_11_Chapter.pdf
Pontus Johnson Maria Eugenia Iacob Margus Välja email: [email protected] Marten Van Sinderen email: [email protected] Christer Magnusson Tobias Ladhe email: [email protected] Business model risk analysis: predicting the probability of business network profitability Keywords: value networks, profitability, risk analysis, probabilistic inference, goal interoperability In the design phase of business collaboration, it is desirable to be able to predict the profitability of the business-to-be. Therefore, techniques to assess qualities such as costs, revenues, risks, and profitability have been previously proposed. However, they do not allow the modeler to properly manage uncertainty with respect to the design of the considered business collaboration. In many real collaboration projects today, uncertainty regarding the business' present or future characteristics is so significant that ignoring it becomes problematic. In this paper, we propose an approach based on the Predictive, Probabilistic Architecture Modeling Framework (P 2 AMF), capable of advanced and probabilistically sound reasoning about profitability risks. The P 2 AMF-based approach for profitability risk prediction is also based on the e 3value modeling language and on the Object Constraint Language (OCL). The paper introduces the prediction and modeling approach, and a supporting software tool. The use of the approach is illustrated by means of a case. Introduction A business model is critical for any new business venture, and especially for those that involve multiple organizations, due to the complexity of their relationships. In the literature of the last decade several authors have proposed different frameworks aimed at identifying the main ingredients of a business model (e.g., [START_REF] Osterwalder | The Business Model Ontology -a proposition in a design science approach[END_REF][START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]; for an overview, see [START_REF] Pateli | A research framework for analysing eBusiness models[END_REF][START_REF] Alberts | The MOF Perspective on Business Modelling[END_REF]). An important motivation behind business modeling is its ability to provide an overview of the relationships between the actors involved in a business collaboration and of the way they all aim to benefit from it, financially or otherwise. In the design phase of a business collaboration, it is desirable to be able to predict the risks concerning profitability associated with the "business-to-be". As an alternative to the rather costly trial-and-error approach, it is desirable to understand the properties of the envisioned collaboration already in its early phases. Therefore some of the existing business modeling approaches not only model the business, but also propose some techniques to assess qualities such as costs and revenues [START_REF] Osterwalder | The Business Model Ontology -a proposition in a design science approach[END_REF], and profitability [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]. However, they do not allow the modeler to properly express uncertainty with respect to the considered business collaboration. In many real collaboration projects today, uncertainty regarding the business' present or future characteristics is so significant that ignoring it becomes problematic. 
Our main contribution in this paper is an approach capable of advanced and probabilistically sound reasoning about profitability risks of a given business model expressed in the e 3 -value modeling language [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]. Such predictions may guide business managers, allowing them to explore and compare collaboration scenario alternatives at a low cost. Profitability predictions do, in fact, constitute an important element in the strategic decision making process, and a critical part of the assessment of risks associated with a new business venture. Managers routinely argue for or against alternative business opportunities based on those opportunities' expected impact on, e.g., the company's future financial and business performance. However, experience/intuition-based predictions made by individual managers have serious drawbacks in terms of transparency, consistency, and ability to correctly evaluate costs and risks. Therefore, formal approaches to business model quality prediction are required. They not only allow us to anticipate the business-to-be, but they are also a means to achieve pragmatic and, goal interoperability [START_REF] Asuncion | Pragmatic Interoperability: A Systematic Review of Published Definitions[END_REF] in multi-actor business collaborations. The proposed profitability prediction approach draws upon our earlier work concerning the Predictive, Probabilistic Architecture Modeling Framework (P 2 AMF) [START_REF] Johnson | P 2 AMF: Predictive, Probabilistic Architecture Modeling Framework[END_REF] that, in turn, is based on the Object Constraint Language (OCL) [START_REF] Omg | Object Constraint Language[END_REF]. The process we follow to develop our profitability prediction approach is as follows. In the first step, starting from the original definition of the e 3 -value ontology, we define the e 3value metamodel in the P 2 AMF, expressed as an OCL-annotated class diagram. Consequently, any e 3 -value model can be instantiated from the e 3 -value class diagram metamodel in the form of an object diagram. Finally, we define and implement the underlying inference algorithm for the prediction of the attribute values associated with the model elements of the object model. Thus, the execution of the inference algorithm produces, for example, predictions about the net earnings attribute values for all actors participating in a business collaboration. Such profitability predictions of each of the actors involved, are determined taking into account given levels of uncertainty (expressed as probability distributions) at three levels: uncertainty regarding attribute values of objects in the object model, uncertainty related to objects (e.g., uncertainties regarding the actors' participation in the value network), and uncertainties regarding the (existence of) relationships between objects (e.g., uncertainties related to a value exchange). This represents an important advancement compared to Gordijn's work on profitability sheets and analysis [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF], since Gordijn's approach only considers deterministic values for attributes, and value network models are fixed. Furthermore, due to the fact that the P2AMF and the EAAT allow us to incorporate uncertainty in e 3 -value models, profitability predictions can be seen as risk assessments. 
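To illustrate what such probabilistically sound profitability reasoning amounts to, the following Python sketch estimates the net-earnings distribution of one actor in a small, invented value network with uncertainty about market uptake, about a partner's participation and about fixed costs. All names and figures are placeholders; the sketch is neither the e 3 -value profitability sheets nor the tool-supported analysis used in this paper.

import random

def sample_net_earnings():
    # One Monte Carlo sample of a focal actor's yearly net earnings.
    customers = random.randint(50, 200)                  # uncertain number of consumer needs served
    partner_joins = random.random() < 0.9                # uncertainty about a partner actor participating
    revenue_per_customer = 40.0                          # value received per customer
    cost_per_customer = 12.0 if partner_joins else 30.0  # worse sourcing terms without the partner
    fixed_cost = random.gauss(3000, 500)                 # uncertain fixed cost of participating
    return customers * (revenue_per_customer - cost_per_customer) - fixed_cost

samples = [sample_net_earnings() for _ in range(100000)]
expected = sum(samples) / len(samples)
loss_probability = sum(1 for s in samples if s < 0) / len(samples)
print("expected net earnings:", round(expected, 2))
print("probability of a loss:", round(loss_probability, 3))

The loss probability is the kind of risk figure that complements a point estimate of profitability and that motivates treating uncertainty explicitly in the approach presented below.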
To the best of our knowledge, this is the first time a formal business model-based profitability risk analysis method is proposed for business models. Work on how trust assumptions affect profitability in value networks has been reported (e.g., [START_REF] Fatemi | Trust and business webs[END_REF]). However, it should be noted that trust is just one specific source of risk. The remainder of this paper is organized as follows. In Section 2 we briefly present the original e 3 -value business model ontology [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]. Section 3 is devoted to the P 2 AMF and tool. Section 4 describes the main contribution of this paper, the profitability prediction approach and illustrates the usage of this approach by means of the electric cars case study that has been defined in the scope of the Stockholm Royal Seaport Smart City project [START_REF]Exploateringskontoret Stockholms Stad, Övergripande program för miljö och hållbar stadsutveckling i Norra Djurgårdsstaden[END_REF]. The papers ends with some conclusions and pointers to future work. Business modeling and the e 3 -value ontology In this section we motivate our choice for the e 3 -value modeling formalism and briefly present the e 3 -value ontology [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]. Many business model frameworks exist that aim at facilitating and guiding business modeling, e.g., Activity system [START_REF] Zott | Business Model Design: An Activity System Perspective[END_REF], e 3 -value [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF], VDML [START_REF] Omg | Value Delivery Modeling Language (VDML)[END_REF], REA [START_REF] Geerts | An Ontological Analysis of the Primitives of the Extended-REA Enterprise Information Architecture[END_REF], RCOV [START_REF] Demil | Business Model Evolution: In Search of Dynamic Consistency[END_REF], The BM concept [START_REF] Hedman | The business model concept: Theoretical underpinnings and empirical illustrations[END_REF], Entrepreneur's BM [START_REF] Morris | The entrepreneur's business model: toward a unified perspective[END_REF], The social BM [START_REF] Yunus | Building Social Business Models: Lessons from the Grameen Experience[END_REF], The BM guide [START_REF] Kim | Knowing a winning business idea when you see one[END_REF], 4C [START_REF] Wirtz | Strategic Development of Business Models. Implications of the Web 2.0 for Creating Value on the Internet[END_REF], Internet BM [START_REF] Lumpkin | E-Business Strategies and Internet Business Models: How the Internet Adds Value[END_REF], and BMO [START_REF] Osterwalder | The Business Model Ontology -a proposition in a design science approach[END_REF]. Some of them have a strong link to information systems, others are closely related to strategic management or industrial organization. Most of the business model frameworks mentioned above have been published in the top 25 MIS journals. However, a systematic literature review we carried out recently [START_REF] Alberts | The MOF Perspective on Business Modelling[END_REF] resulted in an initial set of 171 journal articles and conference papers relevant for the topic of business modeling. After filtering this set of publications, we ended up with 76 articles presenting some 43 different business model frameworks. 
Furthermore, five articles in the reviewed literature present a review of business model literature and aim to compare some existing frameworks: [START_REF] Pateli | A research framework for analysing eBusiness models[END_REF][START_REF] Gordijn | Comparing two Business Model Ontologies for Designing e-Business Models and Value Constellations[END_REF][START_REF] Lambert | A Conceptual Framework for Business Model Research[END_REF][START_REF] Al-Debei | Developing a unified framework of the business model concept[END_REF][START_REF] Zott | The Business Model: Recent Developments and Future Research[END_REF]. A common trait of most of these frameworks is that they lack the level of formality which is necessary to relate a business model to its supporting enterprise architecture at the model level. However, of the reviewed frameworks, two stand out as having, from the modeling point of view, a sufficient formal foundation: e 3 -value [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] and BMO [START_REF] Osterwalder | The Business Model Ontology -a proposition in a design science approach[END_REF]. An extensive comparison of these two formalisms is presented in [START_REF] Gordijn | Comparing two Business Model Ontologies for Designing e-Business Models and Value Constellations[END_REF]. There are some significant differences between the two approaches. In terms of the scope covered, BMC is focused on a single element of a value chain and its direct relations with customers and suppliers, while e 3 -value takes a network perspective in order to provide insight into value generation outside the formal boundary of a single organization. Also, at the conceptual level they are quite different: the BMO puts emphasis on resources needed to create a certain value proposition, while in e 3 -value, the modeling of value streams in a business network is central. An approximate mapping between BMO and e 3 -value concepts is proposed in [START_REF] Gordijn | Comparing two Business Model Ontologies for Designing e-Business Models and Value Constellations[END_REF], which clearly reveals these differences. When considering the level of formality, although both e 3value and BMO have been found to be "light weight" ontologies [START_REF] Gordijn | Comparing two Business Model Ontologies for Designing e-Business Models and Value Constellations[END_REF], e 3 -value is more formal than BMO since it comes with a metamodel [START_REF] Gordijn | Value based requirements engineering: Exploring innovative ecommerce idea[END_REF] and a graphical notation, for which reason it is also a modeling language. The fact that BMC is widely accepted is partly due to its simplicity and ease of use, which come at the cost of formality. In this paper we choose for e 3 -value because of its higher level of formalism and because it provides a network perspective on business collaborations which makes it suitable for capturing network effects regarding value propagation. In the remainder of this section we briefly summarize the e 3 -value modeling constructs (for more details we refer to [START_REF] Gordijn | Value based requirements engineering: Exploring innovative ecommerce idea[END_REF][START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]). An e 3 -value model essentially describes the value exchange relationships between two or more actors involved in a business collaboration, expressed as a value network model. 
The main concepts defined in the e 3 -value business model ontology that capture these exchanges are: actor (with its specializations, market segment and composed actor), value exchange, value object, value port, and the value interface. Besides a structural specification of the elements of a business value network and of its value streams, an e 3 -value model also captures behavioral aspects of such networks with respect to the flow of value. As such, concepts such as stimulus and end stimulus, dependency path and value exchange are used to define a so-called use case map describing the business behavior of actor in the collaboration modeled by the e 3 -value model. In Table 1 below we summarize the definition of all these concepts, and their graphical notation. Furthermore, in Figure 2 an example of an e 3 -value model can be seen. The P 2 AMF As mentioned before, we use the P 2 AMF framework [START_REF] Johnson | P 2 AMF: Predictive, Probabilistic Architecture Modeling Framework[END_REF] and the Enterprise Architecture Analysis Tool (EAAT) [START_REF] Johnson | A Tool for Enterprise Architecture Analysis[END_REF] tool to extend e 3 -value to a probabilistic setting. P 2 AMF is based on the Object Constraint Language (OCL) [START_REF] Omg | Object Constraint Language[END_REF], which is a formal language used to describe expressions on models in the Unified Modeling Language (UML). OCL expressions typically specify invariant conditions that must hold for the system being modeled or queries over objects described in a model. The most prominent difference between P 2 AMF and OCL is the probabilistic nature of P 2 AMF. P 2 AMF allows the user to capture uncertainties in both attribute values and model structure. Concept Definition Notation actor "an economically independent (and often also legal) entity" [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Market segment Is a concept that breaks a market (consisting of actors) into segments that share common properties. It is often used to model that there is a large group of end-consumers who value objects similarly. Value interface Used to groups one or more value ports of one actor. Value port "An actor uses a value port to provide or request value objects to or from his/her environment, consisting of other actors. Thus, a value port is used to interconnect actors so that they are able to exchange value objects." [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Value exchange "Is used to connect two value ports with each other. It represents one or more potential trades of value object instances between value ports. As such, it is a prototype for actual trades between actors. […] It does not model actual exchanges of value object instances." [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Value transaction "Concept that aggregates all value exchanges, which define the value exchange instances that must occur as consequence of how value exchanges are connected, via No distinct notation is defined in the tool. value interfaces to actors." 
[START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Value object "A service, a product, or even an experience, which is of economic value for at least one of the actors involved in a value model" [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Is represented as a label on a value exchange relationship. Value activity Collection of operational activities, which can be assigned as a whole to an actor and lead to creation of profit or economic value for the performing actor. [21] And/Or fork and join An AND fork connects a scenario element to one or more other elements, while the AND join connects one or more elements to one other element. An OR fork models a continuation of the scenario path into one direction, to be chosen from a number of alternatives. The OR join merges two or paths into one. [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Start and end stimuli "Use case maps start with one or more start stimuli. A start stimulus represents an event, possibly caused by an actor. [...] A use case map also has one or more end stimuli. They have no successors." [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] Table 1. e 3 -value concepts An introduction to P 2 AMF From the user perspective, P 2 AMF has many similarities to OCL applied to class and object diagrams. As can be seen in the derivations in Section 4, P 2 AMF statements generally appear identical to OCL statements. However, their interpretation differs because P 2 AMF takes uncertainty into consideration. In P 2 AMF, two kinds of uncertainty are introduced. Firstly, attributes may be stochastic. For instance, when classes are instantiated, the initial values of their attributes may be expressed as probability distributions. To the attribute Actor.expenses in the following example, context Actor::expenses:Real init: Normal(3500,300) a normal distribution with a mean of 3500 and a standard deviation of 300 is assigned. The above expression determines the initial value of attribute instances. In the corresponding object diagrams, the values may be further specified in the form of evidence. Evidence, a term borrowed from the Bayesian theory of probabilistic inference, determines the attribute value of the instance, and may be either deterministic (hard evidence) or probabilistic (soft evidence). Secondly, the existence of objects and links may be uncertain. It may, for instance, be the case that we do not know whether we will be able to generate solar energy next week. This can be represented as a case of object existence uncertainty (i.e., whether the generation activity will exist next week is not certain). Such uncertainty is specified using an existence attribute that is mandatory for all classes: context GenerationActivity::existence:Boolean init: Bernoulli(0.8) where the Bernoulli probability distribution states that there is an 80% chance that the activity in fact exists. Uncertainty with respect to the existence of links may be specified in a similar way. The introduction of two mandatory existence attributes and the specification of attribute values by means of probability distributions thus constitute the only changes to OCL as perceived by the user. 
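To make these two kinds of uncertainty tangible, the following minimal Python sketch (ours, purely illustrative, and unrelated to the actual P 2 AMF/EAAT implementation) shows how the attribute distribution Normal(3500, 300) and the existence distribution Bernoulli(0.8) from the examples above could be drawn in a single Monte Carlo sample of one "possible world":

import random

def sample_world():
    # attribute uncertainty: Actor::expenses init: Normal(3500, 300)
    expenses = random.gauss(3500, 300)
    # structural uncertainty: GenerationActivity::existence init: Bernoulli(0.8)
    activity_exists = random.random() < 0.8
    return expenses, activity_exists

# each call yields one sampled configuration of the model; repeating it
# many times approximates the joint distribution over the diagram
samples = [sample_world() for _ in range(10000)]

In this picture, evidence simply amounts to fixing (hard evidence) or re-weighting (soft evidence) some of the sampled values before the remaining attributes are evaluated.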
These changes, however, allow for a comprehensive probabilistic treatment of OCL-annotated class and object diagrams, including both attribute uncertainty and structural uncertainty. The mathematical approach and inference algorithms behind the approach are presented in [START_REF] Johnson | P 2 AMF: Predictive, Probabilistic Architecture Modeling Framework[END_REF]. In brief, object diagrams are subjected to Monte Carlo-based probabilistic inference with algorithms, e.g., Metropolis-Hastings [START_REF] Koller | Probabilistic graphical models: principles and techniques[END_REF] and Rejection Sampling [START_REF] Walsh | Markov chain monte carlo and gibbs sampling[END_REF]. Attributes with previously unknown values are assigned probability distributions. Those with known probability distributions are updated in the light of their relations to neighboring attributes as well as in the light of evidence assigned to various attributes. With the tool support presented in Section 3.2, the analyst can perform predictive inference on object diagrams with the click of a button. The results of the inference are new probability distributions assigned to the attributes. As these are often nonparametric, they are most easily presented in the form of histograms. The EAAT tool We have developed a software tool, the Enterprise Architecture Analysis Tool (EAAT), that allows both probabilistic class diagrams and probabilistic object diagrams to be modeled. It also performs inference as described in the previous subsection. The tool is presented in detail in [START_REF] Johnson | A Tool for Enterprise Architecture Analysis[END_REF] and can be downloaded from [START_REF]EAAT tool[END_REF]. It is divided into two components, the CLASS MODELER, and the OBJECT MODELER, corresponding to two file types: class and object diagrams. The CLASS MODELER is a graphical editing tool for probabilistic class diagrams. In addition to the basic editing functionality, the CLASS MODELER (i) allows attribute values to be defined either by probability distributions or by OCL expressions, (ii) requires a value for the mandatory existence attributes of classes and associations, and (iii) provides OCL syntax checking support. The OBJECT MODELER has two components: 1) an editing tool for probabilistic object models, and 2) an inference engine. The editing tool (i) allows probabilistic attribute values, including the mandatory existence attributes, (ii) displays histograms for all attributes representing their probability distributions after inference, and (iii) offers an interface to different inference algorithms and parameters. With one click, the calculations described in Section 3.1 generate posterior probability distributions for all attributes. Predicting profitability risks using e 3 -value models and P 2 AMF Due to the fact that the P 2 AMF and the EAAT allow us to incorporate uncertainty in e 3 -value models (at object, attribute and relationship levels), profitability predictions can be seen as risk assessments. Risk is generally defined as "the frequency and magnitude of loss that arises from a threat (whether human, animal, or natural event)" [START_REF]The Open Group: Technical Standard Risk Taxonomy[END_REF] and calculated as the threat's probability multiplied by the magnitude of its effect (i.e., the size of the value loss). Thus, our profitability calculation fits in the above definition (i.e., of profitability risk) as it takes into account both uncertainty and magnitude of the net earnings. 
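As a small illustration of how such a risk reading can be taken from Monte Carlo output, the following Python sketch (ours, not part of the EAAT tool; the numbers in the call are toy values) computes the frequency and the average magnitude of loss from a list of sampled net earnings and combines them into a single risk figure:

def profitability_risk(net_earnings_samples):
    # losses are the negative outcomes among the sampled net earnings
    losses = [-x for x in net_earnings_samples if x < 0]
    p_loss = len(losses) / len(net_earnings_samples)          # frequency of loss
    magnitude = sum(losses) / len(losses) if losses else 0.0  # average size of loss
    return p_loss, magnitude, p_loss * magnitude              # probability times magnitude

p, m, risk = profitability_risk([450.0, -120.0, 980.0, -300.0, 75.0])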
In this section we present our approach for risk prediction. The e 3 -value metamodel As expressed in P 2 AMF (Figure 1), the e 3 -value metamodel is quite similar to the e 3value ontology presented in [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF]. All metamodel entities and relations of the P 2 AMF version can be found in the e 3 -value ontology. For reasons of economy, a few concepts and relations in the e 3 -value ontology have been omitted in the P 2 AMF metamodel. Figure 1. e 3 -value metamodel Currently, the P 2 AMF version does not feature composite actors. It was also possible to omit a few elements from the Use Case Maps of the e 3 -value ontology without affecting the profitability algorithms. A few attributes and several operations have also been added in the P 2 AMF-based metamodel. In this section, we will focus on these attributes and operations as these contain the OCL statements used to replicate the calculations of the e 3 -value profitability sheet. The risk prediction approach The main goal of the profitability analysis is to calculate the net earnings of each actor. While this attribute is not explicit in the e 3 -value ontology or tool, it is calculated in the Excel profitability sheet generated by the tool. In the P 2 AMF-based metamodel, this attribute, Actor.netEarnings, is defined as follows: context Actor::netEarnings: Real derive: self.valueInterface.netEarnings->sum() -self.investment -self.expenses -self.activity.investment->sum() The net earnings are thus the sum of all net earnings of the actor's value interfaces minus the actor's direct investments and expenses and the investments and expenses of the actor's activities. As noted in [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF], a proper net present value calculation requires a time series of e 3 -value models. This is also the case for the P 2 AMF-based version. While investments and expenses are non-derived attributes, net earnings of value interfaces are derived. Each value port has a valuation attribute, specifying the value of the exchanged value object. If the value port is incoming, net earnings are increased by the product of the valuation attribute and the number of transactions. If the value port is outgoing, the net earnings are decreased by the corresponding amount. The occurrences, or number of transactions, originate from the attribute occurrence in the start stimulus. The value port occurrences are also affected by the structure of the use case map. For instance, if the scenario path from the start stimulus to the considered value port contains an OR fork with two branches, then the occurrences of the value port will be a fraction of those of the start stimulus. In order to calculate the occurrences of a value port, a recursive algorithm is employed. The algorithm searches through the use case map in order to find the start stimulus. The occurrence value is then propagated and transformed from value interface to value interface by various mechanisms. In many cases, the occurrence value is simply copied. In other cases, such as for the OR fork, the occurrence value is diminished by a factor determined by the fraction attributes of the SourceFraction and TargetFraction classes. 
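A plain Python transcription of this derivation chain may help to see how the pieces fit together. The sketch below is ours, uses simplified names, and sidesteps the recursive occurrence propagation by assuming that the occurrences of each value port have already been resolved:

def port_value(valuation, occurrences, incoming):
    # incoming value ports increase net earnings, outgoing ports decrease them
    return valuation * occurrences if incoming else -valuation * occurrences

def actor_net_earnings(ports, investment, expenses, activity_investments):
    # ports: (valuation, occurrences, incoming) triples over the actor's value interfaces
    earned = sum(port_value(v, n, inc) for v, n, inc in ports)
    return earned - investment - expenses - sum(activity_investments)

# toy example: 100 paid transactions received, 100 purchases made, small daily expenses
print(actor_net_earnings([(0.98, 100, True), (0.81, 100, False)], 0.0, 10.0, []))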
The Electric Cars case study The Stockholm Royal Seaport (SRS) smart city project has a vision of becoming a world class environmental city district [START_REF]Exploateringskontoret Stockholms Stad, Övergripande program för miljö och hållbar stadsutveckling i Norra Djurgårdsstaden[END_REF]. This could include micro electricity generation by consumers and electric car usage. Our example includes both cases in one simplified scenario, in which we use pricing estimates from [START_REF] Andersson | Plug-in Hybrid Electric Vehicles as Control Power: Case studies of Sweden and Germany[END_REF]. The scenario of the example is as follows. Electric cars used in the SRS area. The cars' owners want to maximize the usage of their resources and earn extra money with the cars when they are not in movement. They can do that by participating in the frequency control market, where electric car capacity is aggregated and sold as a resource to the transmission system operator [START_REF] Andersson | Plug-in Hybrid Electric Vehicles as Control Power: Case studies of Sweden and Germany[END_REF]. In our scenario, the electric vehicle (EV) aggregator operates charging stations, where cars should be plugged in to the grid when idle. The micro-generators have long-term contracts with the aggregator. The example is presented as an e 3 -value model in Figure 2. The models depict one day. It is assumed that there are B(20 000, 0.5) cars in the neighborhood, where B is the binomial distribution (in this case with a mean value of 10 000). They are assumed to have 10kWh batteries, needing to be charged once a day, which implies an occurrence attribute of the start stimulus with the value 1. We further estimate the number of micro generators to B(12, 0.5) and the number of EV aggregators to B(3, 0.75). There is only one regional grid. Since we are uncertain of the future price of electricity, we estimate that the car owner pays 0.98±0.1€ (where 0.98 is the mean and 0.1 is the standard deviation of a normal distribution) per charging, while the price of electricity for the EV aggregator from the regional grid is 0,81±0,1€. Due to its long-term contract, the EV aggregator purchases electricity from the local micro generators at a fixed price of 0.58€, when power is available. Considering the alternatives, customers on average value one battery charging at 1,2±0,2€. The regional grid buys electricity from producers, and thus values the electricity required for one charging at 0,6±0,1€. As the micro generators cannot sell their electricity elsewhere, it has no value to them outside of the transaction with the EV aggregator. The local grid operator is expected to value the capacity provided by one car battery at 0.4±0.05€/day, considering the available alternatives. Therefore, it makes sense to purchase that capacity for 0.32±0.05€ from the EV aggregator. The aggregator in turn, buys the capacity from each car owner for 0.25±0.05€. For the car owner, the cost of providing the regulation is low; the tapping and recharging cause some battery degradation, and there is an inconvenience finding the car with less than a full battery. Based on these considerations, the value of the capacity for the average car owner is estimated at 0.2±0.04€. Figure 3 The object models for the electric cars e 3 -value model Considering the investment costs and expenses, EV aggregators expect fixed costs of 600±200€/day, and running expenses of 500±200€/day. The micro generators' fixed costs are estimated to 110±20€/day with no running expenses. 
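To show how such estimates combine, the following Python sketch (ours, not produced by the EAAT tool, and deliberately restricted to the car owner's side of the model) samples the owner's daily utility from the distributions given above: per charging the owner gains the difference between the valuation of 1.2±0.2€ and the price of 0.98±0.1€, and per day additionally receives the capacity fee of 0.25±0.05€ against a perceived capacity cost of 0.2±0.04€.

import random

def car_owner_daily_utility():
    charging_value = random.gauss(1.2, 0.2)    # owner's valuation of one charging
    charging_price = random.gauss(0.98, 0.1)   # price paid to the EV aggregator
    capacity_fee   = random.gauss(0.25, 0.05)  # received for offering battery capacity
    capacity_cost  = random.gauss(0.2, 0.04)   # perceived cost of providing the capacity
    return (charging_value - charging_price) + (capacity_fee - capacity_cost)

samples = [car_owner_daily_utility() for _ in range(100000)]
print(sum(samples) / len(samples))  # should land close to the 0.27 euro/day reported below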
The regional grid has no extra costs in this business model, nor does the car owner. The e 3 -value profitability sheet [START_REF] Gordijn | Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas[END_REF] sums up all the value generated in interfaces and subtracts investment and expense costs. The calculations are further based on the number of occurrences and market segment counts. As e 3 -value does not consider uncertainty, mean values are utilized. According to the profitability sheet, the net earnings of each actor is the following: Regional Grid: 2 270 €/day; EV Aggregator: 445 €/day; Microgenerator: 180 €/day. As end-customers, the electric car owners' utility is calculated to be 0,27 €/day. Thus, according to the e 3 -value model, the example is a sound business model that deserves investment from all parties. Let us now consider the P 2 AMF-based version. In the EAAT tool, the graphical representation is a rather large object diagram, a fragment of which is shown in Figure 3. This is visually not as efficient as an e 3 -value model, but the automated model transformation between the two notations is straightforward and could be implemented. In P 2 AMF, the above uncertainties are easily accommodated. By the Monte-Carlo sampling approach, nonparametric probability distributions are generated for all attributes. The distributions of the net earnings of the involved actors are presented in Figure 4, Figure 5, Figure 6, and Figure 7. Due to the replacement of the binomial distributions with the nearest integer, we find that the net earnings' mean values of the e 3 -value model are slightly off. However, this error is marginal. In accordance with the e 3 -value results, we find that there is good reason for the Regional Grid, the Micro-generators, and the electric car owners to sign up for this business model. However, the probability distribution of the EV Aggregator is alarming. Although the mean value is positive, the probability of economic loss is very high. There seems to be a 50% chance of losing money for the EV Aggregator. Any moderately risk-averse agent should be advised against this business model. Conclusions Prediction and assessment of the expected profitability and behavior of a new business venture already in the early planning phase is a desirable capability, especially in support of strategic decision making. As the business venture becomes more complex and involves more partners, the sources of risks also proliferate, which increases the criticality of analyses taking uncertainty into consideration. In this paper, we have reported on an approach and a tool for probabilistic prediction and assessment of profitability risks. The proposed formalism is based on the e 3 -value business modeling language and the P 2 AMF framework, and supports automated probabilistic reasoning based on set theory, first-order logic and algebra. Our approach allows us to anticipate profitability levels expressed as probability distributions assigned to the model elements' attributes. The proposed approach assumes that the value network model is enriched with realistic probability distributions. However, in real situations the form of those distributions may be challenging to obtain. This lack of knowledge may have a negative impact on the quality of the analysis outcomes. To a very large extent this is due to the fact that value networks abstract from the internal details of the actors involved in the business collaboration. 
We argue that such quantitative input (of sufficient accuracy) can be obtained if one takes these internal details into account, and relates value network models to enterprise architecture models. Therefore, one direction in which we foresee a possible extension of our approach is that of chaining existing enterprise architecture cost analysis [START_REF] Iacob | Quantitative analysis of service-oriented architectures[END_REF] and prediction techniques with the value network profitability prediction technique proposed in this study. context ValueInterface::netEarnings: Real derive: self.valuePort.economicValue->sum() The net earnings of a value interface are thus the sum of the economic values of its value ports. context ValuePort::economicValue: Real derive: if (self.valueExchangeIn->notEmpty()) then self.valuation*self.valueInterface.getOccurrences(Set{}) else -self.valuation*self.valueInterface.getOccurrences(Set{}) endif The economic value of an incoming value port is thus positive, while that of an outgoing value port is negative.
Figure 2. The e 3 -value model for the electric cars case
Figure 4. Net earnings of electric car owner. Mean is 0.27€/day.
Figure 5. Net earnings of EV Aggregator. Mean is 450€/day.
Figure 6. Net earnings of Regional Grid. Mean is 2300 €/day.
Figure 7. Net earnings of Microgenerators. Mean is 210 €/day. The irregular shape is due to the probability of competition.
35,975
[ "1002372", "998713", "1002373", "998714", "1002374", "1002375" ]
[ "366312", "303060", "366312", "303060", "300563", "300563" ]
01474207
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474207/file/978-3-642-36796-0_13_Chapter.pdf
Martijn Zoet email: [email protected] Johan Versendaal email: [email protected] Business Rules Management Solutions: Added Value by Effective Means of Business Interoperability Keywords: Business Rules Management, Interoperability, BRM, Business Interoperability Providing Org. / Receiving Org. Consortium Interoperability research, to date, primarily focuses on data, processes and technology and not explicitly on business rules. The core problem of interoperability from an organisation's perspective is the added value generated from collaborating with other parties. The added value from a data, process and technology perspective has been widely researched. Therefore it is the aim of this study to provide insights into the added value for organisations to collaborate when executing business rules management solutions. Explanations of possibilities, opportunities and challenges can help to increase the understanding of business rules interoperability value creation. Presented results provide a grounded basis from which empirical and practical investigation can be further explored. Introduction Many business services nowadays heavily rely on business rules to express business entities, coordination, constraints and decisions [START_REF] Shao | Extracting business rules from information systems[END_REF][START_REF] Bajec | A methodology and tool support for managing business rules in organisations[END_REF][START_REF] Zoet | Aligning risk management and compliance considerations with business process development[END_REF]. A business rule is [START_REF] Morgan | Business rules and information systems: aligning IT with business goals[END_REF] "a statement that defines or constrains some aspect of the business intending to assert business structure or to control the behaviour of the business." The field of business rules management knows various research streams. Examples are business rules authoring, business rules engines, application in expert systems, business rules architecture, business rules ontology's, data mining and artificial intelligence [START_REF] Zoet | Aligning risk management and compliance considerations with business process development[END_REF]. However, the research topics within each stream are technology driven [START_REF] Arnott | A critical analysis of decision support systems research[END_REF][START_REF] Rosca | Towards a flexible deployment of business rules[END_REF]. Yet, it is not the technology and software applications that are of interest to an organisation; it is the value proposition they deliver. Nevertheless research focusing on improving business rules management practices and its value proposition is nascent [START_REF] Arnott | A critical analysis of decision support systems research[END_REF][START_REF] Nelson | Transitioning to a business rule management service model: Case studies from the property and casualty insurance industry[END_REF]. An important design factor to increase an organisation's value proposition in general is cooperation [START_REF] Hammer | Reengineering the Corporation: A Manifesto for Business Revolution[END_REF]. To achieve effective cooperation organisations have to resolve interoperability issues. 
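Decision-type business rules of this kind are ultimately executable statements. As a purely hypothetical illustration (the names and the threshold are ours and are not taken from any studied organisation), a rules engine might evaluate such a rule roughly as follows:

def approve_order(order_amount, remaining_credit_limit):
    # business rule: an order may only be approved when it does not
    # exceed the customer's remaining credit limit
    return order_amount <= remaining_credit_limit

print(approve_order(750.0, 1000.0))  # True: the constraint holds

When several organisations have to agree on, exchange or jointly execute such rules, the interoperability questions discussed below arise.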
In this study business interoperability is defined as [START_REF] Lankhorst | Introducing Agile Service Development[END_REF] "the organisational and operational ability of an enterprise to cooperate with its business partners and to efficiently establish, conduct and develop IT-supported business relationships with the objective to create value." An example from the airline industry can demonstrate business interoperability of business rules expressing decisions. A global airline alliance has 10 members. Each member has different business rules to decide whether customers are allowed into their business lounge. Airline X states that a customer must have acquired the silver status while airline Y states that the customer must have acquired the gold status. When a customer of airline Y arrives at a lounge managed by airline X carrying the silver status he will not be allowed access. Airline Y will not pay Airline X to take care of the customer. Two events change the business rules with regards to lounge access. First an airline changes its business rules or secondly an additional airline is allowed into the alliance. If the business rules are hard coded or stored locally all systems at all airports have to be altered. When each member offers a decision service containing their specific business rules only the specific decision service has to be altered improving the business interoperability of the entire alliance. However, current interoperability research primarily focuses on data, services, processes, business and interaction and not explicitly on business rules [START_REF] Molina | Enterprise Integration and Networking: Challenges and Trends[END_REF]. For each previously mentioned concept three categories of interoperability research can be distinguished: conceptual, technological and organisational [START_REF] Chen | An Approach for Enterprise Interoperability Measurement[END_REF]. Conceptual research focuses on barriers related to syntactic and semantics', technological research focuses on information system technology while organisational research focuses on responsibility, organisational structure and business value. All research streams have the same purpose: to develop knowledge and solutions to remove barriers and enable effective business interoperability [START_REF] Chen | An Approach for Enterprise Interoperability Measurement[END_REF]. Since interoperability research related to business rules is nascent research needs to focus on the inquiry of the phenomenon itself [START_REF] Edmondson | Methodological Fit in Management Field Research[END_REF]. This article extends understanding of business rules interoperability by addressing the underlying value proposition for organisations. Based on previous research, we will consider a business rules management solution (hence BRMS) as consisting of eleven different service systems. With these premises, the specific research question addressed is: "What is the relation between forms of business interoperability and the organisation's business rules management service systems in the perspective of value propositions?" Answering this question will help organizations better understand the value proposition behind collaborating with organisations in order to deliver business rules. The paper is organised as follows. First we describe the individual business services of a BRMS. Then we present the various forms of interoperability and stages of service design. After which we present our analysis of BRMS interoperability. 
We conclude with a discussion of these findings, focusing on the implications for practice and for the study of business rules based services. Literature A business service is defined as [START_REF] Lankhorst | Introducing Agile Service Development[END_REF]: "a coherent piece of functionality that offers added value to the environment, independent of the way this functionality is realized." To deliver a business service a value-coproduction of resources, skills, knowledge and competences has to be configured [START_REF] Lankhorst | Introducing Agile Service Development[END_REF]. This configuration is called a service system. A BRMS is a co-production of various resources, skills, knowledge and competences [START_REF] Nelson | Transitioning to a business rule management service model: Case studies from the property and casualty insurance industry[END_REF][START_REF] Nelson | A Lifecycle Approach towards Business Rules Management[END_REF][START_REF] Zoet | Business Rules Based Perspective on Services: A Mixed Method Analysis[END_REF]: i.e. a co-production of service systems. Nelson [START_REF] Nelson | Transitioning to a business rule management service model: Case studies from the property and casualty insurance industry[END_REF] proposed a very rudimentary service system for business rules containing three elements a service provider, a service client and a service target. A more detailed classification has been proposed Zoet and Versendaal [START_REF] Zoet | Business Rules Based Perspective on Services: A Mixed Method Analysis[END_REF]. This classification scheme, existing of eleven service systems, classifies the processes, guidance elements, actors, input and output per service system. A detailed explanation of the BRMS can be found in [START_REF] Zoet | Business Rules Based Perspective on Services: A Mixed Method Analysis[END_REF]. However to ground our method and research a summary is provided. Deployed business rules are monitored for proper execution. The 1) monitoring service system collects information from executed business rules and generates alerts when specific events occur. This information in turn can be used to improve existing or design new rule models. Execution of business rules is guided by a separate service system: 2) the execution service system. It transforms a platform specific rule model into the value proposition it must deliver. A platform specific rule model can be source code, handbooks or procedures. The execution in turn can be automated or performed by humans. To execute a platform specific rule model it needs to be created. A platform specific rule model is created from a non-platform specific rule model by the 3) deployment service system. Before deploying business rule models they have to be checked for two error types 1) semantic / syntax errors and errors in its intended behaviour. The first type of errors are removed from the business model by the 4) verification service system; the latter by the 5) validation service system. The business rule model itself is created within the 6) design service system. In addition an 7) improvement system exists. The improvement system contains among others functionality to execute impact analysis. To design business rules models data sources need to be mined; the 8) mining service system contains, processes, techniques and tools to extract information from various data sources, human or automated. Before mining can commence in some cases explicit data sources need to 9) cleansed. 
The cleansing service system removes all additional information intervening with proper mining or design activities. Each previous mentioned service systems provide output to two management service systems: 10) the version service system and 11) the audit service system. Changes made to the data source, platform specific rule models, non-platform specific rule models and all other input and output are registered by the version service. All data collected about realising changes to specific input, output and other service system elements are registered by the audit service system. Examples of registered elements are: execution dates, rule model use, rule model editing, verification and validation. All service systems described in this paragraph need to be designed developed and executed. Service design is the process of requirements analysis and service discovery. After requirements are analysed the service system needs to be configured. For this interaction, roles, functions, processes, knowledge and products need to be defined. After the service system is configured the service itself needs to be executed. From literature four levels of collaboration can be recognized: 1) no collaboration, 2) bilateral collaboration, 3) multilateral collaboration and 4) extended collaboration [START_REF] Plomp | Measuring Chain Digitisation Maturity: An assessment of Dutch Retail Branches[END_REF]. Two organisations within the same industry or value chain working together is defined as a bilateral collaboration. Multilateral collaborations have the same characteristics as bilateral collaborations with the slight difference that more than two parties are involved. Extended collaboration describes many-to-many and 'n-tier' relationships between organisations. Two examples are consultative bodies and network orchestrators. We assume that the type of collaboration (X) implies different design, development and execution of the BRMS (Y). Fig. 1 schematically illustrates these dependencies. Nelson [START_REF] Nelson | Transitioning to a business rule management service model: Case studies from the property and casualty insurance industry[END_REF] classifies inter departmental collaboration for a specific BRMS along five dimensions organisation scope, ownership, structure, development responsibility and implementation responsibility. We adopt three dimensions in our analysis 1) ownership, 2) development responsibility and 3) implementation responsibility. However, to fit inter organisational collaboration they must be adapted. Ownership in our model is divided into two dimensions ownership of the input and output of a service system. Development responsibility is defined as the organisation that executes the service system process and implementation responsibility is defined as the organisation that implements the output of the service system. Organisation scope in our research is one of the variables of conceptual model namely: collaboration. Data gathering consisted of three phases. First the effect of the collaboration types on each business rules management service system has been evaluated by means of a workshop. Participants to the workshop were six business rules experts. During the second phase 12 projects have been surveyed to identify potential elements to which third parties could supply added value. During the last phase data sources such as press and analyst reports have been evaluated to indentify collaboration possibilities. The results are discussed in the remainder of this section. 
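To keep the per-system discussion that follows easy to trace, the collaboration forms and analysis dimensions can be summarised as a small data structure. The sketch below is purely illustrative; the names are ours and the single example record merely shows how the result tables are read:

COLLABORATION_FORMS = ["bilateral", "multilateral", "extended"]

ANALYSIS_DIMENSIONS = [
    "ownership_input",
    "ownership_output",
    "development_responsibility",
    "implementation_responsibility",
]

# one cell of the result tables: for the mining service system under bilateral
# collaboration, ownership of the input lies with the providing organisation
mining_service_bilateral = {"ownership_input": "providing organisation"}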
Per service system indentified additional variables and the characteristics of the dimensions are discussed. Cleansing Service System and Mining Service System Interoperability Explicit and tacit data sources are input for the business rules mining service system, cleansing service system, and design service system. Cleansing and mining are discussed in this section; the design service system in the next. The business interoperability question with regards to data sources is: can data from multiple organisations add additional value compared to data from a single organisation? Multiple organisations create and execute very similar or identical rule models. Examples of such rule models are medical treatment rules within the healthcare industry [START_REF] Ferlie | A new mode of organizing in healthcare? Governmentality and managed networks in cancers service in England[END_REF] and fraud detection rules used by banks and insurers [START_REF] Chiu | A Web Services-Based Collaborative Scheme for Credit Card Fraud Detection[END_REF]. Improvement of such rule sets is based on execution data of a single organisation. By means of collaboration larger and more accurate data sources can be created. Overall characteristics of the interoperability design issues for the cleansing and mining system are depicted in Table 1 andTable 2. Both tables show an additional variable influencing the development responsibility: privacy. Privacy influences the question which organisation is responsible for cleansing? If the data source contains sensitive information cleansing should occur at the providing organisation in the case of bilateral or multilateral collaboration. Cleansing in this case can also mean sanitising or anonymizing data [START_REF] Chiu | A Web Services-Based Collaborative Scheme for Credit Card Fraud Detection[END_REF]. Extended collaboration implies the same question. However, when data is collected and integrated by an independent consultative body this question may be easier to solve from a political viewpoint [START_REF] Monsieur | Handling Atomic Business Services[END_REF]. After the data source is created it can be used to mine rules. When an extended collaboration is realized the consultative body can mine the data sources after which the proposed business rules are shared with all partners e.g. the healthcare industry [START_REF] Ferlie | A new mode of organizing in healthcare? Governmentality and managed networks in cancers service in England[END_REF]. Other forms of collaboration have two choices 1) each party mines the data source itself or they appoint a partner to do so thus factually creating an extended collaboration. Design Service System Interoperability The design of a rule model is based on a specific data source or on proposed business rules model by the mining service system. An additional variable has been indentified influencing the design service system: type of partners, see Table 3. A partner can be either a rule chain partner or a competitive/alliance partner. Competitive partners are defined as organizational entities from the same industry realizing an identical value proposition. A rule chain partner is an organizational entity that either formulates data sources or business rules that must be implemented by the organisation or an organizational entity that should implement business rules or data sources defined by the organisation. Interoperability between competitive / alliance partners deal with the same questions as the data source interoperability. 
Either organisations design rule model together or do so by providing input to a consultative body. Examples are organizations that together formulate business rules for risk management [START_REF] Zoet | Aligning risk management and compliance considerations with business process development[END_REF]. Interoperability between rule chain partners adds an extra dimension to designing the rule model. An example from the public sector will demonstrate this. The ministry of finance formulates tax laws that are analysed by the tax and customers administration to formulate business rules models. These business rules model are deployed into software and forms which are then sent to the citizens. In addition to the tax and customers administration multiple commercial and non-commercial organisations also formulate business rules based on the same tax laws. The same applies to other laws like for example the Sarbanes-Oxley Act (SOX) or the Fair and Accurate Credit Transaction Act (FACTA). All commercial organisations governed by specific laws are building rule models based on the text provided by the United States Government. Expending on the question at the beginning of this paragraph: who should translate the tax laws, SOX and FACTA to business rules models? The government or the individual commercial and non-commercial organisations governed by the rules? To answers this question first the difference between internal business rules and external business rules has to be explained. Two main sources of business rules can be distinguished, namely internal business rules sources and external business rules sources [START_REF] Zoet | Aligning risk management and compliance considerations with business process development[END_REF]. This adheres to the principle within risk management where a distinction exists between operational risk and compliance [START_REF] Zoet | Alignment of Business Process Management and Business Rules[END_REF]. External business rules are specified by external parties through the creation of regulations stating which rules an organization needs to comply to. Internal business rules sources are specified by the organization itself; they decide which rules they want to enforce [START_REF] Chen | An Approach for Enterprise Interoperability Measurement[END_REF]. With external business rules organizations have to prove, based on externally imposed criteria, that they have established a sufficient system to control the business rules. For internal business rules there are no externally applied criteria or need to prove sufficient control; in this case organizations can implement their own criteria and create a system for measuring this. Expanding on the difference in enforceability indicates a mismatch in the power/knowledge nexus [START_REF] Foucault | Security, territoty, population -lectures at the college of france 1977 -87[END_REF]. In practice organisation will translate laws and regulations to business rules in one of two ways: or they transform the laws themselves or they will hire a vendor, system integrator or consultancy firm to translate for them. In all previous mentioned cases the organisation that performs the translation is not the organisation that enforces the regulation. The number of parties between the enforcer and/or creator of the law and the actual implementation by means of business rule models we define as n-order compliant, see Fig. 2. 
If government agency X states law Z and organisation Y hires a consultancy firm to translate and implement the law by means of business rules they are 3 rd order compliant. If they translate and implement the law directly they are 2 nd order compliant. Only one organisation has the power (/knowledge) to provide 1 st order compliancy, the organisation that defines the regulation, government agency X. They can achieve this by translating the law into a business rule model and distribute this model to the organisations. The same situation can be recognized within individual organisations. One department specifies strategy or internal policies. A second department translates the strategy to operational business rules. In turn the operational business rules are distributed to the information technology department achieving 2 nd or 3 rd order compliancy. With respect to organisational collaboration in a rule chain the preferable solutions would be that 1st order compliancy is achieved. Thus that the regulatory body who defines the legislation also creates and distributes the business rule model. However, currently only one example of this is known to the authors the Australian Taxation Office [START_REF] Office | Legal Database Australian Taxation Office[END_REF]. In all other cases it is recommended to keep the n-order compliancy as low as possible. Validation Service System Interoperability Validation is the service system that explores errors in the intended behaviour of business rule models by means of test cases containing real life data. Likewise to design service system the partner type also influences validation, see Table 4. First order compliancy can still be achieved within the validation service system when the enforceable party is not responsible for the business rule model design however they need to validate the designed model and declare it compliant. The respondents and authors have no knowledge about a public body officially validating external rule models. Examples can be found within commercial rule chains. Authorised insurance brokers review, accept, administer, collect premiums and execute claim settlement for insurance agencies. They define rule models to support the previous mentioned tasks. Before deploying the actual rule models insurance organisations apply their test set to test if product business rules are properly deployed. If so, they consent on deploying the service to the live environment. In these cases an extended collaboration is established with the authorised insurance broker as consultative body. Other examples can be found in the healthcare industry where various consultative bodies have test cases for diagnoses rules sets. Bilateral or multilateral collaborations between two organisations can also apply validation in the same manner. Another possibility is sharing test cases between collaboration partners instead of 'outsourcing' the validation process. Deployment, Execution and Monitoring Service System Interoperability Within three projects information system deployment and maintenance to another organisational are outsourced to a third party, e.g. system integrator. Non-platform specific rule models are transformed to platform-specific rule models by the third party. The implementation and development responsibility in all collaboration forms lies with the receiving organisation (system integrator). Ownership of the input and output in most cases lies by the providing organisation. 
Execution service interoperability occurs when one or more organisation(s) offer(s) a value proposition realized by means of a platform specific rule model to one or more organisation(s). The airline alliance example described in the introduction section is an example of this type of collaboration, which can be classified as business rules as a service. Another example can be found in the healthcare sector where specific hospital offer decisions service to multiple of its pears. No additional variables impacting the characteristics have been found, see Table . 6. Monitoring service system collaboration mainly will occur in rule-value chains since most organisations will not provide monitoring services to competitors. A possible exception might be extended collaboration with a consultative body. However, the output of the monitoring service system: performance data can be input for the cleansing service system, mining service system or design service system collaboration. An example of rule-value chains within the insurance industry is that an inspector applies a rule model to determine if a vehicle is either repairable or total loss. Based on the result of the rule model the insurance companies start different processes. Although not a collaboration between two organizations; another pattern instantiation is indentified in the business-to-consumer industry: telemedical care for patients [START_REF] Fifer | Critical Care, Critical Choices: The Case for Tele-ICUs in Intensive Care[END_REF]. The patient has physical equipment at home that contains the platform specific rule model. The execution of this model is monitored at the hospital or medical centre. All types of collaboration have the same dimensions characteristics, see Table 7. Regarding the audit service system and version service system no advantages can be distinguished regarding bilateral and multilateral collaborations. In extended collaboration consultative bodies and individual organisations need to determine how to manage local and network versions of the various business rules concepts. However, one can argue this can be considered overhead instead of added value. The analysis of our initial model revealed two additional variables 1) type of partners and 2) privacy. The latter indicates that sanitising and/or anonymizing should be taken into account when sharing input data among organisations. Research addressing sanitising and/or anonymizing data already has been conducted in various fields. Solutions can be adopted and adapted from these fields. Our research revealed rule value chains and more specific the n-order compliance concept. N-order compliance raises questions in terms of organisational, social, cultural, political and economical effects and consequences [START_REF] Legner | Business Interoperability Research: Present Achievements and Upcoming Challenges[END_REF]. Research indicates that 3 rd and 4 th order compliancy is a common grade of compliance. Are both levels the optimal form of interoperability from a political or economic viewpoint? From a political viewpoint most countries distinguish between policy makers (ministries) and a central government responsible for translating and executing policies. What effects would 1 st order compliancy have on the political relationship? From an economic viewpoint an interesting question is: which savings can be achieved when realising 1 st order compliance? 
Although limited, research on the economic assessment of business interoperability shows improvements in throughput and cycle time and a reduction of transaction costs [START_REF] Legner | Business Interoperability Research: Present Achievements and Upcoming Challenges[END_REF]. How do these concepts relate to the various forms of n-order compliance? Analysing the four dimensions for underlying trends reveals that both development and implementation responsibility vary per individual service system and per organisation. The ownership of the input for a specific service system in all cases, except for the design service system, lies with the providing organisation. This comes as no surprise: for organisations to derive value from the collaborated service systems, the information needs to be contextualized for the specific information they own, and this can only be achieved by contextualizing the input. The ownership of the output of the individual service systems follows the conceptual lifecycle of the four high-level business rules subjects: 1) data, 2) non-platform-specific rule model, 3) platform-specific rule model and 4) value proposition [START_REF] Zoet | Business Rules Based Perspective on Services: A Mixed Method Analysis[END_REF]. Setting the design service system aside, from a rule-chain perspective the ownership changes at each of the four lifecycle points. When the output is data, the ownership is shared. The providing party has ownership of the non-platform-specific and platform-specific rule models, while ownership of the value proposition lies with the receiving party. The reason the rule-chain perspective deviates from this pattern is that 1st order compliance is considered preferable: to realize 1st order compliance, the ownership of the output must lie with the organisation that has the knowledge and power to provide it.

Conclusion

Business rules are a key denominator of an organization's success. Likewise, the ability to collaborate is important. We therefore set out to answer the research question: "What is the relation between forms of business interoperability and the organisation's business rules management service systems in the perspective of value proposition?" To answer this question, we first identified the different types of interorganisational collaboration, after which the collaboration types were combined with the eleven service systems of a BRMS. Explanatory research further operationalized the relation: we used data collected from a workshop and secondary data sources such as press reports, analyst reports and business rules management project documentation. The aim of this study was to provide insights into the different forms of interoperability that are related to an organisation's BRMS. The results have limitations: the insights are derived from a limited set of data and the existing knowledge base in the area of business rules management. Building on the results of this explorative research, further research should be performed. The main subject of future work will be the further validation of the identified interoperability possibilities, in order to assess their practical relevance besides establishing their theoretical foundation. Another direction for future work is creating business, technical and process building blocks to realize interoperability.

Fig. 1. Schematic overview of the researched relations between concepts
Fig. 3. Schematic overview of n-order compliancy
Table 1. Characteristics dimensions of interoperability in relation to the Mining Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Providing Org. / Receiving Org.

Table 2. Characteristics dimensions of interoperability in relation to the Cleansing Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Providing Org. / Receiving Org.

Table 3. Characteristics dimensions of interoperability in relation to the Design Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Rule-Chain: 1st order party, Non Rule-Chain: Providing Org. | Rule-Chain: 1st order party, Non Rule-Chain: Providing Org. | Rule-Chain: 1st order party, Non Rule-Chain: Consortium
Ownership Output: Rule-Chain: 1st order party, Non Rule-Chain: Providing Org. / Receiving Org. | Rule-Chain: 1st order party, Non Rule-Chain: Individual Org. / Receiving Org. | Rule-Chain: 1st order party, Non Rule-Chain: Consortium
Development Responsibility: Rule-Chain: 1st order party | Rule-Chain: 1st order party

Table 4. Characteristics dimensions of interoperability in relation to the Validation Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Providing Org. | Providing Org. | Providing Org.
Development Responsibility: Rule-Chain: 1st order party, Non Rule-Chain: Receiving Org. | Rule-Chain: 1st order party, Non Rule-Chain: Receiving Org. | Rule-Chain: 1st order party, Non Rule-Chain: Consortium
Implementation Responsibility: Rule-Chain: Providing Org., Non Rule-Chain: Providing Org. | Rule-Chain: Providing Org., Non Rule-Chain: Providing Org. | Rule-Chain: Providing Org., Non Rule-Chain: Providing Org.

Table 5. Characteristics dimensions of interoperability in relation to the Deployment Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Providing Org. | Providing Org. | Consortium
Development Responsibility: Receiving Org. | Receiving Org. | Receiving Org.
Implementation Responsibility: Receiving Org. | Receiving Org. | Receiving Org.

Table 6. Characteristics dimensions of interoperability in relation to the Execution Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Receiving Org. | Receiving Org. | Receiving Org.
Development Responsibility: Providing Org. | Providing Org. | Providing Org.
Implementation Responsibility: Receiving Org. | Receiving Org. | Receiving Org.

Table 7. Characteristics dimensions of interoperability in relation to the Monitoring Service System
Dimension: Bilateral | Multilateral | Extended
Ownership Input: Providing Org. | Providing Org. | Providing Org.
Ownership Output: Receiving Org. / Providing Org. | Receiving Org. / Providing Org. | Consortium
Development Responsibility: Receiving Org. / Providing Org. | Receiving Org. / Providing Org. | Receiving Org. / Providing Org.
Implementation Responsibility: Receiving Org. / Providing Org. | Receiving Org. / Providing Org. | Receiving Org. / Providing Org.
34,426
[ "1002376", "1002377" ]
[ "362517", "362517" ]
01474208
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474208/file/978-3-642-36796-0_14_Chapter.pdf
Sini Ruohomaa email: [email protected] Lea Kutvonen email: [email protected] Behavioural Evaluation of Reputation-Based Trust Systems Keywords: Trust management, reputation systems, inter-enterprise collaboration, simulation-based benchmarking, attack models In the field of trust and reputation systems research, there is a need for common and more mature evaluation metrics for the purpose of producing meaningful comparisons of system proposals. In the state of the art, evaluations are based on simulated comparisons of how quickly negative reputation reports spread in the network or which decision policy gains more points against others in a specific gamelike setting, for example. We propose a next step in identifying criteria for a maturity model on the behavioural analysis of reputation-based trust systems. Introduction The goal of this methodological work is to advance the state of the art of evaluating reputation-based trust management systems. We find that the field currently suffers from a confusion of what kind of evidence simulation experiments can provide exactly, and there is a need for credibly evaluating the attack resistance and robustness of proposed systems [START_REF] Gollmann | From access control to trust management, and back -a petition[END_REF]. We acknowledge that other attributes such as usability [START_REF] Marsh | Rendering unto Caesar the things that are Caesar's: Complex trust models and human understanding[END_REF][START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF], viability [START_REF] Zibuschka | On some conjectures in IT security: the case for viable security solution[END_REF], deployability [START_REF] Bonneau | The quest to replace passwords: A framework for comparative evaluation of web authentication schemes[END_REF] and adjustability to different business situations [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF] require attention as well. Instead of a complete maturity model addressing all these aspects, our focus here is on trying to advance behavioural evaluation of reputation-based trust systems specifically. We first summarize the problem setting of the field from the point of view of inter-enterprise collaborations, which are the context of our work [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]. Collaborations take place between autonomous business services operating in an open service ecosystem. New previously unknown or little known actors can join the ecosystem, and old ones may leave. In this environment, each actor has different goals, which change over time, and it must protect its own integrity by making decisions on whether it trusts another service enough to collaborate with it. Trust management is the activity of upkeeping and processing information which trust decisions are based on, and a trust management system is an automation tool for the purpose. A trust decision is made by a trustor, gauging its willingness to engage in a given action with a given trustee, given the risks and incentives involved. The key input to a trust decision is reputation information, which is commonly used to evaluate the subjective probability that the trustee will either behave according to the collaboration contract (cooperate), or break the collaboration contract (defect). 
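As a minimal illustration of how reputation information can feed such a decision, the sketch below estimates the subjective probability of cooperation from counted experiences and weighs it against the stakes of the action. The smoothing prior and the gain and loss figures are assumptions made purely for the example, not values prescribed by the systems discussed in this paper.

```python
def estimate_p_cooperate(positive: int, negative: int) -> float:
    """Subjective probability of cooperation from counted experiences,
    with a Laplace-style prior so unknown actors are not scored 0 or 1."""
    return (positive + 1) / (positive + negative + 2)

def willing_to_collaborate(positive: int, negative: int,
                           gain: float, loss: float) -> bool:
    """Engage only if the expected outcome of the action is positive:
    p * gain must outweigh (1 - p) * loss."""
    p = estimate_p_cooperate(positive, negative)
    return p * gain > (1 - p) * loss

# A trustee with 8 cooperative and 1 defective outcome, for a deal where
# cooperation is worth 1000 and defection would cost 4000 (assumed figures).
print(willing_to_collaborate(8, 1, gain=1000, loss=4000))
# True: p is about 0.82, so 0.82*1000 > 0.18*4000
```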
Reputation information is divided into two categories: First-hand experiences are gained from the trustor monitoring the outcomes of actions it has engaged in itself, and are generally considered to be error-free within the limits of observability. External experiences are gained from third-party recommenders based on their own first-hand experiences; these actors may have an incentive to provide incorrect information deliberately or can simply disagree based on having observed different kinds of behaviour. The aim of evaluating reputation-based trust systems in research is often phrased in terms of quantifying an improvement to existing work. The prevalent approach of evaluating trust and reputation systems relies on using simulations to produce evidence that a given trust or reputation system is able to correctly identify well-and misbehaved actors of specific kinds (e.g. [START_REF] Kerr | Smart cheaters do prosper: Defeating trust and reputation systems[END_REF]). These simulations are typically based on fixed stereotypical behaviour patterns (e.g. [START_REF] Schlosser | On the simulation of global reputation systems[END_REF]), which falls under the field of reliability rather than security [START_REF] Gollmann | From access control to trust management, and back -a petition[END_REF]. When scoring policy behaviour, it is tempting to set up a benchmark of measuring "correct" and "incorrect" decisions given specific evidence. Unfortunately, this is an oversimplification that relies on a set of quite fragile assumptions: that reputation information captures reality accurately, service providers act predictably enough to follow stereotypical patterns, and actors in the marketplace, especially the attackers, are not particularly resourceful. None of these assumptions can be said to be true in an ecosystem of inter-enterprise collaboration. This discrepancy causes a real danger that by introducing reputation measures into the market with inadequate analysis of their relevant behaviour we end up inviting rampant reputation fraud, and advance ecosystem deterioration by introducing a metric that does not serve its purpose. Farmer and Glass have analyzed the effects of deployed web reputation systems in the real world [9, ch. 5], while deployability and market acceptance analysis of system proposals also gain increasing attention in the field of security [START_REF] Zibuschka | On some conjectures in IT security: the case for viable security solution[END_REF][START_REF] Bonneau | The quest to replace passwords: A framework for comparative evaluation of web authentication schemes[END_REF]. The main overarching goal of behavioural analysis of policies of any kind is to support policy selection, but this choice reflects the actors' different goals. There are no objectively correct answers. Summarizing policy behaviour given specific input patterns helps this comparison, even if there is no universal correct behaviour. As a special case, the purpose of a reputation-based trust management system is to detect and deter misbehaviour, so we should learn what its vulnerabilities and other costs are. These cannot be benchmarked by fixed loads, but have to be analyzed per system; from a security perspective, it is obviously not enough to conclude that a system is robust against the most popular attack of last year. Higher-level classifications of attacks may support vulnerability analysis in the form of a checklist. 
Our research question is: what kinds of tools can we apply to evaluate whether a reputation-based trust management system fulfills its behavioural requirements, and particularly, what metrics could be organized as a reusable benchmark between systems and how? Section 2 provides background on reputation-based trust management, how trust management systems are directed by policy, and summarizes our simulation experiments and attack resistance evaluation from earlier work. Section 3 presents the state of the art on evaluation methods in the field. Section 4 discusses the possibilities and limitations of different methods, such as simulation experiments in analyzing trust and reputation systems, and the ways to evaluate attack resistance based on methods adopted from computer security. Section 5 concludes.

Studying the Behaviour of Trust Management Systems

To support the discussion on the development of evaluation methods, we use our own earlier work on trust management as an illustrative example in Section 2.1. During our simulation work, summarized in Section 2.2, we learned that the current evaluation methods could benefit from the steps we propose in Section 4.

Reputation-Based Trust Management

The purpose of a trust management system is to handle routine trust decisions on behalf of a human user and to collect and manage the relevant input needed for them, most notably first-hand and third-party reputation information. Third-party experiences must be evaluated for credibility and incorporated into the local body of reputation information with care, as they may include low-quality or intentionally fraudulent data. Non-routine decisions, which for example involve high risks or cannot be automatically decided on due to insufficient information, must be forwarded to a human user to decide on. This division is explicitly configured. In order for a deterministic automation system to adjust to different business situations, we must separate policy from implementation in the system and make the former modifiable during runtime. A sufficiently flexible information model allows the automated rules to handle quite complex contexts, such as a situation where the reputation of a minor actor in the collaboration is not spotless, but the monetary losses of any errors it may make are covered by insurance and the collaboration as a whole needs someone to fulfil the role in order to happen. The establishment of metapolicy, which determines when a situation is routine and when it requires human intervention, in turn picks out cases that are not suitable to be handled automatically. This improves the trustworthiness of the decision-making system itself [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF]. The two main policies of a reputation-based trust management system are the trust decision policy and the reputation update policy. The trust decision policy determines, based on input such as reputation information, whether we are willing to collaborate with an actor or not. The reputation update policy, on the other hand, establishes how to handle new reputation information; among other things, it must determine how much weight information from external sources is given over local observations [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF].
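A minimal sketch of this separation of policy from implementation is given below: the two policies are pluggable components, and a metapolicy routes non-routine cases to a human. The interfaces, field names and thresholds are our own illustration under simple assumptions, not the architecture of any particular system.

```python
from typing import Protocol

class TrustDecisionPolicy(Protocol):
    def decide(self, trustee_id: str, action: str) -> bool: ...

class ReputationUpdatePolicy(Protocol):
    def update(self, trustee_id: str, experience: dict) -> None: ...

class TrustManager:
    """Automates routine decisions; both policies can be swapped at runtime."""
    def __init__(self, decision_policy: TrustDecisionPolicy,
                 update_policy: ReputationUpdatePolicy,
                 risk_limit: float, min_evidence: int):
        self.decision_policy = decision_policy
        self.update_policy = update_policy
        self.risk_limit = risk_limit      # metapolicy: monetary risk ceiling
        self.min_evidence = min_evidence  # metapolicy: evidence floor

    def is_routine(self, risk: float, evidence_count: int) -> bool:
        """Metapolicy: high-risk or poorly evidenced cases go to a human."""
        return risk <= self.risk_limit and evidence_count >= self.min_evidence

    def handle_request(self, trustee_id: str, action: str,
                       risk: float, evidence_count: int) -> str:
        if not self.is_routine(risk, evidence_count):
            return "escalate-to-human"
        return "accept" if self.decision_policy.decide(trustee_id, action) else "reject"
```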
A trust decision policy must balance the number of possible partners against the requirement for positive evidence, while a reputation update policy must weigh information quality and credibility against the amount of information that is available to support decision-making. As reputation influences trust decisions and, through that, collaboration opportunities, it attracts manipulation attempts on competitors' and one's own reputation. This causes challenges for finding a robust reputation update policy that can still utilize the information available to support trust decisions. Example attacks on reputation systems [START_REF] Yao | Addressing common vulnerabilities of reputation systems for electronic commerce[END_REF] include undeserved negative feedback, collusions of multiple actors to skew a specific actor's reputation up- or downwards, or an actor stuffing the ballot by creating multiple seemingly independent identities in a Sybil attack [START_REF] Douceur | The Sybil attack[END_REF]. When selecting a reputation update policy to protect the trustor from being misled by external reputation information, we can roughly divide the trustees into four categories:
- Well-reputed actors, recommended as trustworthy by high-credibility sources,
- Promising actors, recommended as trustworthy by low-credibility sources but generally unknown to high-credibility sources,
- Shunned actors, warned to be untrustworthy either by high-credibility sources or by unanimous low-credibility sources, and
- Mysterious actors, receiving either very few or contradictory recommendations.
While all of these categories are more or less subjective perceptions rather than proof of the trustees' actual behaviour and trustworthiness, a good reputation system should generally promote the well-reputed actors and weed out the shunned actors. The two other classes require more careful balancing. A very risk-averse trustor will prefer not to collaborate with the mysterious actors, independent of whether they offer better terms of service. Should everyone adopt this approach, though, newcomers will have no chance of proving themselves, targets of defamation cannot clear their name, and the service ecosystem will begin to deteriorate. The promising actors face a problem similar to newcomers in that they have not proven themselves enough, but at least they have some recommendations supporting them. On the other hand, it is also easier for a malicious attacker to appear as one of the promising actors rather than a well-reputed one, or to claim that any negative recommendations about it result from reputation attacks rather than honest feedback.

Evaluating Reputation-Based Trust Management Systems

When evaluating the behaviour of a reputation-based trust management system, the usual interest is in studying whether a given trust decision or reputation update policy responds to a specific requirement, such as identifying actors that follow a specific type of misbehaviour as misbehaving. For trust decision policies, the usual appropriate reaction is then to not engage in collaboration with the actor, while for reputation update policies, it is to reject the likely fraudulent information.
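For such an evaluation, the four trustee categories introduced above can be approximated with a simple classification rule over the recommendations a trustor has collected. The 0.7 credibility threshold and the data layout below are assumptions made purely for illustration, not part of any evaluated system.

```python
HIGH_CRED = 0.7  # assumed boundary between high- and low-credibility sources

def classify_trustee(recommendations):
    """recommendations: list of (credibility in [0, 1], is_positive) pairs."""
    if not recommendations:
        return "mysterious"
    high = [pos for cred, pos in recommendations if cred >= HIGH_CRED]
    low = [pos for cred, pos in recommendations if cred < HIGH_CRED]
    if high and not any(high):
        return "shunned"        # credible sources warn against the trustee
    if not high and low and not any(low):
        return "shunned"        # unanimous low-credibility warnings
    if high and all(high) and (not low or all(low)):
        return "well-reputed"   # credible praise, nothing contradicting it
    if not high and low and all(low):
        return "promising"      # only low-credibility praise so far
    return "mysterious"         # few or contradictory recommendations

print(classify_trustee([(0.9, True), (0.8, True)]))    # well-reputed
print(classify_trustee([(0.3, True), (0.4, True)]))    # promising
print(classify_trustee([(0.9, False)]))                # shunned
print(classify_trustee([(0.9, True), (0.2, False)]))   # mysterious
```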
In earlier work, we have summarized the simulations and analysis of example trust decision policies [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF]; below, we summarize a reputation update policy experiment, where we have compared the effects that four reputation update policies have on trust decisions when the trust decision policy remains fixed [3, ch. 6.3]. Both experiments share a similar structure: the policies under scrutiny are applied to a set of different simulated experience streams as the sole input. Some of the streams have been optimized against each policy for the simulated attacker to defect as efficiently as possible. Our experiments make two contributions [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]: first, the behaviour of a given decision or reputation update policy is illustrated by exposing it to different representative experience streams and plotting the resulting trust decision score. Second, the limitations of each policy are demonstrated by defining the behaviour of an optimal attacker, and calculating how much it is possible for it to benefit by defecting while it maintains its reputation above the level of positive trust decisions. A reputation update policy determines both whether a new experience is incorporated into an agent's private reputation information storage, and how much weight it should be given in future decision-making. A key input to this decision is the source-dependent credibility of the experience. The studied reputation update policies have been selected to represent different types of solutions to this choice, and we have visualized how effectively they discriminate against ill-behaved actors. The baseline policy for comparison is "Accepting", which simply incorporates all experiences independent of their credibility. The "Weighted" policy offsets the impact of dubious experiences by weighting them by their credibility: as we consider source credibility to be represented by a real number c ∈ [0, 1], instead of incrementing the counter for the matching type of experiences by 1 per experience, this policy increments it by c instead. The "Fixed-cutoff" policy ignores all experiences below a minimal credibility limit C1, and the "Variable-cutoff" policy compares the average credibility C2 of the external experiences amassed so far to the new item's source-based credibility c and accepts the experience if c ≥ C2. This is to ensure that the trustor is open to new experiences when it has nothing better, but does not dilute its reputation storage with low-quality information when it has access to more credible experiences. The policies in question were selected to be understandable to a projected end user, and to take advantage of different features of the information model of the system in order to illustrate its advantages. We matched our experience streams to the previously discussed well-reputed actors, promising actors with positive but low-credibility reputations, and mysterious actors who receive contradictory recommendations: positive reports from high-credibility sources, and negative from low-credibility sources. Shunned actors were covered in the first simulation [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF][START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF].
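The four update policies can be paraphrased in code roughly as follows. The sketch reduces the information model to weighted per-trustee counters, and the class and variable names (and the example cutoff value C1 = 0.5) are our own illustrative assumptions rather than the original implementation.

```python
class ReputationStore:
    """Counts positive/negative experiences per trustee; a policy may add a
    weight smaller than 1 when it discounts an experience by credibility."""
    def __init__(self):
        self.counts = {}                  # trustee_id -> {"positive": float, "negative": float}
        self.accepted_credibilities = []  # history used by the Variable-cutoff policy

    def add(self, trustee_id, positive, weight=1.0):
        entry = self.counts.setdefault(trustee_id, {"positive": 0.0, "negative": 0.0})
        entry["positive" if positive else "negative"] += weight


def accepting(store, trustee_id, positive, credibility):
    store.add(trustee_id, positive)                      # take every report at face value

def weighted(store, trustee_id, positive, credibility):
    store.add(trustee_id, positive, weight=credibility)  # increment by c instead of 1

def fixed_cutoff(store, trustee_id, positive, credibility, c1=0.5):
    if credibility >= c1:                                # drop experiences below the limit C1
        store.add(trustee_id, positive)

def variable_cutoff(store, trustee_id, positive, credibility):
    history = store.accepted_credibilities
    c2 = sum(history) / len(history) if history else 0.0   # average credibility so far
    if credibility >= c2:                                   # accept the experience if c >= C2
        store.add(trustee_id, positive)
        history.append(credibility)
```

Run against the streams described above, the Accepting and Weighted variants admit every report while the two cutoff variants trade information volume for robustness, matching the behaviour discussed in the text.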
Additional streams demonstrated optimal attacker behaviours. Optimal attackers were designed to keep their reputation high enough to always ensure a positive trust decision, and the actions they could choose from were cooperating, faking a positive low-credibility experience to boost their reputation, and defecting. Each action was assigned a cost based on its impact [START_REF] Ruohomaa | Trust and distrust in adaptive inter-enterprise collaboration management[END_REF]. The agent's task was to maximize its score per action taken [START_REF] Russell | 6: Adversarial search[END_REF] against each target policy separately. For example, the attacker defecting with a major negative monetary effect to the trustor would gain the attacker +6 points, a minor negative effect +2 points, generating a low-credibility fake experience would be a 0-cost action independent of whether it implied a major or minor positive experience, and actually cooperating would cost -1 or -3 points depending on whether the effect to the trustor was minor or major positive, respectively. For example, the optimal attacker could generate fake experiences and then defect with major negative effect against the Accepting and Weighted policies, but it would require more fake experiences per defection against the Weighted policy. Both policies mainly suit environments where the vast majority of information is truthful, and the impact of the occasional error is low; they do not work against quickly mass-produced fake experiences. The Fixed-cutoff policy refused all suspicious experiences, but is left with fewer experiences and will not be able to take advantage of promising actors with low-credibility positive experiences only. The Variable-cutoff policy, in turn, could be circumvented with a large number of low-credibility reports before the first defection. We have discussed prompt reaction to notable changes in behaviour in other work [START_REF] Ruohomaa | Detecting and reacting to changes in reputation flows[END_REF], and proposed other extensions to the example policies in the thesis [3, 6.3]. State of the Art in Evaluation Metrics for Reputation-Based Trust Systems A reputation-based trust management system implements the preferences of its user, and as such there is no objective "correct" result that could be validated. To discuss the state of the art in simulation experiments, we present experimentation approaches from two categories: simulating marketplace resistance against attackers following given behaviour patterns, and simulating a single actor's competitiveness in a marketplace. The first category corresponds to mechanism design. It sets all actors to use the same decision policies and measures how well the marketplace as a whole resists different kinds of misbehaviour. The second category represents agent design, pitting different decision policies against each other in the same marketplace. It measures an agent's competitiveness on the marketplace, given an existing mechanism it needs to adjust to. Reputation Systems in Electronic Marketplaces Related work presents simulation experiments on the behaviour of different accumulative and probabilistic reputation systems in an electronic market-place [START_REF] Schlosser | On the simulation of global reputation systems[END_REF][START_REF] Nurmi | Perseus -a personalized reputation system[END_REF][START_REF] Jøsang | Simulating the effect of reputation systems on e-markets[END_REF]. 
In such a marketplace, intelligent agents, which correspond to our service providers, perform pairwise brief transactions of buying and selling goods. The marketplace is given a distribution of agents with different behaviour profiles, and each agent type has a decision policy; typically the reputation update policy is equal between all agents, and all experience information is shared. The simulation then measures for example the average number of transactions taken with a given type of agent (honest, malicious, etc). The basic behaviour profiles of agents are typically very straightforward, such as "honest agents always carry out transactions honestly and give fair ratings", while "malicious agents act honestly or dishonestly by chance, and always give negative ratings" [START_REF] Nurmi | Perseus -a personalized reputation system[END_REF]. More complex behaviour can be tied to the marketplace as a whole; for example, a "spamming" agent can otherwise act honestly, but always rate other agents negatively in order to make itself more attractive in comparison [START_REF] Nurmi | Perseus -a personalized reputation system[END_REF], or an agent may be an opportunistic defector, adjusting its behaviour based on whether there is anyone in the marketplace who will transact with it [START_REF] Jøsang | Simulating the effect of reputation systems on e-markets[END_REF]. Schlosser et al. define a behaviour profile for a "disturbing" agent as one who first builds a high reputation with good transactions, and then uses up the reputation so gained by defection [START_REF] Schlosser | On the simulation of global reputation systems[END_REF]. Honest agents all use the same decision algorithm, and if they transact frequently with malicious agents, the reputation system has failed to protect the marketplace. Based on this definition, few reputation systems are resistant to the optimal attacker model -even the "disturbing" behaviour model [START_REF] Schlosser | On the simulation of global reputation systems[END_REF] turns out to be aptly named, when in fact it is nothing more than a model for a selfish agent behaving rationally within the limitations set by the environment. To be able to give conclusive results, the tools of game theory require strict formal abstraction of the environment and agent behaviour; the core problem then becomes how to formulate a question within this vocabulary so that it is "solvable", while ensuring that the result still gives some useful information about real marketplaces. One of the aspects left out by this simplification is the social control or deterrence effect of these reputation-based sanctioning mechanisms. In other words, the simulations do not measure how much the reputation system cuts down the expected gains from optimized misbehaviour, although they may show that a specific fixed negative behaviour pattern gains less in one system than another. The reputation system will inevitably be one step behind a rational attacker, so in the prediction of attacks our systems inevitably fail; the goal is therefore damage control and reducing the payoff of attacks. It should be noted that reputation loss can only ever deter an actor who plans to remain on the market in the future, so final sanctioning should come from the slower but generally effective judicial system. Our own simulations have studied how a given agent survives against rational selfish agents. They simplify the interaction with other actors into experience input streams. 
We then specify policies that drop optimal attacker gains below a certain level to reflect the deterrence effect. The difference between fixed and optimal attackers is that within the same cost model, all attacks will bring equal or less gain than the optimal one. This allows policy comparisons. The challenge is finding a sufficiently realistic cost model. As further examples of analysis against a given attack type, Margolin and Levine have measured the cost of successfully executing a Sybil attack [START_REF] Margolin | Quantifying resistance to the Sybil attack[END_REF], or the cost of extra "votes" gained through the attack in different schemes, and Srivatsa et al. have aimed to minimize attacker gains from fixed oscillatory behaviour such as the aforementioned "disturbing" agent model [START_REF] Srivatsa | TrustGuard: countering vulnerabilities in reputation management for decentralized overlay networks[END_REF]. Competitive Agent Simulations In competitive agent simulations, agents and policies are pitted against each other in a fixed environment. Each actor aims to maximize its own gains. The format of shared reputation information is fixed, but agents can choose their internal data representation themselves. The Agent Reputation and Trust (ART) testbed [START_REF] Fullam | A specification of the Agent Reputation and Trust (ART) testbed: experimentation and competition for trust in agent societies[END_REF] has attracted notable attention, but is no longer maintained. The Trust and Reputation Experimentation and Evaluation Testbed (TREET) [START_REF] Kerr | TREET: the Trust and Reputation Experimentation and Evaluation Testbed[END_REF] is a more recent proposal. It is a more flexible comparison tool, but does not include the yearly competition forum that helped ART attract wider research attention. Convincing the research community to adopt a specific testbed or a benchmark is a nontrivial task, and the differences in domain requirements make this even more difficult. The ART testbed simulates a marketplace of service providers competing to sell their services [START_REF] Fullam | A specification of the Agent Reputation and Trust (ART) testbed: experimentation and competition for trust in agent societies[END_REF]. The provided service is art evaluation for a customer: producing a real number as close to the unknown correct answer as possible. There are a number of limitations and costs related to providing the service: the agent can evaluate some art correctly, or get incorrect results and ask for help from others to validate its results. A reputation system is included to support requesting the help of other actors. The number of actors is low, 10-20, so in practice collecting direct experience on all of them is reasonably easy. The learning agents in the testbed should maximize their own measured gains. The testbed specifies fixed prices for how much customers pay for an evaluation ($100), the cost of asking for an evaluation from another actor ($10), and the cost of asking for a reputation value (a real number between 0 and 1) from another actor ($1) [START_REF] Teacy | The ART of IAM: The winning strategy for the 2006 competition[END_REF]. In addition, the agent can spend an arbitrary amount of money for its own evaluation, with the quality of information depending on the money spent. Teacy et al. provide further analysis of the ART testbed [START_REF] Teacy | The ART of IAM: The winning strategy for the 2006 competition[END_REF]. 
There are a few factors that limit ART's usability as a benchmark environment. Besides limitations of the information model of the testbed itself [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF][START_REF] Kerr | TREET: the Trust and Reputation Experimentation and Evaluation Testbed[END_REF], the design of the testbed has misdirected attention towards secondary features of the game: the winning strategy focused its effort on determining the most profitable amount of money to invest in generating its own opinion, and in general, very little reputation was exchanged between any of the agents [START_REF] Teacy | The ART of IAM: The winning strategy for the 2006 competition[END_REF]. As noted in the evaluations of ART [START_REF] Teacy | The ART of IAM: The winning strategy for the 2006 competition[END_REF], we cannot conclude that an agent's competitiveness in the simulated marketplace necessarily has anything to do with the policy performing well for a real enterprise operating in a real marketplace. The benefit of competitive testbeds to fixed, deterministic benchmark scoring is that the evaluation system is adaptive: instead of optimizing policies against a fixed setup, researchers must prepare for tradeoffs in a more uncontrolled environment, which brings in new aspects of realism from the point of view of the system adapting to its environment. Contests attract researcher attention for psychological reasons as well, and the feedback and fame for winning can help motivate adjusting one's work to a given common framework of evaluation. This sets high demands for the evaluation framework, which must iteratively aim for a relevant abstraction of the marketplace. There are limitations to the rational self-interested agent design approach as well: When agent fitness is observed in isolation, ecosystem-wide benefits of the reputation system, such as altruistic punishment [START_REF] Fehr | The nature of human altruism[END_REF] and social pressure to follow contracts [START_REF] Akerlof | The market for "lemons": Quality uncertainty and the market mechanism[END_REF], can easily become eliminated from the scope of the simulation. While online business is no doubt competitive, a market for inter-enterprise collaboration cannot sustain itself on short-term self-interest alone [START_REF] Akerlof | The market for "lemons": Quality uncertainty and the market mechanism[END_REF]. This may become a notable blind spot for the metric. Benchmarking Trust Management Systems Like most measurement at its core, simulation experiments are illustrative. They reflect their setup, first and foremost, and the results require validation even for reasonably objective measures such as raw performance. Fixed simulations do not test the system's resistance against anything else than the chosen specialized behaviour patterns. As the ART testbed competition shows, even pitting algorithms against each other in a tesbed may teach us very little about their relative fitness in the world outside the testbed. Test loads from actual ecosystems, once available, will also be selected illustrative datasets. 
The behavioural requirements of a system should consider four key questions: 1) What kind of normal, constructive behaviour is expected in the system, 2) how effectively does the system recover from expected problems that are not calculated attacks, such as temporary malfunctions, 3) are the incentives the system creates in line with its role in the domain, and 4) how effectively does the system detect and deter both direct misbehaviour in the domain, and misbehaviour towards the system itself, such as reputation fraud? The first two categories can be addressed with fixed-input simulations suitable for automated benchmarking. The latter two measure the success of the system in promoting desired behaviour and weeding out misbehaviour; as both incentives and attacks must assume a rational actor, they are not possible to capture by fixed behaviour patterns. Repeatable Simulations with Fixed Loads Like reputation itself, simulated experience about reputation-based trust management systems is a subjective, simplified tool for comparison which only gains meaning when coupled with a purpose-driven valuation. A fitting purpose for applying the same test case across multiple systems would then be to provide classifications to aid policy comparison. While benchmarks cannot capture notable differences in the information models of different systems, they can be used to summarize policies built on compatible information models. The first, often inexplicit test done by a simulation is whether the core system is feasible to implement and run. Related to this, benchmark loads can be used to test the efficiency and scalability of a system that has non-trivial complexity, in terms of processing, communications and storage load caused by the decision-making and reputation processes. A well-argued mathematical model of the system complexity can be accepted as proof by itself, but a simulation result requires validation, as the implementation and the selection of loads adds a layer of possible measurement error. If the system is implementable, the main question becomes whether it supports the intended activities of the user. In order to define a valuation of what is expected as normal behaviour, the domain-specific requirements must be made explicit. A set of metrics (cf. [START_REF] Bonneau | The quest to replace passwords: A framework for comparative evaluation of web authentication schemes[END_REF]) allows a categorization, and the domain-specific requirements guide metric selection. Metrics should reflect the goals of the system so that its success in fulfilling them can be evaluated. The subjective goals of a system designer can be very specific, however, while comparison across multiple systems should leave space for different policy adopter preferences within the domain as well. As an example of the importance of explicit assumptions, Kerr and Cohen measured that the reactivity of systems that assume truthful reports is better than of those who evaluate and weigh incoming experiences for credibility [START_REF] Kerr | Smart cheaters do prosper: Defeating trust and reputation systems[END_REF]; on the other hand, in a typical competitive environment, not being able to resist fraudulent reports would instead be a critical failure that renders the system unusable. 
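A fixed-input benchmark of this kind can be as simple as replaying the same scripted experience streams through each candidate policy and tabulating the resulting decisions. The stream definitions, the toy policies and the majority decision rule below are illustrative assumptions, not a proposed standard load.

```python
# Scripted experience streams: (credibility, is_positive) tuples per scenario.
FIXED_LOADS = {
    "well-reputed": [(0.9, True)] * 10,
    "promising":    [(0.3, True)] * 10,
    "shunned":      [(0.9, False)] * 5,
    "recovering":   [(0.9, False)] * 3 + [(0.9, True)] * 10,  # malfunction, then recovery
}

def run_benchmark(policies, decision_rule):
    """policies: name -> reputation update function; decision_rule maps the
    resulting counters to an accept/reject decision. Returns a summary table."""
    table = {}
    for policy_name, update in policies.items():
        row = {}
        for scenario, stream in FIXED_LOADS.items():
            counters = {"positive": 0.0, "negative": 0.0}
            for credibility, positive in stream:
                update(counters, positive, credibility)
            row[scenario] = decision_rule(counters)
        table[policy_name] = row
    return table

# Example wiring with two toy policies and a simple majority decision rule.
def accept_all(counters, positive, credibility):
    counters["positive" if positive else "negative"] += 1

def credibility_weighted(counters, positive, credibility):
    counters["positive" if positive else "negative"] += credibility

majority = lambda c: "accept" if c["positive"] > c["negative"] else "reject"
print(run_benchmark({"accepting": accept_all, "weighted": credibility_weighted}, majority))
```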
Once a domain model has been established, we can use it to define test patterns of constructive behaviour ; this requirement is often taken for granted in systems concentrating on foiling a specific attack, which may lead to an unusable system in practice. Examples of interesting behaviour to simulate include how the system treats cooperative service providers with different capabilities for service provision, or how a newcomer with no reputation data entering the system is able to get started. On the level of reputation and recommender credibility, the system should be able to take advantage of the reputation reports of new actors besides the old ones, and serve cooperative reporters, also if their observations genuinely differ from those of the majority. There are no objectively correct solutions even for constructive behaviour: for example the goal of supporting newcomers is often in conflict with the goal of defending against re-entry attacks. As a reliability test, a set of test patterns can be defined to illustrate recovery from problems as well, as long as they can be modelled statistically for benchmarking. Examples include reactivity to relevant changes in behaviour, how a service can recover its reputation after a temporary malfunction causes it to become unreliable for a while, a well-behaved user suffering and recovering from a defamation attack of fraudulent negative reports against it, or even load balancing for a service whose high reputation makes it too attractive to other actors in the ecosystem. 1Reputation-related problems can occur on two levels as well: the above examples represent the interaction of service provision and reputation, while on the second level actors' credibility as recommenders can suffer a disruption and need recovery. Like newcomer support, recovery support conflicts somewhat with robustness against malicious actors, but is important as a use case because the system is always designed for its non-malicious users. To be accepted by the market and serve its purpose, it must benefit the well-behaved actors enough to offset their cost of participation; otherwise it will not be used. Robustness Analysis When deploying a system that promotes good behaviour and sanctions misbehaviour, we must analyze its effects on rational actors who can adjust their behaviour to maximize their gains. The measurement system creates incentives that affect the behaviour of both benevolent and rational actors aiming to subvert the system. For example, if the actor with the highest number of positive transaction reports has a higher chance of being selected as a collaboration partner, the system provides an incentive to engage in many small transactions rather than a few large ones. These secondary incentives are not necessarily intentional or desired, but they should be included in the analysis of the system. In the field of security, attacks and defenses form a continuous reactive loop, where new attacks are met with new defenses. When we analyze reputation as a sanctioning mechanism, the threat of reputation loss should hopefully deter deliberate attacks by making them more costly. The assumption is therefore that attackers aim to maximize their gains and to minimize costs, which renders them suitable for game-theoretic minimax analysis [START_REF] Russell | 6: Adversarial search[END_REF]. Rational attacker models should always be optimized against a specific policy setting. 
We should generally not depend on security through obscurity, so the attacker should have knowledge of the policy in use and its current reputation. It should have a set of reasonable strategies to choose from, with costs and values assigned according to the resources needed and what we want to defend against. In our attacker model, we allowed optional ways to reach the goal of fraudulently making money off other actors: defection from many small transactions or a few large ones, and boosting reputation through fraudulent sources or by cooperating. We assigned a cost to cooperation, because while in a general market setting collaboration does pay off, we primarily wanted to ensure that defection does not, and selected the measurement accordingly. To support attacker analysis, high-level attack classifications may act as a reusable checklist. Relevant attack categories include misbehaviour in service provisioning, deliberate omissions and misreporting, conspiracy with other malicious actors to increase own reputation, conspiracy to decrease a competitor's reputation, coercion, replay and forgery to influence non-malicious actors' reports, and privacy violations against other actors e.g. through traffic analysis. In addition, the checklist can include rational but non-malicious grievances such as freeriding, i.e. not constructively participating in the aspects of the system that do not benefit the actor directly. One vulnerability grouping based on a review of existing systems has been presented in earlier work [START_REF] Yao | Addressing common vulnerabilities of reputation systems for electronic commerce[END_REF]; for an expansion to a checklist kind of design tool, a tree-structured categorization providing additional levels of detail may provide better usability. Robustness analysis results should be approached with a similar curious scepticism as research prototypes when it comes to evaluating a system's deployability: rather than providing positivistic evidence of specific desirable attributes of the system, the analysis acts as a feedback-collection step in a design science process. In other words, while not coming up with a vulnerability does not prove that it does not exist, going through the exercise of systematically looking for holes in the design is a valuable step in improving system design itself, and a part of good research practice that leads to more mature systems. Methods A benchmark serves best as a summarizing tool that simplifies comparisons. While system designers cannot use a benchmark load to prove the absence of a vulnerability or the objective superiority of a scheme, deployers may well benefit from more standardized comparison frameworks that provide an overview of the tradeoffs made in any specific systems. Towards this goal, we are also working on a first prototype of a simulation-based comparison tool for reputation update policies in order to identify useful patterns for benchmarking. A categorization framework would help in better capturing the fact that different policies represent different tradeoffs between partially conflicting goals, and as a result suit different environments and business needs. What the specific needs of a given environment are can only be determined by the actors in it [START_REF] Kaur | Enabling user involvement in trust decision making for inter-enterprise collaborations[END_REF]. 
Focusing too intently on specific behaviour patterns carries the risk of an overly technology-centric evaluation of the proposed systems, so a balance must be sought between different methods of collecting feedback on a system. Our own simulation experiments represent an initial step in more generally summarizing policy behaviour given a specific input, such as identifying policies that produce positive trust decisions for trustees who are only known through low-credibility sources but have only positive experiences within them ("accepts promising actors"). This could be used as a basis to develop a more comprehensive categorization-based evaluation framework in the style of what Stajano et al. have established for evaluating user authentication [START_REF] Bonneau | The quest to replace passwords: A framework for comparative evaluation of web authentication schemes[END_REF]. For attack resistance, our minimax-based analysis of optimal attackers provides a new angle into this kind of evaluation in comparison to the prevalent methods in the field. We have also summarized how we have applied the method in practice; the analysis demonstrates that making impact information (minor and major positive and negative outcomes) and credibility evaluation available to the automation policies improves the attack resistance of the system [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF].

Conclusion

We have identified benefits and limitations of the state of the art in simulation-driven experimentation on trust and reputation systems, and gauged the potential of different methods for a set of behaviour-related measurement purposes. The two major directions we identify are building benchmarks for the inter-enterprise collaboration setting, and robustness analysis, which is by nature more specialized for each system and its purpose. General classification tools can help with this analysis as well. Benchmarks can be applied to simplify comparisons between systems. One notable extension to the idea is competitions within a given system; we believe the potential for this approach has not yet been exhausted in the state of the art, although the task of designing a high-quality marketplace abstraction is quite demanding. Attack resistance analysis, on the other hand, does not seem to lend itself to simulation.

1 Load balancing through reputation is more relevant for e.g. routing services in mobile ad hoc networks than in heterogeneous environments where all actors use their own policies. In marketplaces, pricing can be used to balance against overload.

Acknowledgments

This research has been performed in the Collaborative and Interoperable Computing (CINCO) group at the University of Helsinki, Department of Computer Science.
43,744
[ "1002378", "1002379" ]
[ "50895", "50895" ]
01474209
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474209/file/978-3-642-36796-0_15_Chapter.pdf
Zhongjie Wang Xiaofei Xu email: [email protected] Xianzhi Wang email: [email protected]
Mass Customization Oriented and Cost-Effective Service Network
Keywords: service network, service composition, mass customization, cost-effectiveness, competency assessment
Traditional service composition approaches face the significant challenge of how to deal with massive individualized requirements. Such challenges include how to reach a tradeoff between one generalized solution and multiple customized ones and how to balance the costs and benefits of the composition solution(s). A service network is a feasible way to cope with these challenges: it interconnects distributed services to form a dynamic network that operates as a persistent infrastructure and satisfies the massive individualized requirements of many customers. When a requirement arrives, the service network is dynamically customized and transformed into a specific composite solution. In this way, mass requirements are fulfilled cost-effectively. The conceptual architecture and the mechanisms for facilitating mass customization are presented in this paper, and a competency assessment framework is proposed to evaluate its mass customization and cost-effectiveness capacities.

Introduction

The emergence of service-oriented technologies and trends, e.g., cloud computing, SoLoMo (Social, Local and Mobile), virtualization, and the Internet of Things, has promoted an increasing number of software services on the Internet. In addition, there are now various offline physical and human services that are virtualized, connected to the Internet, and collaborate with online software services. Such a proliferation of available services has led to a situation where it is time-consuming and costly to select the appropriate service from an extensive range of candidates when building a coarse-grained composite solution to satisfy individualized customer requirements. Within the service computing domain, this issue is a traditional and still popular research topic termed service composition. Although there has been much research in recent years, it has been insufficient in the face of mass customization. To lower service composition and delivery costs, it is better to build a standard solution and provide it to all customers. However, due to the divergence of requirements of different customers, such standardization-based strategies consequently lead to lower degrees of customer satisfaction. In contrast, if multiple fully personalized solutions are constructed based on each customer's preferences, then the cost is bound to increase significantly. Therefore, it is critical to look for a tradeoff between a fully generalized solution and multiple individualized ones, and to balance the costs and benefits of the composition solution(s). In addition, the solutions generated by current approaches are usually temporary, i.e., after the corresponding requirements have been fulfilled, they are released and no longer exist. This further increases costs. Let us take a referential idea from the Internet: consider the scenario in which two users communicate end-to-end via the Internet. It is not necessary to establish a direct physical connection between their computers; instead, a virtual link is dynamically established by a routing mechanism on top of the infrastructure. After the communication finishes, this link is disconnected.
The persistent Internet infrastructure can satisfy any type of individualized communication demand, while end users do not need to know the details of the complex protocols. Using this basic philosophy as a reference, we propose the concept of a "Service Network" (SN). It can be considered a persistent business-level infrastructure, formed by interconnections between distributed services, that is able to satisfy a large number of customers with customized requirements. The nodes in an SN include various services (e.g., e-services, human services, information, and resources), and the interconnections between them are information exchanges and functional invocations following interoperability protocols such as SOAP and REST. As shown in Fig. 1, services are deployed on different servers and they are logically connected with the support of the Internet. The SN concept is not a novel idea. Below are two cases from the real world: (1) ifttt (If-This-Then-That, http://ifttt.com). This is a website that aims to connect services from different websites (e.g., APIs of twitter, facebook, and instagram) using an event triggering mechanism to realize cross-domain service invocation. Users set up an "IF…THEN..." task on ifttt.com, and when the condition after IF is satisfied, the service after THEN is triggered. In this way distributed services on the Internet are interconnected as a virtual service network. (2) The Alibaba service ecosystem. Alibaba is a full-scale e-Business provider, whose services include alibaba.com (a B2B platform), tmall.com (a B2C platform), taobao.com (a C2C platform), ju.taobao.com (a Groupon-like platform), alipay.com (a 3rd-party payment service), e56.taobao.com (a 3rd/4th-party logistics service), and etao.com (a product search service). They are connected as a large service network to jointly fulfill the individualized requirements of millions of buyers and sellers. Just as the objective of the Internet is more than the fulfillment of a single demand by two specific users, the purpose of an SN is not to fulfill just one requirement raised by one specific customer. On the contrary, any customer could utilize it for their own purposes. Of course, requirements can vary greatly, so an SN should adapt itself to the requirements (on QoS and function) of mass customers by dynamic "transformations". The more individualized and diverse the requirements an SN can satisfy, the higher its competency to facilitate mass customization. In economic terms, an SN should be "cost-effective", i.e., the sum of construction, maintenance, and customization costs should be below the total benefit gained from mass customization. This paper is organized as follows. In section 2 related works are introduced, and the similarities and differences between the philosophies of traditional service composition, SaaS, and SN are clarified. In section 3, the conceptual architecture of an SN is described and formally defined. Section 4 explains how an SN facilitates mass customization, and section 5 presents a competency assessment framework and corresponding metrics. The final section concludes the paper.

Related Work

Mass Customization (MC) [START_REF] Silveiraa | Mass customization: literature review and research directions[END_REF] originated in the production domain.
Similarly, in the software engineering domain, methods such as software product line and reusebased software engineering (RBSE) emphasize the philosophy of utilizing standard and reusable fine-grained software components to rapidly build applications in one domain, essentially realizing the mass production and customization of software products [START_REF] Hallsteinsen | Dynamic software product lines[END_REF]. Later, ideas from MC and RBSE were imported into the services computing domain and became a key analysis and design approach in pursuing the mass customization of service-oriented systems using techniques like loosely-coupled architecture, autonomic agent, dynamic workflow, and service family [3][4]. Service discovery, selection, and composition [START_REF] Hwang | Dynamic web service selection for reliable web service composition[END_REF][6] play critical roles in constructing coarse-grained service solutions that meet individualized requirements. Applicable services are selected from candidates, then potential composite solutions are generated and evaluated, and the most appropriate one is delivered to the customers [START_REF] Ardagna | Adaptive service composition in flexible processes[END_REF]. In addition to IOPE (Input-Output-Preconditions-Effects) and QoS [START_REF] Cavallo | An empirical comparison of methods to support QoS-aware service selection[END_REF], the customer's preferences and context are addressed to look for an exact match between composite solutions and customer requirements [START_REF] Li | Pass: An approach to personalized automated service composition[END_REF] [START_REF] Lin | Web service composition with user preferences[END_REF]. At present, research on this issue is mainly based on AI planning techniques, i.e., initial and expected states, and the semantics of candidate services are formally described. A planner with reasoning capacity is then employed to look for a composite path that transforms the initial state into the expected one by back-chaining or forward-chaining policies [START_REF] Peer | Web service composition as AI planning: a survey[END_REF]. Semantic querying and reasoning are the pivotal techniques used in the process. Another popular approach is to look for the underlying pattern of each customer from his/her historical service usage records using data mining techniques. Personal-ized solutions are then built following the identified patterns [12][13]. This method is suited to a scenario where customers do not explicitly state their preferences. Software as a Service (SaaS) is another successful practice in boosting service mass customization [START_REF] Sun | Software as a Service: configuration and customization perspectives[END_REF]. In SaaS, a meta-data model is used to define variability in the data layer, business logic layer, and user interface layer, and each tenant makes personalized configurations on these variability points [START_REF] Mietzner | Variability modeling to support customization and deployment of multi-tenant-aware Software as a Service applications[END_REF]. In this way, many personalized requirements can be facilitated by one software instance, and the personalized performance of different tenants is ensured by the scalable architecture [START_REF] Shim | Patterns for configuration requirements of Software-as-a-Service[END_REF]. 
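To make the variability-point idea above concrete, the following is a minimal sketch (in Python) of how a metadata-driven tenant configuration might be applied; the point names, allowed values, and defaults are invented for illustration and are not taken from any particular SaaS platform or from the cited works.

```python
# Variability points a SaaS operator might expose (hypothetical names).
VARIABILITY_POINTS = {
    "ui.theme":          {"default", "dark", "high_contrast"},
    "logic.tax_rule":    {"eu_vat", "us_sales_tax", "none"},
    "data.extra_fields": None,   # None = free-form, tenant-specific columns
}

DEFAULTS = {"ui.theme": "default", "logic.tax_rule": "none", "data.extra_fields": []}

def configure_tenant(tenant_id, overrides):
    """Merge one tenant's choices into the defaults, rejecting values
    outside the allowed scope of each variability point."""
    config = dict(DEFAULTS)
    for point, value in overrides.items():
        allowed = VARIABILITY_POINTS.get(point)
        if allowed is not None and value not in allowed:
            raise ValueError(f"{value!r} not allowed for {point}")
        config[point] = value
    return {"tenant": tenant_id, **config}

print(configure_tenant("acme", {"ui.theme": "dark", "data.extra_fields": ["po_number"]}))
```

Each tenant's choices stay within the scope the operator exposes, which is what lets one shared software instance serve many tenants with personalized behavior.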
However, the services in a SaaS are largely designed, developed, deployed, and runtime provisioned by the SaaS operator itself, and many services distributed on the Internet are seldom used due to reliability considerations. It would appear that the distinctions of SN are twofold: (1) It transforms the "centralized service development, maintenance, and evolution" policy adopted in SaaS into "the utilization and aggregation of massive distributed services on the Internet to form a dynamic network structure" policy, thereby extending the scope and flexibility of mass customization; and (2) It transforms the "one-requirement-oriented temporary solution" policy adopted in traditional service composition approaches into the "massive-requirement-oriented persistent solution", thereby improving the cost-effectiveness of mass customization. Concept and Architecture of an Service Network An SN is essentially a combination of multiple composite solutions, each of which is established in terms of one customer requirement. Although superficially it appears to be quite complicated and redundant, it has a higher fitness, i.e., when one requirement arrives, it automatically looks for a sub-network and provides it to the corresponding customer. Each solution is the equivalent of a traditional service composition algorithm. As shown in Fig. 2, the structure of an SN looks like an artificial neural network. The left-most part is the input information obtained from three sources: the customers, obtained automatically from the context of customers, or from customers' historical records. The right-most part is the output information expected by customers. In the middle there are multiple layers, with each layer containing multiple services, and the services in the different layers are connected by parameter passing. The input is transformed into the output layer by layer. An SN is defined by SN=(IN_Pool, OUT_Pool, Service_Pool, IS_Flow, SS_Flow, SO_Flow), where  IN_Pool is the input of an SN and contains a set of parameters. Each parameter in_paramIN_Pool is a term from the domain ontology. For clarity, we suppose that all parameters are atomic and they are mutually independent, i.e., there are no semantics overlaps.  OUT_Pool is the output of an SN and also contains a set of parameters. Different from in_param, each parameter out_paramOUT_Pool may be either atomic or compound, and there may be semantic overlaps between different parameters.  Service_Pool is the main body of an SN and contains of a set of service nodes. For each node, service={FD, IN_slots, OUT_slots, QoS_params}, where FD is the semantic description (in the form of ontology), IN_slots is a set of slots, and each slot represents an input parameter of service, OUT_slots is a set of slots, and each slot represents an output parameter, and QoS_params is a set of quality parameters. Further, in_slot i IN_slots is defined by in_slot i =(in_param i , FD i ), where in_param i is the name of the input parameter and FD i is the ontology description of semantics. So does out_slot i =(out_param i , FD i )OUT_slots. Note that there is a special type of service node called compound service (CS), denoted by CS={service 1 , service 2 , …, service n }. For service i , service j CS, they have the same FD, IN_slots, and OUT_slots but the value of quality indicators in QoS_params might be different. For example, the nodes CService 1 and CService 2 are compound services, shown in dotted rounded rectangles. 
 IS_Flow={is_flow} is the connections between IN_Pool and Service_Pool. Further, is_flow=in_param i service j .in_slot jl indicates the transferring of the input parameter in_param i IN_Pool to the in_slot jl of a service j .  SS_Flow={ss_flow} is the connections between service nodes, and ss_flow=service i .out_slot ik service j .in_slot jl indicates the transferring of the output parameter out_slot ik of service service i to the input parameter in_slot jl of service j .  SO_Flow={so_flow} is the connections between Service_Pool and OUT_Pool, and so_flow=service j .out_slot jl out_param i indicates the transferring of the output parameter out_slot jl of service j to the output parameter out_param i OUT_Pool. For in_slot ik service i , there must be one or multiple flows pointing to one in_slot of a service node. Such a flow is either an is_flow from IN_pool or an ss_flow from another service node. When the SN is customized, on most occasions only one flow will be selected and take effect. However, if the customer expects higher reliability, multiple flows may take effect simultaneously, indicating that in_slot ik has multiple data sources (also called redundancy). Taking Fig. 2 as an example, the input parameter of CService 2 has three distinct sources (service 1 , in_pool, and CService 1 ). Similarly, with in_paramIN_pool, there is at least one is_flow pointing out from it. For out_slot jl service j , there may be zero, one, or multiple flows pointing out from one out_slot jl of a service node. Such a flow is either a so_flow to OUT_pool or an ss_flow to another service node. When there are no flows pointing out from the out_slot jl , this indicates that this output parameter is trivial and not used by the SN. Similarly with out_paramOUT_pool, there is at least one flow so_flow pointing to it. Furthermore, a flow pointing directly out from an in_paramIN_pool to and out_paramOUT_pool is illegal. 4 How Service Network Supports "Mass Customization" This section describes the SN mechanisms that support "mass customization". Based on the descriptions in section 3, there are two "transformation" mechanisms facilitating mass customization, i.e., (M 1 ) the dynamic selection of service nodes, and (M 2 ) the dynamic selection of flows. The variable service nodes and flows are both defined as the "features" of an SN, each of which has a limited scope and density of customization. Metaphorically speaking, a feature is like a joint of a human body and its customization scope is the joint's degree of freedom. Table 1 lists a set of customizable features and their customization scope. Focusing on a personalized requirement, the customization of an SN is the process of selecting a specific value for each feature from its customization scope, indicating that a subset of service nodes and a subset of flows are identified, and the SN is transformed into a composite solution. It is easy to imagine that, in terms of a specific requirement, there might be multiple possible results, each of which could fully satisfy the requirement. Therefore, besides the value assignment for each feature, the customization process should also find the "best" solution from these possibilities. We define it as a combinatorial optimization problem following a "just-enough" policy [START_REF] Ni | Commodity-market based services selection in dynamic web service composition[END_REF]. The following is the mathematical model. 
Input:
- the customer requirement pr, with its required input parameters {req_in_param}, expected output parameters {req_out_param}, and expected QoS values pr.T, pr.C, and pr.R;
- x_i = 0/1 indicates whether service_i is selected;
- if x_i = 1 and CService_i is a compound node, then y_ik = 0/1 indicates whether service_ik is selected;
- z_ip,jq = 0/1 indicates whether out_slot_ip of service_i is connected to in_slot_jq of service_j;
- v_l,jq = 0/1 indicates whether in_slot_jq of service_j is connected with in_param_l;
- u_ip,l = 0/1 indicates whether out_param_l is connected with out_slot_ip of service_i;
- f_T(bp), f_C(bp) and f_R(bp) are the calculating functions that compute the global Time, Cost, and Reliability, respectively, of the generated process bp according to its process structure [18].
Objective Function: The generated process bp satisfies the customer requirement as closely as possible, and if no such process can be found, the output is null, i.e.,
min F(bp) = ( pr.T - f_T(bp), pr.C - f_C(bp), f_R(bp) - pr.R )
s.t. pr.T - f_T(bp) >= 0, pr.C - f_C(bp) >= 0, f_R(bp) - pr.R >= 0.
Solving Strategies:
- Phase 1: Pruning. According to {req_in_param} and {req_out_param}, the initial SN is pruned, i.e., (1) every in_param_i in IN_Pool \ {req_in_param} and every is_flow = {in_param_i -> *} are removed from the SN; (2) every out_param_j in OUT_Pool \ {req_out_param} and every so_flow = {* -> out_param_j} are removed from the SN; (3) each service node is recursively checked from left to right of the SN: each in_slot_ip of each service_i is examined, and if no flows point to in_slot_ip, then service_i and all related flows are removed from the SN; (4) the remaining SN is checked, and if it cannot produce any output parameters, then the requirement cannot be fulfilled by the initial SN and NULL is returned.
- Phase 2: Optimization. Based on the pruned SN, a multi-objective programming approach is employed to solve the combinatorial optimization problem.
Competency Assessment of Service Network
In reality, the competency of an SN is limited; in other words, not all individualized requirements can be satisfied by an SN, and only a certain number of requirements can be satisfied simultaneously. Even if a requirement can be satisfied, there are associated costs that must be paid. This section puts forward a set of indicators to assess the competency of an existing SN, together with corresponding metrics.
Competency Assessment Framework (SN-CAF)
The competency of an SN is assessed from two aspects: capacity of mass customization and cost-effectiveness. Figure 3 depicts this framework.
Assessment of the Capacity of Mass Customization (CMC)
The capacity of mass customization (CMC) is measured by looking at "customization" capacity and "mass" capacity. The former refers to the scope of functionalities and QoS that can be customized in an SN, and the latter refers to the scale, i.e., the number of requirements that can be simultaneously fulfilled by one SN. From a statistical point of view, CMC may be indirectly measured by the proportion of satisfied requirements relative to the total arriving requirements. More specifically, we use five fine-grained indicators: the first three, Overall Functionality Coverage (OFC), Functionality Customization Degree (FCD), and QoS Customization Degree (QCD), measure the "customization" capacity; the fourth (Maximum Load, ML) measures the "mass" capacity; and the final one (Requirement Satisfaction Ratio, RSR) is a statistical measurement.
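Before turning to the individual indicators, the following is a small, self-contained sketch (in Python) of the Phase-1 pruning step described above, run over a toy service network. It is only a sketch under simplifying assumptions: every input slot is treated as mandatory, compound nodes, QoS, and explicit flow objects are ignored, and all service and parameter names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    in_slots: frozenset    # input parameters this service needs
    out_slots: frozenset   # output parameters it produces

@dataclass
class ServiceNetwork:
    in_pool: frozenset     # input parameters the SN can accept
    out_pool: frozenset    # output parameters the SN can deliver
    services: list

def prune(sn, req_in, req_out):
    """Phase-1 style pruning: keep only services whose inputs can be fed from
    the requested inputs (directly or via other kept services), then check
    whether any requested output can still be produced."""
    available = set(req_in) & set(sn.in_pool)      # step (1): drop unused inputs
    kept, changed = [], True
    while changed:                                 # step (3): propagate feedability
        changed = False
        for svc in sn.services:
            if svc not in kept and svc.in_slots <= available:
                kept.append(svc)
                available |= svc.out_slots
                changed = True
    producible = available & set(sn.out_pool) & set(req_out)   # steps (2) and (4)
    return (kept, producible) if producible else None          # None = unsatisfiable

# Toy example (all names hypothetical)
sn = ServiceNetwork(
    in_pool=frozenset({"city", "date", "user_id"}),
    out_pool=frozenset({"forecast", "advice"}),
    services=[
        Service("Weather", frozenset({"city", "date"}), frozenset({"forecast"})),
        Service("Advisor", frozenset({"forecast"}), frozenset({"advice"})),
        Service("Profile", frozenset({"user_id"}), frozenset({"preferences"})),
    ],
)
print(prune(sn, req_in={"city", "date"}, req_out={"advice"}))
```

The real model then scores the surviving alternatives with the multi-objective program above, whereas this sketch stops at feasibility.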
(1) Overall Functionality Coverage (OFC). OFC refers to the degree of functionality coverage of an SN relative to the business domain it belongs to. It characterizes the richness of functionality of an SN and is measured by the proportion of the ontology covered by the SN relative to the holistic ontology of the domain. Because the size of the domain ontology is difficult to estimate, we use the number of ontology concepts covered by the SN as the metric, i.e., OFC(SN) = | U_i service_i.FD |. The greater the functionality coverage of an SN, the higher the diversity of its functions, and thereby the higher the possibility that it can fulfill a varying range of functions, and the broader the range of choices a customer has to customize his/her functional preferences.
(2) Functionality Customization Degree (FCD). FCD is the metric indicating the degree to which functionalities can be customized. It may be measured by the percentage of customizable functionalities relative to the total functionalities, and the customization scope of each functionality feature, using the four functionality features listed in Table 1, i.e., (F1) the selection of input parameters, (F2) the selection of service nodes, (F3) the selection of the source of an input parameter of a service node, and (F4) the selection of the source of an output parameter. The following are detailed metrics to calculate the degree of customization:
For F1, we check whether each input parameter is optional or mandatory. A mandatory input parameter has to be selected when the SN is used, so it cannot be customized; the opposite is true when it is optional. The pruning strategy (mentioned in section 4) is used to delete in_param_i and all its related service nodes and output parameters from the SN; if no output parameters remain, then in_param_i is mandatory, otherwise it is optional. Fig. 4 schematically illustrates an example pruning process, in which the deletion of the first input parameter leads to the deletion of Service_1, Service_2 and Service_5 but does not cause the disappearance of the output parameters, so this input parameter is optional. Then FCD_1(SN), the percentage of optional (customizable) input parameters in IN_Pool of the SN, is obtained.
For F2, we check whether each service node is optional or mandatory. The same pruning strategy is used to prune service_i and all its related service nodes and output parameters to check whether service_i is optional or mandatory. Then FCD_2(SN), the percentage of optional (customizable) service nodes in Service_Pool of the SN, is obtained.
For F3, we calculate the number of sources of an in_slot_ik of service_i; a number larger than 1 implies that this parameter can be customized. Then FCD_3(SN) = |{in_slot_ik : |{flow : flow -> service_i.in_slot_ik}| > 1}| / Sum_i |service_i.IN_slots| measures the percentage of customizable input parameters of services in the SN.
For F4, we calculate the number of sources of an out_param_j; a number larger than 1 implies that this parameter can be customized. Then FCD_4(SN), the percentage of customizable output parameters in OUT_Pool of the SN, is obtained. Overall, FCD(SN) is obtained by combining FCD_1(SN), FCD_2(SN), FCD_3(SN), and FCD_4(SN).
(3) QoS Customization Degree (QCD). QCD is defined as the overall scope in which the global QoS of the customized solutions can vary. In terms of time, cost, and reliability, the measurements are different.
First we transform the SN into the form of a service process using the following steps:  Construct a Start activity and an End activity;  Construct an activity for each simple service node, and for each candidate service in each compound service node;  For the activities transformed from compound service nodes, add an or-split before them and an or-merge after them;  If there is an ss_flow pointing from an input parameter in IN_Pool to a service node, construct an arrow connecting Start and the corresponding activity; similarly, place arrows between two activities, and between an activity and End;  If there are multiple arrows between two activities, keep one only;  If multiple flows point to the same in_slot of a service node, then label the corresponding arrows or-merge;  If a flow is optional, also label the corresponding arrow or-merge. [Cost] To measure the upper bound of cost (UC), keep a branch with a maximal price for each fragment between or-split and or-merge, delete all other branches, and then sum the prices of the remaining activities; similarly, keep the branch with the minimal price and obtain the lower bound of the cost (LC); then QCD C (SN)=[LC, UC]. [Reliability] To measure the upper bound of reliability (UR), keep the process unchanged and calculate global reliability based on the structure of the process, because the original process retains all the redundancy, and the global reliability is maximized. To measure the lower bound (LR), for multiple arrows grouped by the same or-merge, retain one arrow whose source activity has minimal reliability, and delete all other arrows and related activities; then calculate the corresponding global reliability. Such pruning eliminates all redundancy and retains activities with minimal reliability. Thus, QCD R (SN)=[LR, UR]. (4) Maximum Load (ML) ML is the maximal concurrency number of requirements that could be simultaneously satisfied by the SN. Different requirements may be allocated to different paths, or share the same path. Suppose the concurrency number of an atomic service i is CN(service i ). For a compound service node, namely CS j , its CN increases to   1 j CS jk k CN service   implying all the atomic services contained in CS j could be concurrently utilized to satisfy requirements. Essentially, the value of ML lies on the minimal concurrency number of service nodes, which are mandatory and cannot be pruned (this was mentioned in the measurement of FCD). So, ML(SN)=min SN(service i ) where service i {service: manda-tory}. (5) Requirement Satisfaction Ratio (RSR) RSR is a statistics metric and is not directly related to the structure of SN. It is the ratio that the number of satisfied requirements relative to the total arriving requirements during a period of time. This ratio indirectly reflects the competency of mass customization. If the ratio is low, then there are inevitably deficiencies in the SN and it is necessary to enhance it immediately. Assessment of Cost Effectiveness An SN is cost-effective when the benefit generated by satisfying the individualized requirements of a large number of customers is greater than the sum of the costs incurred during the full lifecycle of the SN, including construction costs, maintenance costs, and customization costs. Otherwise, the SN becomes worthless. 
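As a rough illustration of the cost-effectiveness bookkeeping that follows, here is a small Python sketch. The aggregation below is one natural reading of the quantities defined in the surrounding text (a willingness-to-pay per requirement, a one-off construction cost, a monthly maintenance cost, and a per-requirement customization or direct-construction cost), not the paper's exact formulas, and all figures are hypothetical.

```python
def traditional_benefit(wtp, direct_costs):
    """DB: each requirement served by its own directly built composition."""
    return sum(wtp) - sum(direct_costs)

def sn_benefit(wtp, customization_costs, nc, mc_per_month, months):
    """SNB: one persistent service network serving all requirements."""
    return sum(wtp) - sum(customization_costs) - nc - mc_per_month * months

# Hypothetical figures: 100 requirements arriving over 12 months
n, months = 100, 12
wtp = [50.0] * n                       # willingness to pay per requirement
db = traditional_benefit(wtp, direct_costs=[30.0] * n)
cc = [5.0] * n                         # per-requirement customization cost
nc, mc = 800.0, 40.0                   # construction cost, monthly maintenance
snb = sn_benefit(wtp, cc, nc, mc, months)

total_sn_cost = nc + mc * months + sum(cc)
ce = (snb - db) / db                   # relative benefit increase over the traditional approach
acr = total_sn_cost / n                # average cost per requirement
buc = snb / total_sn_cost              # benefit generated per unit of cost
print(f"CE={ce:.2f}  ACR={acr:.2f}  BUC={buc:.2f}")
```

With these made-up numbers the SN turns out cost-effective (CE > 0) because the one-off and monthly costs are amortized over many requirements, which is exactly the effect the metrics below are meant to capture.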
In other words, the more an SN is used, and the greater the number and frequency of individualized requirements it can satisfy, the greater its significance and the greater the benefit it brings to the operator of the SN. There are three associated costs:
─ Construction Costs (NC): the cost of selecting the appropriate candidate services, negotiating with the providers of these services, and connecting them together to form an initial SN. It is paid during the initialization of the infrastructure and is a "sunk cost" or "fixed investment".
─ Maintenance Costs (MC): the cost of provisioning, evaluating, and continuously enhancing the SN (discussed further in section 6) per unit of time (e.g., one month). Such costs are not directly related to the requirements and resemble daily operation costs.
─ Customization Costs (CC): the cost of customizing all of the features of an SN to satisfy a specific requirement.
Suppose there are n possible requirements based on forecasting and historical records, and the "Willingness to Pay (WTP)" of the i-th requirement is WTP_i. If they are satisfied by traditional service compositions, i.e., each one is provided with a directly-constructed solution, and the construction cost of the i-th solution is DC_i, then the total benefit is DB = Sum_{i=1..n} (WTP_i - DC_i). If a service network is constructed to satisfy the n requirements and they are distributed over m months, then the total benefit is SNB = Sum_{i=1..n} (WTP_i - CC_i) - NC - m*MC. Further, the cost-effectiveness of the constructed SN is denoted by CE(SN) = (SNB - DB) / DB, indicating the percentage of the increased benefit that the SN produces compared with a traditional approach. In addition to CE, there are another two meaningful metrics: (1) Average Cost per Requirement (ACR), i.e., the total cost of the SN (NC + m*MC + Sum_i CC_i) averaged over the n requirements; and (2) Benefit generated by Unit Cost (BUC), i.e., the benefit obtained per unit of total cost spent on the SN.
Ideally, if an SN is cost-effective, ACR(SN) and BUC(SN) will vary over time, with ACR(SN) being extremely high and BUC(SN) negative during the stage when the SN is initially constructed; as the number of arriving requirements increases, ACR(SN) drops gradually and reaches a stable level, while BUC(SN) ascends gradually and becomes positive. As a comparison, ACR and BUC in a traditional service composition approach remain stable with small fluctuations.
Conclusions
By focusing on the mass customization of services, this paper analyzed the deficiencies of traditional SaaS and service composition approaches, and then proposed the "Service Network" as a means of resolving the contradiction between "mass-oriented standardization" and "individual-oriented personalization". An SN is constructed and maintained as a persistent infrastructure, to be transformed into various concrete solutions that satisfy a considerable number of individualized customer requirements. This is achieved with the support of a set of customizable features such as service nodes, flows between services, and varied QoS in compound service nodes. In addition to mass customization competency, this study also highlighted "cost-effectiveness" as an important factor, an aspect that has been ignored in previous research. Accompanied by the continuing growth of Internet services, customer requirements are becoming increasingly diversified and their granularity tends to be large-grained. In such circumstances, cross-organizational and cross-regional service collaboration has become a dominant trend in many service industries. Our study provides some useful references for research and practice in this respect.
Fig. 1. Service Network: Interconnections between Distributed Services
Fig. 2. Conceptual Architecture of a Service Network
Fig. 3. Competency Assessment Framework of a Service Network
Fig. 4. The Pruning of the SN
Fig. 5. An example process based on the SN shown in Fig. 2
Table 1. Customizable Features of an SN
Feature | Sub-Feature | Customization Scope
Functionalities | F1 (M1) Input parameter in_param_i | {Selected, Not Selected}
Functionalities | F2 (M1) Service node service_i | {Selected, Not Selected}
Functionalities | F3 (M2) The source of an in_slot_ik of service_i | {is_flow=*->service_i.in_slot_ik} U {ss_flow=*->service_i.in_slot_ik}
Functionalities | F4 (M2) The source of an out_param_j in OUT_Pool | {so_flow=*->out_param_j}
QoS | Q1 (M1) The selected services of a CS_i | {service_i1, ..., service_in}
QoS | Q2 (M1) The number of selected services of a CS_i | {1, 2, ..., |CS_i|}
QoS | Q3 (M2) The number of sources of an in_slot_ik of service_i | {1, 2, ..., |{is_flow=*->service_i.in_slot_ik} U {ss_flow=*->service_i.in_slot_ik}|}
QoS | Q4 (M2) The number of sources of an out_param_j in OUT_Pool | {1, 2, ..., |{so_flow=*->out_param_j}|}
Acknowledgment. The work in this paper is supported by projects funded by the Natural Science Foundation of China (Nos. 61272187 and 61033005).
32,166
[ "1002380", "1002381", "176079" ]
[ "380001", "380001", "380001" ]
01474211
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474211/file/978-3-642-36796-0_17_Chapter.pdf
Julio Cesar Nardi email: [email protected] Ricardo De Almeida Falbo João Paulo A Almeida email: [email protected] A Panorama of the Semantic EAI Initiatives and the Adoption of Ontologies by these Initiatives Keywords: enterprise application integration, semantics, ontology, systematic mapping Enterprise Application Integration (EAI) plays an important role by linking heterogeneous applications in order to support business processes within and across organizations. In this context, semantic conflicts often arise and have to be dealt with to ensure successful interoperation. In recent years, many EAI initiatives have aimed at addressing semantic interoperability challenges by employing ontologies in various ways. This paper aims to reveal, through a systematic review method, some aspects associated with semantic EAI initiatives and the adoption of ontologies by them, namely: (i) the business application domains in which these initiatives have been conducted; (ii) the focus of the initiatives regarding integration layers (data, message/service, and process); (iii) the adoption of ontologies by EAI research along the years; and (iv) the characteristics of these ontologies. We provide a panorama of these aspects and identify gaps and trends that may guide further research. Introduction In order to be competitive and face changing economic conditions, enterprises need to be flexible and dynamic, which requires the use of information systems that can work together supporting business processes [START_REF] Vernadat | Interoperable enterprise systems: Principles, concepts, and methods[END_REF]. In this context, Enterprise Application Integration (EAI) plays an important role for linking separate applications into an integrated system driven by business models and the goals they implement [START_REF] Gacitua-Decar | Ontology-based Patterns for the Integration of Business Processes and Enterprise Application Architectures[END_REF]. Challenges in EAI arise, among others, from the fact that heterogeneous enterprise applications employ different data and behavioral models [START_REF] Izza | Integration of industrial information systems: from syntactic to semantic integration approaches[END_REF], leading to semantic conflicts. These conflicts occur whenever applications are built with different conceptualizations, which can impact the integration of data, messages/services, and processes. Despite many advances in EAI, semantic integration of enterprise applications remains a hard problem [START_REF] Bussler | The Role of Semantic Web Technology in Enterprise Application Integration[END_REF]. In this context, several approaches for semantic integration have been applied, using a variety of instruments, including domain vocabularies, taxonomies, ontologies, logical formalisms, and rules that specify policies, governance, etc. [START_REF] Izza | Integration of industrial information systems: from syntactic to semantic integration approaches[END_REF]. Among these approaches, ontologies have been acknowledged as an important means to address semantic EAI [START_REF] Bussler | The Role of Semantic Web Technology in Enterprise Application Integration[END_REF] [START_REF] Izza | Integration of industrial information systems: from syntactic to semantic integration approaches[END_REF], namely through promoting integration of different information system layers (data, message/ service, and process). 
In the context of semantic EAI, ontologies have been employed with the purpose of contributing to the establishment of common understanding. This paper aims to reveal, through a systematic mapping [START_REF] Kitchenham | Guidelines for performing Systematic Literature Reviews in Software Engineering (Version 2.3)[END_REF], some aspects associated with semantic EAI initiatives and the adoption of ontologies by these initiatives, namely: (i) the business application domains in which the initiatives have been conducted; (ii) the focus of these initiatives regarding integration layers (data, message/service, and process); (iii) the adoption of ontologies by EAI research initiatives along the years; and (iv) the characteristics of the ontologies employed. These aspects are structured in six research questions that are investigated using 128 studies selected and analyzed according to a systematic review method. This paper is organized as follows: Section 2 presents the main concepts used in this paper and clarify some important terminology regarding integration approaches; Section 3 presents the systematic review method adopted, and describes the main parts of the mapping protocol developed during the planning phase; Section 4 presents the results of the mapping, including the selection process, the classification schemas, and data synthesis; Section 5 discusses the findings and the mapping limitations; Section 6 presents concluding remarks and outlines further investigation. Background The various works in the literature refer to many aspects of enterprise application integration. In this section, we discuss some of the most salient concepts and terms in this broad area of research, in order to characterize the scope of our investigation and support the definition of the research questions that will be the subject of this work. First of all, we should note that there are several definitions for the terms "integration" and "interoperability" referring to different or interrelated concepts, and these are often used indistinctively. Since we are interested in "application integration" as well as "application interoperability", we considered both terms in the searching string presented in Section 3, and throughout this paper, we use the term "integration" in a broad sense, involving both integration and interoperability. Secondly, in the investigated literature, the distinction between intra-and interenterprise application integration is often present. Intra-EAI aims at integrating applications in the context of a single enterprise, while inter-EAI (also referred to as B2B integration) supports integration of applications of more than one enterprise, linked, in many cases, by a collaborative process [START_REF] Endrei | Patterns: Serial Process Flows for Intra-and Inter-enterprise[END_REF]. Considering that most techniques and technologies that make up intra-EAI are also applicable to inter-EAI [START_REF] Endrei | Patterns: Serial Process Flows for Intra-and Inter-enterprise[END_REF], we are interested in both intra-and inter-enterprise application integration and use "enterprise application integration" to refer to both. Integration can concern one or several information system layers [START_REF] Izza | Integration of industrial information systems: from syntactic to semantic integration approaches[END_REF], such as: data layer, message/service layer, the process layer. 
Data layer integration concerns with moving or federating data between multiple databases, bypassing the application logic and manipulating data directly in the databases. Message/service layer integration addresses message exchange between information systems, which can occur in any tier, such as user interface, application logic or even in the data tier. Process layer integration, commonly referred to as Business Process Integration, views the enterprise as a set of interrelated processes, being responsible for handling message flows, implementing rules and defining the overall coordination of the execution. Ontologies have been acknowledged as an important means for achieving semantic EAI [START_REF] Bussler | The Role of Semantic Web Technology in Enterprise Application Integration[END_REF] [3], since they aim at providing formal specifications of shared conceptualizations. Considering their level of generality, ontologies continuously range from top-level ontologies, through domain ontologies to application ontologies. Top-level ontologies (so-called foundational ontologies) describe very general concepts like space, time, object, event, etc., and are independent of particular domains or problems [START_REF] Guarino | Formal Ontology and Information Systems[END_REF]. Domain ontologies describe concepts related to a generic domain, sometimes specializing concepts of a top-level ontology. Application ontologies, in turn, describe concepts related to a particular application [START_REF] Guarino | Formal Ontology and Information Systems[END_REF]. Since these kinds of ontologies form a continuum, the borderline between them is not clearly defined. Thus, in this paper, we distinguish only between top-level ontologies -those developed considering theories of Formal Ontology and related areas, e.g. DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering) and SUMO (Suggested Upper Merged Ontology) -and the rest (including various levels of generality usually referred as domain or application ontologies). Finally, due to the potential of ontologies as a means to address semantic aspects, in last decades, many ontology implementation languages have been developed and many knowledge representation languages have been used for building ontologies, even they were not initially developed for this purpose [START_REF] Gòmez-Pérez | Ontological Engineering: with examples from the areas of Knowledge Management, e-Commerce and the Semantic Web[END_REF]. So, it is important to know how ontologies have been designed and implemented in order to understand how appropriate these representations are for semantic EAI. In this context, we can cite knowledge representation languages such as first-order logic, frames and description logic. Based on them, there are some ontology languages, such as [START_REF] Gòmez-Pérez | Ontological Engineering: with examples from the areas of Knowledge Management, e-Commerce and the Semantic Web[END_REF]: FLogic (Frame Logic), RDF (Resource Description Framework), and OWL (Web Ontology Language). Beyond these languages, ontologies are also developed using technologies associated to service description, such as OWL-S (OWL-based web service ontology) and WSMO (Web Service Modeling Ontology). 
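As a small illustration of how such ontologies are typically put to work in semantic EAI, the sketch below (Python with the rdflib library) declares a shared concept and maps two applications' local terms onto it, so that data from either application can be interpreted through the common vocabulary. The namespaces and class names are invented for the example and do not come from any of the surveyed studies.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

g = Graph()
SHARED = Namespace("http://example.org/shared#")   # common (domain) ontology
APP_A  = Namespace("http://example.org/appA#")     # one application's local terms
APP_B  = Namespace("http://example.org/appB#")     # another application's local terms

# Shared concept plus mappings from each application's local vocabulary
g.add((SHARED.PurchaseOrder, RDF.type, OWL.Class))
g.add((APP_A.SalesOrder, OWL.equivalentClass, SHARED.PurchaseOrder))
g.add((APP_B.Order, OWL.equivalentClass, SHARED.PurchaseOrder))

# A mediator can now discover every local term that denotes the shared concept
local_terms = [s for s, _, _ in g.triples((None, OWL.equivalentClass, SHARED.PurchaseOrder))]
print(local_terms)
```

Heavier machinery in the surveyed literature (OWL-S, WSMO, reasoners, mapping rules) builds on this same kind of term-level agreement.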
The Review Method and the Mapping Protocol (Planning) This systematic mapping was conducted taking as basis the method for systematic literature reviews given in [START_REF] Kitchenham | Guidelines for performing Systematic Literature Reviews in Software Engineering (Version 2.3)[END_REF]. This method is known for its suitability for PhD studies, which is the context of this research, and the research group has expertise on it, although some limitations are known [START_REF] Kitchenham | Guidelines for performing Systematic Literature Reviews in Software Engineering (Version 2.3)[END_REF]. According to [START_REF] Kitchenham | Guidelines for performing Systematic Literature Reviews in Software Engineering (Version 2.3)[END_REF], a systematic mapping is a kind of secondary study, which offers a broad view of primary studies in a specific topic in order to identify available evidences. Thus, a secondary study is a study that reviews primary studies related to a set of specific research questions with the aim of integrating/synthesizing the evidences related to these research questions. The primary study is an empirical study investigating a specific research question. A systematic mapping involves three phases [START_REF] Kitchenham | Guidelines for performing Systematic Literature Reviews in Software Engineering (Version 2.3)[END_REF]: Planning, Conducting and Reporting the mapping. Planning involves the pre-mapping activities, and encompasses the definition of the following items: research questions, inclusion and exclusion criteria, sources of studies, search string, and mapping procedures. These items compose the mapping protocol. Conducting the mapping is concerned with searching and selecting the studies, and extracting and synthesizing data from them. Reporting is the final phase and involves writing up the results and circulating them to potentially interested parties. The mapping protocol is an important artifact in the review process. It is produced during the Planning phase and consumed during the other phases. The main parts of the mapping protocol used by this work are described as follows. Research Questions. This mapping aims at answering the following research questions, considering the context of semantic EAI initiatives: RQ1. What are the business application domains addressed? RQ2. What is the distribution of studies according to the integration layers (data, message/service, and process layers)? RQ3. Over the years, how wide has been the adoption of ontologies? RQ4. What is the distribution of studies that use ontologies per integration layer? RQ5. What kinds of ontologies (considering their generality level) have been used? RQ6. Which languages/formalisms have been used to create the ontologies? Inclusion and Exclusion Criteria. The primary studies selection was based on the following criteria, which were organized in one inclusion criterion (IC) and four exclusion criteria (EC). The inclusion criterion is: (IC1) The study addresses enterprise application integration under a semantic perspective. The exclusion criteria are: (EC1) The study is not written in English; (EC2) The study is an older version (less updated) of another study already considered; (EC3) The study is not a primary study (which excludes short papers, editorials, and summaries of keynotes, workshops, and tutorials); (EC4) The study is just published as an abstract. Sources. We used automatic search to collect the studies. 
The search was applied in seven electronic databases that were defined based on systematic reviews in the Software Engineering area. The sources are: IEEE Xplore (http://ieeexplore.ieee.org), ACM Digital Library (http://dl.acm.org), SpringerLink (http://www.springerlink.com), Thomson Reuters Web of Knowledge (http://www.isiknowledge.com), Scopus (http://www.scopus.com), Science Direct (http://www.sciencedirect.com), Compendex (http://www.engineeringvillage2.org). Search String. In order to define the search string, we used two groups of terms that were joined in a conjunction with the "AND" operator. The first group includes terms that aim to capture studies related to "integration" or "interoperability" of enterprise software applications. The second group aims at capturing studies that deal with semantic aspects. Within each of the groups, the "OR" operator was used to allow for synonyms. The search string, as follows, was applied in three metadata fields (title, keywords and abstract) and suffered syntactical adaptations according to particularities of each source: ("application integration" OR "application interoperability" OR "enterprise system integration" OR "enterprise system interoperability" OR "integration of information system" OR "interoperability of information system" OR "integration of application" OR "interoperability of application" OR "interoperability of enterprise application" OR "interoperability of enterprise system" OR "integration of enterprise application" OR "integration of enterprise system" OR "interoperability of business application" OR "interoperability of business system" OR "integration of business application" OR "integration of business system" OR "integration of heterogeneous system" OR "integration of heterogeneous application" OR "interoperability of heterogeneous system" OR "interoperability of heterogeneous application" OR "interoperability of information system" OR "integrated application" OR "interoperable application" OR "integrated enterprise system" OR "interoperable enterprise system" OR "information system integration" OR "information system interoperability" OR "enterprise system integration" OR "enterprise system interoperability" OR "business system integration" OR "business system interoperability") AND (semantic OR semantics OR semantically) Mapping Procedures (Assessments). Before conducting the mapping, we performed a pilot test of the mapping protocol over a sample consisting of 35% of the studies, which was used to evolve the components of the protocol. Considering that the review process was conducted by one of the authors, an activity of validation was carried out by a second author using a different sample of 35% of the studies. Possible biases were discussed in periodic meetings. Conducting the Mapping This section describes the main steps that were performed in the mapping, including: search and selection, data extraction and data synthesis. Search and Selection The search process was conducted in the beginning of 2012, and, therefore, we looked for studies published until December 31 th 2011. As a result, a total of 702 records were retrieved: 107 from IEEE Xplore, 16 from Science Direct, 17 from ACM Digital Library, 56 from Thomson Reuters Web of Knowledge, 232 from Scopus, 218 from Compendex, and 56 from SpringerLink. After the search process, the selection process was conducted progressively in five stages. In the first stage, we have eliminated duplicated studies by examining titles and abstracts. 
In this stage, we had the highest reduction (almost 60%), since many studies are available in more than one source. In the second stage, we have applied the inclusion and exclusion criteria considering title and abstract only (resulting in a reduction of 15.5%). Although we have used language filter mechanisms on the source's search engines, some studies not written in English have been retrieved. Thus, we have also applied EC1 criteria in this stage. The resulting set of studies was refined in a third stage, which also considered the whole text (resulting in a reduction of 44.8%). After preliminary analysis, we noticed that only three studies published before 2001 remained in the end of the third stage (one published in 1993 and two published in 1995). Indeed, they did not characterized representative points of our sample, thus, in the fourth stage, we have eliminated these three studies and defined the lower boundary date as January 1 st 2001. In the fifth stage we eliminated the fours studies for which we had no access to the full text. Table 1 summarizes the stages and their results, showing the progressive reduction of the number of studies throughout the selection process (from 702 to 128 studies, with a reduction rate of about 81.7%). Classification Schema and Data Extraction Before data extraction, we defined categories for classifying the studies according to the research questions, as follows. Classification schema concerning integration focus. This schema is based on [3] and encompasses three categories: Integration at data layer, Integration at message/service layer, and Integration at process layer. So, depending on the focus of the integration approach, the study is classified as one of these layers or any combination of them. Classification schema for kinds of ontology. This schema encompasses two categories: Top-level ontology and Low-level ontology. According to the generality level of the ontologies, discussed in Section 2, a study is classified as using a Toplevel ontology if a foundational ontology is used. On other hand, a study is classified as using a Low-level ontology, if a domain or application ontology is used. A study can be classified in both categories if it employs both top-and low-level ontologies. Other classification schemes. Concerning the categories for business application domains and ontology languages, we collected unstructured data without a predefined classification (the categories were only defined during data analysis), in order to deal with the large variety of possibilities. In order to collect data about business application domains, we looked for use cases, examples used for describing the proposed solutions, domains that motivated research initiatives, and so on. Regarding ontology languages, we looked for the formalisms used to represent ontologies, such as OWL, OWL-S, first-order logic, among others. After that, during data synthesis, we analyzed the content and defined the categories. This process was iterative, and the resulting categories were evaluated in periodic meetings. This process involved five steps: (1) analyzing content; (2) defining categories; (3) evaluating categories; (4) classifying studies; and (5) evaluating the classification schema. The data extraction process consisted in analyzing and collecting data of each selected study, and organizing them in a data collection form, shown in Table 2. Business Application Domains in Semantic EAI (RQ1). 
Considering the business application domains in which semantic EAI initiatives were applied, we identified that about 76.6% of the studies presented their solution approaches in the context of specific business application domains. The other 23.4% of the studies were classified as "General", since they just make reference to generic scenarios like "business-to-business", "e-commerce", "business", etc. Considering the approaches that were developed in the context of specific application domains, we have identified 19 categories of business application domains, which are presented in Fig. 2 together with the percentage of studies per category. The "Other" category was introduced to group business application domains that had no representative occurrence (only one paper), such as: Aerospace, Importing and Exporting, Content Publishing, Video Mail System and Software Engineering. Fig. 2. The percentage of the selected studies per business application domains Considering the distribution of studies per specific business application domain, we can notice that the "Logistics, Planning and Asset Management" domain has the largest representativeness (12.5%). It stands out, mainly because it involves supply chain initiatives, being characterized by intensive interaction between suppliers and consumers. Besides that, business application domains with representativeness between 7.8% to 5.5% include: "Product Sale Systems" (purchase order in general, and online shopping), "Product Engineering" (industrial automation technology, which requires integration and management of product life-cycle), "Natural Environment Information" (initiatives about geographic location, geographic information systems, meteorological and oceanographic information), and "Health and Research Sector" (pharmaceutical industry, health care, bio-informatics and research organizations). The other categories, although with smaller percentage of studies, still represent important numbers, if we consider that almost 23.4% of the selected studies do not make reference to any specific application domain (General). Focus on the Integration Layers (RQ2). The studies were classified as promoting semantic EAI on data layer, message/service layer, process layer, or any combination of them. The Fig. 3 presents the percentage of studies per integration layer. Fig. 3. Distribution of the selected studies per the focus on the integration layers Some studies focus only on one layer: data layer (13%), message/service layer (31%), and process layer (3%). Others propose integration solutions by addressing two integration layers: data and message/service layers (5%), and message/service and process layers (27%). And, finally, there are studies that address the three layers: data, message/service and process layers (12%). Finally, when considered in isolation or when considered in tandem with other layers, the data layer is addressed by 30% of the studies, the message/service layer is addressed by 75% of studies, and process layer by 42% of them (again either solely or in tandem with other layers). The studies that address data and message/service layers together are characterized by approaches that define data source integration solutions besides considering direct interactions (by message, service, etc.) among applications. 
The studies that address message/service layer together with process layer presents initiatives related to service orchestration, workflow definition, as well as business process-driven enterprise application integration initiatives. In this way, the studies that establish integration on data, message/service, and process layers together are characterized by proposing architectures, frameworks and integration approaches related to business process-driven enterprise application integration. The proposed solutions range from data source integration to application interaction driven by business processes. In this context, it is important to remark that no study focused on data and process layers without considering the message/service layer, which reflects the mediation role that the message/service layer plays. During data extraction phase, we noted that some studies presented generic approaches, which did not make commitments to any integration layer, being classified as "Without focus on any layer" (9%). These studies are characterized by proposing conceptual or generic solutions, like reference models, standards, and metamodels, as well as technical guidance and recommendations, methodologies and life-cycle models, without focusing on any specific integration layer. Ontologies in Semantic EAI: Adoption over the years (RQ3, RQ4), Kinds (RQ5), and Languages/Formalisms (RQ6). The adoption of ontologies in order to promote semantic EAI has grown over the years, as we can see in Fig. 4. The period from 2001 to 2003 reflects the initial phase of adoption, when the number of studies that did not use ontologies was greater or equal than the number of studies that used ontologies. From 2004, on the other hand, and, mainly, from 2007, the use of ontology became the principal means to promote semantic EAI, achieving more than 70% of the studies. Also, the set of all studies that use ontology represents about 71.8% of all the selected studies, indicating a high level of adoption. Petri nets, UML (Unified Modeling Language) models, standards for data exchange, formal languages for event composition, concept hierarchy, etc., were some of the other techniques used for addressing semantics in EAI. These techniques were used in the 28.2% studies that did not use ontologies, although some have appeared in studies that used ontologies. Fig. 4. Adoption of ontologies in semantic EAI along the years Table 3 presents the percentage of studies that use ontologies per integration layer, and the numbers reflect some equivalence. However, we have two exceptions: (i) none (0%) of the studies that focus only on Process layer uses ontology; and (ii) there is a balance regarding the use of ontologies in studies that do not focus on any layer. Besides analyzing the adoption of ontologies along the years, we aimed at identifying the kinds of ontologies that have been used. We identified 5 studies that use Top-level ontologies, which represent 5.4% of the studies that use ontologies. Table 4 presents these studies and the respective top-level ontologies they use. 
Study Publication year Top-layer ontology [START_REF] Martín-Recuerda | Application integration using Conceptual Spaces (CSpaces)[END_REF] 2006 PSL (Process Specification Language) Ontology [START_REF] Alazeib | Towards semantically-assisted design of collaborative business processes in EAI scenarios[END_REF] 2007 DOLCE -SUMO alignment [START_REF] Bouras | Semantic integration of business applications across collaborative value networks[END_REF] 2007 DOLCE -SUMO alignment [START_REF] Paulheim | Application integration on the user interface level: An ontologybased approach[END_REF] 2010 DOLCE [START_REF] Treiblmayr | Integrating GI with non-GI services -showcasing interoperability in a heterogeneous service-oriented architecture[END_REF] 2011 DOLCE The various studies claim to represent ontologies using a variety of formalisms and techniques, ranging from Semantic Web languages to more simplistic data representation techniques. Based on this aspect, we identified ten categories: "OWL", "RDF and RDFS", "XML", "OIL, DAML and DAML+OIL", "OWL-S", "WSMO", "Knowledge Representation", "Own language", "Other", and "None". The first six categories refer directly to a specific technology. The "Knowledge Representation" category represents languages or formalisms associated to knowledge representation languages (Description logic, First-order logic, Frames, etc.) and graphical representations such as UML and Conceptual Maps, among others. The "Own language" category represents languages or formalisms that were proposed in the context of the corresponding work itself. The "Other" category groups technologies that did not appear in a representative number (three studies or less), including KIF, F-Logic, OCML, Common Lisp, Relational database schema and RDF4S. The "None" category groups studies that only propose the use of ontologies, but do not make commitment to any specific language/formalism. The Fig. 5 presents the percentage of studies per category (a study can fit in more than one category). Fig. 5. The percentage of studies per category of ontology languages We can notice a trend in using Semantic Web technologies, mainly OWL (29%), OWL-S (18%), and RDF/RDF-S (10%). Concerning ontology-based languages for service description, OWL-S (18%) and WSMO (3%) stand out. Despite that WSMO can be used in association with OWL, the largest number of studies used OWL-S instead of WSMO due to a closer relation between OWL and OWL-S. The other categories do not represent, individually, a high number of studies. However they reflect a diversity of ontology representation languages used in the semantic EAI initiatives. It is worthwhile to point out that 8% of the studies do not address any aspect of formalization/implementation, i.e., they just suggest the use of ontologies by proposing general architectures, life-cycle models, guidelines, etc. Discussion Based on results presented in the previous section, in this section, we discuss some important findings and limitations of this mapping. Semantic EAI Efforts over the Years. We consider that the distribution of studies along the years reflects the research efforts in semantic EAI, which suffer influence of the adoption of semantic technologies, mainly ontologies. In our view, the chart shown in Fig. 1 can be analyzed roughly according to the Gartner Hype Cycles [START_REF]Gartner Hype Cycles[END_REF]. The period between 2001 and 2003 corresponds to the "Technology Trigger" phase. The year of 2008 corresponds to the "Peak of Inflated Expectations". 
The years of 2009 and 2010 correspond to the "Trough of Disillusionment". The lack of change from 2010 to 2011 suggests that we are aimed towards the remaining phases: "Slope of Enlightenment" and "Plateau of Productivity". Business Application Domains in Semantic EAI. The identified diversity of business application domains reflects the coverage of the EAI research area, and, therefore, its relevance. Moreover, we notice that, although traditional business application domains are still the most exploited, EAI initiatives span several niche application domains although in lower rate, characterizing a Long Tail-like [START_REF] Levine | Statistics for Managers using Microsoft Excel[END_REF] distribution (cf. Fig. 2). The domain of "Logistics, Planning and Asset Management" has had the largest representativeness, possibly due to the focus on integration that drives this kind of business, which is founded on interoperation in supply chains. Focus on the Integration Layers. We have observed a predominant number of studies addressing the message/service layer. We believe that this can be justified by the role that functionalities (represented by the message/service layer) play in order to promote the link between data sources and business processes, and the increasing interest in service-oriented architectures in the past decade. We have observed that many of the integration solutions at the message/service layer also consider process technology, which has been seen as a clear trend in EAI. Furthermore, we have observed a low number of studies that focus only on the process layer (3%), suggesting that process layer integration depends on message/service layer integration. Moreover, a considerable number of studies (44%) focus on more than one layer, indicating that integration initiatives have established relations between integration layers to achieve interoperability. Ontologies in Semantic EAI. We have observed that, in the past decade ontologies have become predominant in the semantic approaches to EAI. Ontologies have been used by the solution approaches in order to achieve integration through the various integration layers (data, message/service and process). Regarding the languages and formalisms used to build ontologies in the context of EAI initiatives, we have observed a predominance of Semantic Web languages, leading to ontologies which should be characterized as lightweight ontologies [START_REF] Guizzardi | On Ontology, ontologies, Conceptualizations, Modeling Languages, and (Meta)Models[END_REF]. We have also noted that a number of data representation techniques have been referred to by the studies as ontology representation techniques, indicating a rather permissive use of the term ontology in the literature and a wide variation in what is considered an ontology. Considering the kinds ontologies employed, we can conclude that the use of top-level ontologies in EAI initiatives is relatively underexplored. Nevertheless, these ontologies have gained some attention in the latest years (see Table 4). Limitations of this Mapping. Due to the fact that some stages were performed by only one of the authors, some subjectivity may have been introduced. To reduce this subjectivity, a second author was responsible for defining a random sample (about 35% of the studies) and performing the same stages. The results of each reviewer were then compared in order to detect possible bias. Moreover, terminological problems in the search strings may have led to missing some primary studies. 
Thus, we performed simulations in the selected databases and included a large number of synonyms in the search string. We decided not to search specific (non-indexed) conference proceedings, journals, or the grey literature (technical reports and works in progress), having worked only with studies indexed by the selected electronic databases. The exclusion of these other sources makes the mapping more repeatable, but with the consequence that we cannot rule out that some valuable studies may have been excluded from our analysis. Finally, the classification of studies regarding their focus on the data, message/service and process layers is not straightforward, due to the variety of possible approaches and the inconsistent use of terminology in the literature. To achieve a more consistent analysis, the classification of some studies was discussed in meetings. Thus, we cannot ensure that the results concerning the layers are fully repeatable, due to some level of subjectivity in this classification. Conclusions This paper presented a systematic mapping in the context of semantic EAI. Six research questions were defined and addressed, investigating the following aspects: (i) business application domains in semantic EAI initiatives; (ii) focus on the various integration layers; and (iii) the adoption of ontologies in semantic EAI. The contribution of this work is to make evident some aspects of semantic EAI research efforts that can drive future research. In this context, we highlight the following conclusions: (i) most studies in semantic EAI (75%) address message/service layer integration; (ii) ontologies have become predominant in semantic approaches to EAI; (iii) Semantic Web technologies have been widely adopted in semantic EAI efforts (with OWL being the most common language for ontology representation in the sampled studies); and (iv) the use of top-level (foundational) ontologies, although not yet widespread, has emerged as a new trend in the second half of the period investigated. As future work, we plan to deepen our analysis of the use of ontologies in semantic EAI. In particular, we intend to explore how ontologies have been used in semantic EAI, focusing on the role of ontologies in the integration approach. Further, we intend to investigate how the languages/formalisms used to represent ontologies influence the integration solutions.
Fig. 1. Distribution of the selected studies over the years
Table 1. Results of the selection process stages.
Stage | Criteria | Analyzed content | Initial no. of studies | Final no. of studies | Reduction per stage (%)
1st Stage | Eliminating duplications | Title and abstract | 702 | 290 | 58.6%
2nd Stage | IC1, EC1, EC2, EC3 and EC4 | Title and abstract | 290 | 245 | 15.5%
3rd Stage | IC1, EC2, EC3 and EC4 | Whole text | 245 | 135 | 44.8%
4th Stage | Studies published before 2001 | --- | 135 | 132 | 2.2%
5th Stage | Studies not accessed | --- | 132 | 128 | 3.0%
Table 2. Data collection form. 
Field | Description | Classification schema
ID | Unique identifier | Not applicable
Bibliographic reference | Authors, title, conference or journal, and publication year | Not applicable
Business application domain(s) | Business application domains where study was applied | Not defined a priori
Integration focus | The integration layer(s) which is(are) the focus of the study | [Integration at data layer, Integration at message/service layer, or Integration at process layer]
Kind(s) of ontologies | Kind(s) of ontologies used in the study | [Top-level ontologies, or Low-level ontologies]
Ontology language(s) | Languages/formalisms used to implement/create ontologies | Not defined a priori
4.3 Data Synthesis and Results
Semantic EAI Efforts over the Years. In order to offer a general view about the efforts in the semantic EAI area, we present in Fig. 1 a distribution of the selected studies (128) per published year. We can note a growth in the number of published studies from 2001 to 2008, which is characterized by two moments of relative stabilization: from 2001 to 2003, and from 2004 to 2006. After 2008, when we have observed the largest number of published studies, the number of studies decreased until 2010 and remained stable in 2011.
Table 3. Percentage of studies that use ontology per integration layers.
Integration layer | Studies that use ontology (%)
Data layer (only) | 71%
Message/Service layer (only) | 75%
Process layer (only) | 0%
Data and Message/Service layers | 86%
Message/Service and Process layers | 76%
Data, Message/Service, and Process layers | 87%
Without focus on any layer | 45%
Table 4. Studies that use top-level ontologies.
Acknowledgments. This research is funded by the Brazilian Research Funding Agencies FAPES (Grant 52272362/11), CNPq (Grants 483383/2010-4 and 310634/2011-3) and PRONEX (Grant 52272362/2011).
39,080
[ "1001859", "1001860", "1001861" ]
[ "470043", "485778", "485778", "485778" ]
01474212
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474212/file/978-3-642-36796-0_18_Chapter.pdf
Sabina El Haoum email: [email protected] Axel Hahn email: [email protected] Using Metamodels and Ontologies for Enterprise Model Reconciliation Keywords: enterprise modeling, semantic annotation, model reconciliation, inter-model relations Modeling the enterprise from different views, at different levels of abstraction, and in different modeling languages yields a variety of models. Oftentimes the models referring to the same subject exist independently of each other and their semantic relations are hard to discover or to analyze. This fact hinders the effective exploitation of enterprise models for the purpose of integration and interoperability. The method proposed in this paper is based on semantic annotations and aims for the externalization and machine readability of the model contained information. This assures the accessibility for further automatic processing and facilitates the discovery and analysis of inter-model relations. Introduction In today's economy enterprises operate in a fast changing environment and their competitiveness heavily depends on their ability to quickly respond to these changes in an adequate manner. In this context, decision makers use enterprise models as a means to master this complexity. Depending on the focus in a particular case, models allow to take a certain view and abstraction on the enterprise and concentrate on the goals, processes, structures, competencies, etc. Further, particular models can be broken down into more detailed sub-models. Overall, this yields a "collection of more or less interrelated, special-purpose models" [START_REF] Vernadat | Enterprise Modeling and Integration: Principles and Applications[END_REF]. In contrast to modeling activities known from the field of operations research, business process (re-)engineering, organizational design etc., enterprise modeling accounts for the "need to focus on enterprises as a whole, or at least on a larger set of interacting components, within organizationtaking a more 'total systems' approach" [START_REF] Fraser | Managing Change through Enterprise Models[END_REF]. According to [START_REF] Molina | Enterprise Integration and Networking: Issues, Trends and Vision[END_REF] the main motivations for enterprise modeling are:  The possibility to analyze the enterprise, in order to gain a better understanding and to enable the management of system complexity.  Explicit documentation of enterprise knowledge (know-what, know-how, and know-why).  Improved change management and the possibility to apply enterprise engineering methods.  Enterprise integration and interoperability. Specifically the integration potential lying in enterprise modeling is examined in several works including [START_REF] Molina | Enterprise Integration and Networking: Issues, Trends and Vision[END_REF], [START_REF] Weston | Steps towards enterprise-wide integration: a definition of need and firstgeneration open solutions[END_REF], [START_REF] Li | Introduction to Enterprise and System Modeling[END_REF]. Fig. 1 illustrates possible axes of model-based system integration within enterprises:  Enterprise Hierarchy: From management to production. This integration direction is sometimes referred to as "vertical integration" [START_REF] Vernadat | Enterprise Modeling and Integration: Principles and Applications[END_REF].  Value Chain: From procurement to distribution. 
This integration direction is sometimes denoted "horizontal integration" [START_REF] Vernadat | Enterprise Modeling and Integration: Principles and Applications[END_REF].  Product Life Cycle: From product development to support. Fig. 1. Possible axes of integration within enterprises, adapted from [START_REF] Li | Introduction to Enterprise and System Modeling[END_REF] The labeled endpoints of each axis denote just the two extremes of the integration dimension. E.g. in the case of integration along the product life cycle axis, models from the product development, design, production and support are involved. Further, enterprise models play an important role with respect to achieving interoperability in and between enterprises. Interoperability problems are concerned with different dimensions (data, service, process, business [START_REF] Chen | Enterprise Interoperability Framework[END_REF], [START_REF] Chen | Architectures for enterprise integration and interoperability: Past, present and future[END_REF]) and have to be addressed at different levels of the enterprise (business, knowledge, ICT systems [START_REF] Chen | European initiatives to develop interoperability of enterprise applications-basic concepts, framework and roadmap[END_REF]). Considering these characteristics of interoperability problems, Ralyté et al. emphasize that they cannot be isolated to a particular level. Rather, it is required to take a holistic Hierarchy Value Chain Management Production Procurement Distribution Development Support Prod. Life Cycle perspective and handle all aspects [START_REF] Ralyté | A knowledge-based approach to manage information systems interoperability[END_REF]. In this context, enterprise models are an important enabler. The remainder of the paper is organized as follows: Section 2 is dedicated to the problem statement. It sheds light on some factors limiting the exploitation of the integration and interoperability potential of enterprise models. Section 3 describes the related work. The proposed solution is presented in section 4, where first high level requirements are formulated, followed by the proposed line of action and the benefits which the authors expect from its implementation. Section 5 contains a conclusion and an outlook. Problem Statement In practice, the potential of enterprise modeling oftentimes cannot be fully exploited. Some limiting factors are:  Different views: Enterprise models come in a variety of models. E.g. product models specifying the characteristics of products, organizational diagrams dealing with the organizational structure of the enterprise, process models dedicated to the activities carried out in the enterprise, to list just a few. Each of these models takes a specific view on the enterprise or some part of it focusing on a certain aspect (e.g. products, organizational structure, processes). As the views reflect the same system from a different angle a certain degree of overlap is unavoidable. In order to maintain a coherent view of the whole system it is crucial to reveal the relations between the overlapping models [START_REF] Li | Enterprise and Information System Architectures. Modeling and Analysis of Enterprise and Information Systems[END_REF].  Different levels of abstraction: Enterprise modeling can take place in a top-down manner. The starting point then is some high-level perspective on the whole system, which by means of decomposition gradually is broken down to more detailed information about parts of the system. 
Alternatively, it is possible to proceed in a bottom-up mode, where "isolated and limited data are collected and then their relationships are mined before the whole system structure can be formed" [START_REF] Li | Introduction to Enterprise and System Modeling[END_REF].  Different project stages: Enterprise models are used in different project stages to represent: (a) analysis, (b) design, and (c) implementation. Accordingly, on can distinguish (a) as-is models, (b) to-be models, and (c) implementation models [START_REF] Chapurlat | Verification, validation, qualification and certification of enterprise models: Statements and opportunities[END_REF].  Different modeling languages: Enterprise models can be expressed in terms of various modeling languages. Some modeling languages are specific to a certain view, in the context of enterprises e.g. Petri Nets can be used to represent business processes, but are rather unsuitable for modeling the structure of the organization. Other modeling languages offer different diagram types to enable the modeling of different aspects (e.g. UML activity diagrams, class diagrams1 etc. or the various IDEF diagram types2 ). The existing relations between models referring to the same subject of modeling but expressed by means of different modeling languages oftentimes remain unrevealed.  Informal modeling languages: Studies of the modeling practice in Australian enterprises found Entity Relationship (ER) diagrams, flowcharts and UML based models to be the most frequently used modeling techniques [START_REF] Davies | How do practitioners use conceptual modeling in practice?[END_REF]. A similar picture emerges in German enterprises, where ER diagrams, UML, and Event-driven Process Chains are most widely used [START_REF] Fettke | Ansätze der Informationsmodellierung und ihre betriebswirtschaftliche Bedeutung: Eine Untersuchung der Modellierungspraxis in Deutschland[END_REF]. These modeling techniques are popular as they come with graphical notations but their downside is that they are not suited for the application of formal analysis methods. This shortcoming has been described in the literature [START_REF] Li | Enterprise and Information System Architectures. Modeling and Analysis of Enterprise and Information Systems[END_REF], [START_REF] Panetto | Enterprise integration and interoperability in manufacturing systems: Trends and issues[END_REF].  Differences in terminology: The enterprise models make use of natural language to label model elements. Different modelers may use different terms in their models even when they describe the same (part of a) system. Depending on the modeler, his background, his position in the enterprise etc. different terminology flows into enterprise models and results in terminological mismatches. All above mentioned aspects cause a situation of poor model integration and limited interoperability. In an ideal setting, a unified enterprise modeling approach would constitute the solution to this problem. There exist various Enterprise Architecture frameworks supporting unified enterprise modeling (see [START_REF] Li | Enterprise and Information System Architectures. Modeling and Analysis of Enterprise and Information Systems[END_REF] for a survey). However, in practice greenfield projects are seldom and enterprises facing reorganization projects or undergoing mergers and acquisitions have to deal with legacy systems [START_REF] Bailey | Enterprise Ontologies -Better Models of Business[END_REF]. 
What is required is a means to externalize the inter-model relations in order to overcome the modeling islands built around specific modeling languages, views etc. The authors argue that the approach presented in the paper at hand helps in this situation as it offers a method to deal with diverse models. It allows establishing semantic annotations and therefore facilitates the application of advanced analysis methods. Related Work In recent years Semantic Web methods as a means to achieve model-based integration have been discussed in various works. Liao et al. [START_REF] Liao | Semantic Annotation Model Definition for Systems Interoperability[END_REF], [START_REF] Liao | Formalization of Semantic Annotation for Systems Interoperability in a PLM environment[END_REF] describe semantic annotation of models for the purpose of information systems interoperability. Bräuer and Lochmann [START_REF] Bräuer | An ontology for software models and its practical implications for semantic web reasoning[END_REF], [START_REF] Lochmann | HybridMDSD: Multi-Domain Engineering with Model-Driven Software Development using Ontological Foundations[END_REF] investigate the use of semantic technologies in model-driven software development with multiple domain-specific languages. In their work, Agt et al. [START_REF] Agt | Semantic Annotation and Conflict Analysis for Information System Integration[END_REF] consider the semantic conflict analysis of different models at different abstraction levels of the Model Driven Architecture approach. Several works are dedicated specifically to the semantic enrichment of business process models. The work of Fellmann et al. [START_REF] Fellmann | A Query-Driven Approach for Checking the Semantic Correctness of Ontology-Based Process Representations[END_REF] examines the semantic constraint checking in process models. Missikoff et al. also focus on business process models. They use the BPAL (Business Process Abstract Language) to achieve a formal representation of the business semantics in a Business Process Knowledge Base. Born et al. [START_REF] Born | User-Friendly Semantic Annotation in Business Process Modeling[END_REF] consider in their approach the semantic enrichment of Business Process Modeling Notation (BPMN) models. Lin et al. [START_REF] Lin | Semantic Annotation Framework to Manage Semantic Heterogeneity of Process Models[END_REF] propose a Process Semantic Annotation Model, which based on a metamodel annotation links content and goal annotation to the represented process. The authors of [START_REF] Boudjlida | Enterprise Semantic Modelling for Interoperability[END_REF] and [START_REF] Boudjlida | Annotation of Enterprise Models for Interoperability Purposes[END_REF] turn their attention to interoperability of enterprise models for the purpose of model exchange. They formulate the need for effective support of the semantic annotation process. The Astar (respectively A*) annotation tool [START_REF] Vujasinovic | Semantic Mediation for Standard-Based B2B Interoperability[END_REF] represents one prototype tool for semantic annotation. In the work of Fill [START_REF] Fill | On the Conceptualization of a Modeling Language for Semantic Model Annotations[END_REF] a semantic model annotation language is proposed. Integration of enterprise models can also be based upon their metamodels [START_REF] Karagiannis | Metamodeling as an Integration Concept[END_REF]. 
In [START_REF] Kühn | Enterprise Model Integration[END_REF] an object oriented metamodel is used as integration foundation for heterogeneous modeling languages. However, the domain semantics aspect is not included in this type of work. A further line of related research is the field of model comparison. E.g. in the work of Gerke et al. [START_REF] Gerke | Measuring the Compliance of Processes with Reference Models[END_REF] the compliance of process models with reference models is examined. They identify the difficulty to overcome different levels of detail in the models to be compared. Combining Metamodels and Ontologies for Model Reconciliation The proposed solution aims for a comprehensive externalization of the information contained in enterprise models. To realize this, a combination of metamodeling and domain ontologies is used. The metamodel is the model of the modeling language itself, it defines a set of modeling artifacts and the valid usage of these artifacts [START_REF] Favre | Megamodelling and Etymology[END_REF]. Domain ontologies are machine readable representations of the concepts in the application domain and the relations among those concepts [START_REF] Guarino | What is an Ontology?[END_REF]. In order to externalize the information of a particular model:  The model is expressed in terms of an ontological representation of the related metamodel.  The model is semantically annotated [START_REF] Boudjlida | Annotation of Enterprise Models for Interoperability Purposes[END_REF], i.e. linked to concepts in a domain ontology. In the literature the combination of metamodel information and semantic annotation linking the model to concepts of domain ontology has been presented as a method to cope with the various kinds of information contained in a conceptual model [START_REF] Karagiannis | Metamodeling as an Integration Concept[END_REF]. Fig. 2. Comprehensive model externalization using metamodel and domain ontology In the example in the figure, the Metamodel Ontology holds a description of the main concepts in the ER metamodel. Describing the model in terms of this Metamodel Ontology makes it possible to store the information about modeling artifacts being used (namely EntityType and RelationshipType). These annotations are shown in the figure as arrows pointing from the modeling artifacts to corresponding concepts in the Metamodel Ontology. Further, the appropriate properties (e.g. from_entity and to_entity) allow expressing the model structure. On the other hand, the Domain Ontology provides for the explication of the semantic of the domain terms used as labels in the model (domain semantics). In Fig. 2 these annotations appear as arrows pointing from the labels "Tutorial" and "Lecture" to the corresponding concepts in the Domain Ontology. Overall the result is a semantically annotated model holding all the information about modeling artifacts, model structure and domain semantics of a particular model instance. Accordingly a substantial part 3 of the model contained information is available in a machine readable form allowing further automatic processing. 3 Obviously, in the given example a complete representation of the model contained information is not reached. On the one hand, the cardinality information is not (yet) handled in the Metamodel Ontology and therefore no annotation of the cardinality of the "belongs_to" relation is recorded. On the other hand, the label "belongs_to" also bears no annotation (yet). 
However, even with this incomplete coverage of all model details, it is possible to process the annotations and harness the externalized information. 4.1 Requirements A solution using metamodel and domain ontology for the purpose of comprehensive model externalization has to fulfill certain requirements. These requirements are derived from the problem description presented in section 2 and formulated as follows:  R1: The solution facilitates the reconciliation of different views on the enterprise. It is not specific to a certain view, e.g. to process models only.  R2: The solution offers a means to overcome different levels of abstraction and to express that some (part of a) model is semantically related to some other more general or more specific (part of a) model.  R3: Enterprise models are described in terms of different modeling languages. Therefore, the solution must consider multiple modeling languages and their metamodels and be extensible with respect to additional modeling languages respectively metamodels.  R4: The system represents the model contained information in a machine readable manner and enables the application of formal analysis methods (like reasoning). In order to assure the applicability of available state of the art technology, the system processes ontologies in some standard ontology language (e.g. OWL 4 ).  R5: The system enables the user to create new semantic annotations, and to view and/or edit existing ones.  R6: The semantic annotation is not an end in itself. Based on the provided annotation the system discovers inter-model relations and supports their adequate visualization. Enterprise Model Reconciliation Methodology The proposed method works on models represented as individuals of an ontology describing the concepts of its metamodel (metamodel ontology). According to Requirement R3 for each modeling language under consideration the systems holds the corresponding metamodel ontology. Then the line of action is the following: 1. The enterprise models to be analyzed are stored as individuals of the respective metamodel ontology. 2. The semantic annotation of the models is performed. Based on the state of the art methods ( [START_REF] Kalfoglou | Ontology Mapping: The State of the Art[END_REF] presents a survey) annotation candidates are presented to the user, who can accept, modify or reject the proposed annotations. He can also add further annotations manually. The annotations are stored according to a predefined syntax, the so called annotation scheme [START_REF] Boudjlida | Enterprise Semantic Modelling for Interoperability[END_REF], [START_REF] Boudjlida | Annotation of Enterprise Models for Interoperability Purposes[END_REF] or annotation (structure) model [START_REF] Liao | Semantic Annotation Model Definition for Systems Interoperability[END_REF]. 3. The analysis process is executed. The result is presented in a Matrix Browser [START_REF] Ziegler | Matrix browser: visualizing and exploring large networked information spaces[END_REF] (see Fig. 6), where for a pair of models their relations are visualized in a userfriendly way. Method Demonstration We tested the method in the context of the evaluation of the Campus Management Software of the University of Oldenburg. The aim was to reconcile the process models (business perspective) with the data model (technical perspective) of the application. We use this case to explain the proposed method. 
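As an illustration of these three steps, the following minimal sketch (in Python with the rdflib library) stores two model elements as individuals of their metamodel ontologies, annotates them against a small campus domain ontology, and then queries for inter-model relations via transitive subclass reasoning. The namespaces, the annotatedWith property and the element names are illustrative assumptions, not the annotation scheme proposed in this work.

```python
# Minimal sketch (not the authors' implementation): externalizing model elements
# through a metamodel ontology and a domain ontology with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS

MM  = Namespace("http://example.org/metamodel/er#")    # metamodel ontology (ER)
EPC = Namespace("http://example.org/metamodel/epc#")   # metamodel ontology (EPC)
DOM = Namespace("http://example.org/domain/campus#")   # domain ontology
ANN = Namespace("http://example.org/annotation#")      # assumed annotation scheme
M   = Namespace("http://example.org/models#")          # model individuals

g = Graph()

# Domain ontology fragment: Lecture and Tutorial are specializations of Course.
g.add((DOM.Lecture,  RDFS.subClassOf, DOM.Course))
g.add((DOM.Tutorial, RDFS.subClassOf, DOM.Course))

# Step 1: store model elements as individuals of their metamodel ontologies.
g.add((M.erLecture,           RDF.type, MM.EntityType))   # ER entity "Lecture"
g.add((M.erTutorial,          RDF.type, MM.EntityType))   # ER entity "Tutorial"
g.add((M.epcDetermineCourses, RDF.type, EPC.Function))    # EPC function "Determine courses taught"

# Step 2: semantic annotation -- link model elements to domain concepts.
g.add((M.erLecture,           ANN.annotatedWith, DOM.Lecture))
g.add((M.erTutorial,          ANN.annotatedWith, DOM.Tutorial))
g.add((M.epcDetermineCourses, ANN.annotatedWith, DOM.Course))

# Step 3: analysis -- find pairs of elements from different models whose
# annotated domain concepts are related via (transitive) subclass relations.
q = """
SELECT ?epcElem ?erElem WHERE {
  ?epcElem a epc:Function ;  ann:annotatedWith ?c1 .
  ?erElem  a mm:EntityType ; ann:annotatedWith ?c2 .
  ?c2 rdfs:subClassOf* ?c1 .
}
"""
for row in g.query(q, initNs={"ann": ANN, "epc": EPC, "mm": MM, "rdfs": RDFS}):
    print(f"{row.epcElem} is semantically related to {row.erElem}")
```

Running the query reports that the EPC element annotated with Course is semantically related to the ER elements annotated with Lecture and Tutorial, which is exactly the kind of inter-model relation the analysis step is meant to expose and visualize.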
Suppose one of the processes to be support by the Campus Management Software concerns the preparation of a teaching activity report. According to the university's administration policies every lecturer has to provide such a report at the end of the term. To collect the data the lecture has to determine the courses he has taught and the theses he has supervised in the term under report. Fig. 3 holds the description of this simple process represented as Event-driven Process Chain (EPC) [START_REF] Scheer | Aris -Business Process Modeling[END_REF]. Fig. 3. EPC of the teaching activity report preparation The question is whether the Campus Management Software holds all the data required to prepare the teaching activity report. In order to find the answer the data model of the system is considered. Fig. 4 shows the relevant (in this case minimal) fragment of the data model. of modeling constructs with respect to a metamodel ontology is therefore neglected in the example. Fig. 5. Semantic annotations of the EPC and ER models In Fig. 5 the following annotations are symbolized by an arrow pointing from a term in a modeling artifact label to a term in the domain ontology. There are four annotations from the EPC model (on the left hand side) connecting the terms "courses"and "theses" to the equivalent terms in the ontology and two annotations from the ER diagram (at the top) externalizing the relation between the entity labels "Tutorial" and "Lecture" and the corresponding concepts in the ontology. Based on the provided annotations it is possible to relate the two models. The result of this analysis can be visualized as depicted in Fig. 6 in the Matrix Browser [START_REF] Ziegler | Matrix browser: visualizing and exploring large networked information spaces[END_REF]. There, the ER model appears horizontally in a tree form with the root node ERM, while the process model is shown in the tree with the root node (EPC). According to the domain ontology "Lecture" and "Tutorial" are two specializations of the concept "Course". Hence the system reveals the semantic relation between the function "Determine courses taught" and the event "Courses determined" of the EPC and the entities "Lecture" and "Tutorial" of the ERM fragment. From the result of the analysis the user can extract two relevant facts: 1. As for the courses taught by some lecturer, the relevant information is covered in the data schema fragment of the Campus Management Software. 2. The data schema fragment does not provide the required information about supervision of theses. With this result at hand it is now possible to answer the initial question whether the Campus Management Software holds all the data required to prepare the teaching activity report. Clearly, in the case of two simple and limited models, such an analysis can be done manually without any problem. However in more realistic settings, where we have to deal with a bundle of complex models, the possibility of automatic analysis of intermodel obviously is favorable. Discussion and Outlook The approach proposed in this paper is based upon a combination of metamodel information and semantic annotation linking enterprise models to domain ontologies. The result is the explicit and machine readable representation of the models under consideration. Therefore, from the implementation of the proposed method the authors expect:  A better model documentation and improved readability for the human user. 
 Enhanced automatic model analysis possibility with respect to different criteria, e.g. consistency of the models.  Qualitative and quantitative model comparison possibility. E.g. the question: Which percentage of one model is covered by another model?  Inter-model navigation possibility based on the discovered inter-model links. While the method demonstration in section 5 highlights the annotations between models and the domain ontology, the metamodel related information remains unused. As the instantiation of the metamodel ontology explicates the modeling artifacts and model structure, also this kind of information facilitates further examination. The work presented here is ongoing, therefore advanced case studies have to be conducted and the evaluation performed yet. Next steps include the implementation of a prototype and experiments with realistic model instances. One critical point which needs special attention in this context is the user interaction in the semi-automatic annotation process. On the one hand, it is desirable to limit the user involvement to the minimum. On the other hand, the quality of the annotation can be expected to increase with stronger user involvement. Another critical question is the availability of appropriate ontologies. While the elaboration of the metamodel ontologies appears a minor issue and partially is covered by the existing research in this field, it is clear that the required domain ontologies are possibly not yet been developed. This aspect is insofar crucial as it can be expected that the quality (i.e. correctness, sufficient level of detail, coverage etc.) of the domain ontologies has a direct impact on the result of the analysis. 1 . 1 Fig. 2 illustrates the idea by means of a simple example. The model under consideration may be a fragment of an Entity Relationship (ER) model belonging to a campus management system in the university domain. In the graphical representation of the ontology the solid arrows symbolize a subclass relation whereas the dashed lines indicate an object property. The arcs pointing from the ER diagram to the Metamodel Ontology on the left hand side and to the Domain Ontology on the right hand side Fig. 4 . 4 Fig. 4. ER representing a fragment of the campus management software data model Fig. 6 . 6 Fig. 6. Inter-model relations visualized in the Matrix Browser http://www.omg.org/spec/UML/ http://www.idef.com/ See http://www.w3.org/TR/owl2-overview/
26,823
[ "1002385", "1002386" ]
[ "486458", "486458" ]
01474214
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474214/file/978-3-642-36796-0_3_Chapter.pdf
Lea Kutvonen email: [email protected] Enhancing the Maturity of Open Service Ecosystems and Inter-Enterprise Collaborations The present business era is labeled by collaborations across enterprise boundaries and by utilisation of service-based computing. Pervasive computing utilities are created to match the basic business activities, such as contracting and breach management, adaptation of innovative business models, and collaboration management. Categories of computer assisted breeding environments and automated service collaboration management ecosystems have been developed to address these needs. However, a maturity framework is required for comparing solutions and indicating gaps in systems development and standardisation, and for adoption of a sufficient set of multidisciplinary research and evaluation methodologies. This paper first introduces steps towards a maturity model, focusing on features that contribute to the correctness of collaborations and scalability of the ecosystem. Second, it introduces the choices made in Pilarcos ecosystem. Finally, it discusses the need for standards and maturity models on this domain, and raises issues on the research methodologies required. Introduction The present business era is labeled by collaborations across enterprise boundaries and by utilisation of service-based computing. Computing utilities are created to match the basic business activities, such as contracting and breach management, adaptation of innovative business models, and collaboration management. These needs are addressed by trends of i) systems where breeding of collaborations across enterprise boundaries is facilitated by glocal applications (glocal= global + local aspects meet to make a pervasive environment) and ii) systems where business services are automatically composed using collateral business processes (choreographies) across organisational boundaries. Roughly, we can consider the first one to be focused on enterprise interoperability. Enterprise interoperability solutions are likely to be run by human decision-makers, because the aim is to address unexpected, new business opportunities that require very close planning and implementation phases to become profitable. The latter focuses on service interoperability, expecting enterprise and business concerns to be used as governance policies, rules and decision-making input. Service interoperability solutions are likely to be allowed to run automatically, addressing new, but expected business cases for which a sufficient amount of software modules are available for runtime composition in a self-administrative manner. Essentially, the technical computing and engineering solutions are very similar, but the expected users of these technical facilities differ; this caricature indicates how the themes of enterprise interoperability, service interoperability and service ecosystems complement each other. A third pattern to observe is the emergence of ecosystems, such as i) software ecosystem by Amazon, Nokia or Apple, ii) eBusiness networks as in supply chains, and iii) social networking platforms, like Facebook or LinkedIn. Each of these bring in elements of discovering new partners for collaboration with explicit or implicit behaviour patterns, business models of explicitly agreed nature and roles of involved partners, and capability of easy evolution. However, each of them addresses only one side of the expected mature ecosystem concerns. 
For a mature business service ecosystem we expect i) overcoming innovation boundaries [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]; ii) explicit contracting on business and technology level while preserving partner autonomy in the ecosystem and in collaborations [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF]; iii) trust management system to support private decision-making while allowing introduction of new partners into the ecosystem [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]; iv) breach detection and management in an automated, business situation sensitive way (for which the present business transaction techniques are not suitable [START_REF] Kutvonen | Inter-enterprise business transaction management in open service ecosystems[END_REF]). In the CINCO group, we have developed an open business-service ecosystem [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF][START_REF] Ruokolainen | 21: Framework for managing features of open service ecosystems[END_REF][START_REF] Kutvonen | ODP RM reflections on open service ecosystems[END_REF][START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF][START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF] architecture and supporting ecosystem infrastructure services [START_REF] Kutvonen | Interoperability middleware for federated business services in web-Pilarcos[END_REF], and furthermore, focused on the essential viewpoints and lifecycles [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF] that generate correctness criteria for collaborations [START_REF] Kutvonen | ODP RM reflections on open service ecosystems[END_REF]. This is to address the key problems in inter-enterprise computing today: i) ad-hoc engineering and integration, either directly or through engineering tools that do not have sufficient scientific basis; ii) insecure and misplaced decision-making, e.g., engineers implementing fixed strategies affecting business model or user experience, and iii) missing control and governance of the composed collaboration. This paper first introduces a comparison framework as a step towards a maturity model, focusing on features that contribute to the correctness of collaborations and to the scalability of the ecosystem. Second, we outline the choices made in the Pilarcos ecosystem infrastructure as an example. Thirdly, we discuss the need for standards and maturity models on this domain, and finally raise issues on the research methodologies required. Ecosystem comparison framework For the purposes of comparison we assume the concepts of (business) service, business process, collaboration, and interoperability to be present and that there is vocabulary for declaring their more detailed properties. In addition to these, as ecosystems have different focal areas, we split the comparison framework into three sections: i) innovation and engineering, ii) collaboration lifecycle and iii) ecosystem infrastructure concepts and service. Further, we must note how the ecosystem key concepts are connected across these viewpoints in each case. Interoperability: We define interoperability, i.e. 
the capability to collaborate, as the effective capability to mutually communicate information in order to exchange proposals, requests, results, and commitments. Technical interoperability is concerned with connectivity between the computational services, allowing messages to be transported from one application to another. Semantic interoperability means that the message content becomes understood in the same way by senders and receivers, both in terms of information representation and messaging sequences. Pragmatic interoperability captures the willingness of partners to perform the collaborative actions. This willingness to participate refers both to the capability of performing a requested action, and to policies dictating whether it is preferable for the enterprise to allow that action to take place. This differs from the standard definitions deliberately by bringing in terms that are important in business terms (like contracts and negotiations), and enforcing concepts from speech act theories to be utilised, due to their suitability for expressing business needs and their technical support. Due to parallel work, the definition also deviates from the term conceptual interoperability that is split into integrability (technical and syntactic), interoperability (semantic, pragmatic) and composability (dynamic, conceptual). Our definition captures the same levels but places composability as a goal of pragmatic interoperability. The comparison framework will include the questions about the support for conceptual, dynamic, pragmatic and semantic interoperability. Innovation and engineering: The traditional software engineering process produces monolithic artefacts that are built with the concepts supported by the engineering tools and the computing platform on which the artefacts are to be run. The process is based on knowledge on computer science and software engineering science, but omits key concepts from other scientific areas; there is little support for solving business issues, for addressing user experience alternatives, and crafting software module composability and management of compositions. Often, the hardest problems are on areas where the engineering phase is not the right time for solving the problem, but should allow operational time decisionmaking, because the decisions can depend on the presence of suitable partners, control of nonfunctional properties such as trust and privacy or transactionality, or regulations forced on the ecosystem to govern all its collaborations. Furthermore, the ecosystem evolution should not be considered only as a software versioning problem, just because the traditional engineering processes are not capable of handling other aspects. Service-oriented software engineering (SOSE) [START_REF]Service-Oriented Software System Engineering: Challenges and Practices[END_REF] enhances the perspective by enriching the engineering process with lessons learned in service sciences in terms of requirements, and SOC platforms [START_REF] Papazoglou | Web Services and SOA: Principles and Technology[END_REF] and development tools in terms of development environment needs. The environment needs to be aware of the memberships, regulation systems and pervasive infrastructure services for runtime compositions. These facilities allow services supported by software artefacts to be composed together to a manageable entity that is aware of its business context and its users' situational preferences. 
Special business-level challenges to address during the shared innovation and design phases include i) development of collaborative business models for independent partners; ii) partitioning of cost, risk and gained assets in the collaboration contract pattern; iii) trust between partners on being impartial at the design phase; and iv) management of collaborations being made possible by different collaboration and ecosystem members as their roles in the ecosystem require. The innovation phase creates declarations of business processes and collaboration models for the collaboration lifecycle support processes to utilise. Thus the innovation phase is one of the sources of collaboration correctness criteria, and it is preferably kept impartial of ecosystem member incentives. Collaboration lifecycle: The collaboration lifecycle includes the traditional phases of i) establishment, ii) operation (or enactment) and control, and iii) dissolution, but furthermore also iv) collection of experience information for the improvement of further ecosystem activities. Activities in these phases can be mapped to business terms like tenders, proposals, commitments, breaches, and opinions. The collaboration contract is an essential concept, as all the correctness criteria from the ecosystem and from the collaboration partners accumulate into the contract. The classifying questions are captured in Table 1.
Table 1. Classification questions for collaboration lifecycle.
- All phases: Is the process a human process with computing support, or automated with human interventions supported? Is the contract involved a multiparty contract or client-server-based? Do the processes always allow partners to make subjective decisions, or is there a centralised decision-making point? Is the decision-making logic binary or deontic? Is the contract dynamic? Does it involve business or technology details or both?
- Collaboration establishment: Nature of information involved: i) partners and their roles in the collaboration pattern. Nature of processes involved: i) decision-making on trust for the suggested partners or services; ii) interoperability checking; iii) agreement process: level of automation, distribution of the control, quality of the resulting agreement.
- Enactment and control of a collaboration: Levels of interoperability considered; equality of partners in enactment or centralisation of orchestration control; support for subjective monitoring of processes and NFPs; whether expectations on the communication platform are implicit, explicitly stated, or requirements by which an open binding can be constructed at operational time.
- Collaboration dissolution: Can be triggered by any partner at completion of the task or notification of a breach?
- Experience collection: Metrics for successes and failures; generation of experience information for reputation systems; feedback generation for BPR and service improvement.
The essential differences in system architectures according to our surveys include splitting to i) enterprise interoperability or service interoperability systems; ii) dynamism of the collaboration contract and the availability of control interfaces at the enactment phase; iii) multiparty vs client-server constellations; and iv) methods for keeping the membership of the ecosystem in control. Collection of partner information Process of collecting: i) How is the required information produced and published? Granularity of services? Notation suitable (conceptual coverage), extendable, efficient? 
ii) Does it cover processes, collaboration models, service behaviour /interfaces, NFPs? Information collected: i) How partners are identified? Trustworthy tracking of service offers for contractual needs? ii) Does service knowledge carry explicit requirements information about the runtime service bindings? Information made available: i) Suitability to predefined collaboration structures information available? Collaborations evolvable or fixed? ii) Kind of semantic interoperability support? Does the available information in the ecosystem level databases suffice for interoperability testing? iii) Matching of services to collaboration structure is supported by an ontology or type system? iv) Is there any reputation information associated? Are the services trustworthy, traceable, attributed on their quality? Is the partner/service repository impartial? Partner discovery process Directed for browsing or automated matching, discovery by demand? Client-server or multiparty search with aim to contracting? Quick temporal partner selection or forming strategic networks? Private agent or third party or distributed? Considers interoperability and NFPs? Partner/service selection: Level of automation? If automated, areas of metapolicies for decisions (in what kind of situations automated decisions are permitted)? Style of trust decisions taken? Business needs addressed? Provides for automated eContract negotiation? For contract enforcement? Considers performance and utility aspects? Service selection Do service offers carry interface syntax; behaviour description; service provider; location; type description availability; awareness of resources; awareness of trust; dynamic properties in offers? correctness of information, traceability of announcements? Security and trustworthiness of offers covered? Enactment and monitoring: Scope: external processes only or integrated internal processes? Enactment: active agents or WFMC engine (workflow management engine) or model interpreter or translated process description to implementation? Semantic data transformations explicit or implicit? Breach detection: immediate or delayed? NFP with business issues vs technical SLA? Who are the controllers? Dissolution: BPI metrics? Who provides reports and when? How is information utilised? Reputation information model and processes? Table 2. Classification questions for infrastructure facilities. Ecosystem infrastructure facilities: We take the ecosystem infrastructure as an unbiased, trusted party, and all ecosystem members have systemic trust into its services for each of the collaboration lifecycle phases. The questions investigating the variance within available services are shown in Table 2. In addition, the comparison of solutions should take into account how different threat scenarios have been addressed. Conceptual connectivity between viewpoints: While creating ecosystem models, a small set of essential concepts appear in closely related forms in different viewpoints. For example, a collaboration model under design in the engineering viewpoint will reappear as a contract structure during the collaboration, and eventually will enforce structure for distributing gains and losses for the collaboration members at the dissolution. For a mature ecosystem model, we require these related concepts in different viewpoints be bound together in the lifecycle models. 
Connectivity should be defined for main concepts, such as contract, business service, breach recovery processes, and NFP (nonfunctional property) frameworks, just to name a few. In a mature ecosystem, the connectivity is managed by metainformation governance, and can be evolved as needed at the ecosystem level. These connections are mostly missed when projects focus on one viewpoint only, but the consequences are serious: Interoperability and correctness failures are often caused by ad hoc transitions from one phase to another. Pilarcos open service ecosystem architecture The Pilarcos open service ecosystem architecture intertwine engineering, governance and operational needs of collaborations and thus involves: -enterprises providing and needing each others' business services, with their published business service portfolios [2, 1]; -business-domain governing consortia, with their published business scenarios and business models [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]; -infrastructure service providers of individual functions such as service discovery and selection, contract negotiation and commitment to new collaborations, monitoring of contracted behaviour of partners, breach detection and recovery [START_REF] Kutvonen | Interoperability middleware for federated business services in web-Pilarcos[END_REF][START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF][START_REF] Kutvonen | Automating decisions for inter-enterprise collaboration management[END_REF] and reputation flows from past collaborations [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]; -consortia and agencies that define legislative rules for acceptable contracts [START_REF] Kutvonen | ODP RM reflections on open service ecosystems[END_REF] and joint ontology about vocabulary to be used for contract negotiation, commitment and control [START_REF] Ruokolainen | Modelling framework for interoperability management in collaborative computing environments[END_REF][START_REF] Kutvonen | ODP RM reflections on open service ecosystems[END_REF]; and -infrastructure knowledge-base providers that maintain the information underlying the ecosystem infrastructure functions; this role is essential in enforcing all conformance rules of all ecosystem activities [7, 11, 1]. Key concepts and functionality Three key concepts in the Pilarcos open service ecosystems are those of interenterprise collaborations, eContract agents and ecosystem infrastructure. The Pilarcos architecture views inter-enterprise collaboration as a loosely-coupled, dynamic constellation of business services; it involves multiple partners through their software-based business services and their mutual interactions. A business service is a software-supported service with a functionality suitable for a business need on the market and thus relevant for the networked business. In itself, each business service is an agent, in terms of being able to take initiative on some activity, being reactive to requests by other business services, and being governed by policies set by its owner. The relationship between business service and software supporting it resembles the relationship between an agent and web service [START_REF] Payne | Web services from an agent perspective[END_REF]. 
Each business services provides business protocol interfaces for each other, but also utilise locally provided agents for connecting to peer services through channels with appropriately configured properties (e.g., security, transactionality, nonrepudiation). The type of the service constellations is declared as business network model (BNM), expressed in terms of the roles and interactions within the collaboration, the involved member services, and policies governing the joint behaviour [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF]. Intuitively, a BNM describes a business scenario. The eContract agent governs the inter-enterprise collaboration and captures both business-and technical-level aspects of control, as well the large-granule state information to govern the dynamism of the collaboration. The eContract is structured according to a selected BNM. An essential part of the ecosystem is its ecosystem infrastructure, a set of CaaS agents (Collaboration-as-a-Service) that provide shared utilities for enterprises to discover and select services available in the ecosystem, negotiate and establish collaborations, govern those collaborations through eContract agents, and utilise reputation information and collaboration type information. From the business point of view, the Pilarcos ecosystem provides for the maturity of ecosystems by addressing at the same time four intertwining tiers [START_REF] Kutvonen | Multi-tier agent architecture for open service ecosystems[END_REF], as illustrated in Figure 1. The main ecosystem activities involve service engineering (left and bottom), ecosystem and collaboration governance (left and right), operational-time collaboration support (right and bottom), and ecosystem governance (rules within infrastructure in the bottom), as discussed below. The left side of Figure 1 depicts processes related to engineering steps at each involved enterprise or consortia. Here, metainformation is brought to the system by designers and analyzers: i) available services are published by service providers (enterprises including public and private sector providers), ii) the publicly known BNMs are created by teams of designers and published after acceptability analysis, and iii) regulations for conducting collaboration at administrative domains are fed in by enterprise and ecosystem administrators knowledgeable about local and international laws and business domain practices. This body of knowledge accumulates into metainformation repositories within the globally accessible infrastructure layer. The repositories only accept models that fulfil the set consistency criteria, thus providing a point of control. All created collaborations inherit suitable correctness criteria to be monitored at the operation time. This modeling tier is where service and collaboration in- novation take place, utilising the skills of designers and the feedback gathered from collaborations already operational. The methodologies to be used here apply service-oriented software engineering (SOSE) and model driven engineering (MDE) methods and tools. The modeling tier and ecosystem repositories together give a basis for evolution of the ecosystem with service and collaboration models [START_REF] Ruokolainen | Modelling framework for interoperability management in collaborative computing environments[END_REF][START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]. 
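To make the shape of this metainformation concrete, the sketch below (Python dataclasses) shows one possible way to represent a BNM and an eContract structured according to it. It is an illustration only, not the Pilarcos implementation; all field names and the example BNM are assumptions.

```python
# Illustrative sketch only: a business network model (BNM) declaring roles,
# interactions and policies, and an eContract structured according to a chosen
# BNM that records partner assignments and coarse-grained collaboration state.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BusinessNetworkModel:            # published in the BNM repository
    name: str
    roles: List[str]                   # e.g. ["buyer", "seller", "logistics"]
    interactions: List[str]            # choreography steps between the roles
    policies: Dict[str, str]           # joint policies governing behaviour

@dataclass
class EContract:                       # governs one inter-enterprise collaboration
    bnm: BusinessNetworkModel
    assignments: Dict[str, str] = field(default_factory=dict)  # role -> service offer id
    state: str = "negotiating"         # negotiating / operational / dissolved
    breach_reports: List[str] = field(default_factory=list)

    def assign(self, role: str, offer_id: str) -> None:
        if role not in self.bnm.roles:
            raise ValueError(f"role {role!r} is not defined by BNM {self.bnm.name!r}")
        self.assignments[role] = offer_id

    def all_roles_filled(self) -> bool:
        return set(self.assignments) == set(self.bnm.roles)

# Usage: a repository-published BNM is picked and populated with service offers.
bnm = BusinessNetworkModel(
    name="order-to-delivery",
    roles=["buyer", "seller", "logistics"],
    interactions=["order", "confirm", "ship", "pay"],
    policies={"nonrepudiation": "required"},
)
contract = EContract(bnm)
contract.assign("buyer", "offer-042")
contract.assign("seller", "offer-117")
contract.assign("logistics", "offer-009")
if contract.all_roles_filled():
    contract.state = "operational"
```

The point of the sketch is the dependency direction: the eContract refers to a published BNM and only accepts partner assignments for roles that the BNM declares, mirroring how collaborations inherit their correctness criteria from repository-controlled models.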
The right side of Figure 1 depicts the collaboration tier supporting the lifecycles of collaborations from the establishment to the termination phase. The collaboration lifecycle management is automated in all routine cases, and triggers human intervention in new or undefined situations. The automated management decisions can be commitments to collaborations or refusals to participate. In practice, the collaboration establishment is initiated by one of the partners suggesting the use of commonly known BNM that can be picked from the infrastructure repositories. Further, the infrastructure services help in discovery and selection of suitable partner services for the collaboration and running a negotiation protocol between the selected partners. Within the negotiation step, the local, private support agents of each partner consider especially the suitability of the collaboration for the enterprises' strategies and sufficiency of trust in partners. In the enactment and control phase, the local support agents provide protective monitoring and the required contract-related communication. In this way, the CaaS tier services become usable by enterprises or other organisations for making tenders, proposals, commitments, and to react to breaches, as well as initiating, negotiating, committing, and dissolving collaborations, and even, helping the subjective control of new kind of business transactions. The individual eContract is the key element for each collaboration as it governs that multiparty, dynamic agreement with details from business level to communication technology. The eContract also provides interfaces for each partner to notify their observations of the collaboration behaviour, deviations of the expected behaviour, their refusals to accept the recent progression of the collaboration, or their approvals on completing business transactions. The Pilarcos architecture emphasises on subjective and private decision-making support for partners on all collaboration phases. The arrows leading to the left at the right side of Figure 1 depicts the experience information gathered from all the collaborations in the ecosystem and providing feedback information for re-engineering and future decision-making processes in the ecosystem. The bottom part of Figure 1 represents the global, federated infrastructure services that participates the governance, engineering and collaboration management processes, i.e. the CaaS tier (Collaboration-as-a-service). The CaaS tier includes ecosystem-widely available infrastructure services, such as service discovery and selection [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF], eContracting [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF], breach management [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF], and reputationbased trust management system [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF] that allows the scaling of the ecosystem membership. 
Scaling can be achieved only by creating incentives for partners to behave according to their contracts, and especially according to the expectations set by ecosystem maintenance processes such as reputation exchange; this also helps enterprises in adjusting to rapidly changing business situations and in participating in the natural competition between collaborations and ecosystem members. The reputation-based trust management concept facilitates the scalability of the ecosystem. Here we can rely on social ecosystem studies [START_REF] Fehr | The nature of human altruism[END_REF]: the number of potential partners in the ecosystem is very limited if there are no established behaviour norms, and only slightly higher if misbehaviour is sanctioned. However, if leaving misbehaviour unreported is also considered misbehaviour, an increasingly large ecosystem can be kept alive. The reputation production mechanism, together with the negotiation step, where partners can reflect on the suitability of the collaboration for their strategies and on the potential risk predicted with reputation information, creates a cycle that has this necessary control function. It effectively emulates the social or legal system pressure of a business domain. This functionality is largely missing from other approaches. Further, the ecosystem tier is the source of ecosystem-level regulations, thus forming an explicit ecosystem engineering discipline. For each ecosystem this discipline has to be specialised individually. Comparative details Within the above frame, we take a more detailed look at Pilarcos using the maturity framework aspects. Comparisons to other systems can be found in, e.g., [START_REF] Kutvonen | Multi-tier agent architecture for open service ecosystems[END_REF]. Innovation and engineering are addressed by the SOSE processes [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF], which produce BNMs and service types into the infrastructure repositories (service offer repository, service type repository, BNM repository). Service type definitions form a basic vocabulary for declaring BNMs and publishing service offers, and can be reused. The BNMs can be designed collaboratively between multiple impartial organisations, and then be verified and validated for their suitability for the market domain. The vocabularies created by service types and BNMs eventually support the checking of pragmatic interoperability at the operational phase, as the business services in a collaboration do not have a joint inheritance hierarchy that would enforce interoperability. The decisions on partner selection and trust are postponed to collaboration establishment time. We have separated the model design phases from the collaboration establishment phase, to enable automation at the commitment phase and to keep the innovation phase separate, as the actor sets involved in these phases differ. The traditional virtual organisation breeding environment way (e.g., in ECOLEAD and CrossWork [START_REF] Mehandjiev | Comparable Approaches to IVE. Advanced Information and Knowledge Processing[END_REF]) of first choosing the partners and then basing the business processes on their capabilities actually forces a design phase for each individual collaboration, and requires the actors to be shared between these two phases. To ensure the correctness of collaborations [START_REF] Kutvonen | ODP RM reflections on open service ecosystems[END_REF], the infrastructure repositories must control the publication of offers or models following the rules provided by the ecosystem management. 
The control must consider the traceability of the declaration makers, the acceptability of the models in terms of best practices in a business domain and of regulatory rules, and the coherence of the repository contents, especially the asserted relationships between stored concepts. For collaboration lifecycle management the key agents are the private agents representing the involved enterprises and the eContract agent that governs the collaboration itself. The local support agents subjectively represent the enterprise, and provide a local interface to the ecosystem infrastructure services for the local business services. The enterprise agents are needed for the tasks of i) contract negotiation, ii) monitoring during collaboration operation, and iii) experience reporting when the collaboration terminates, either having reached its purpose or terminating prematurely due to breaches. A contract negotiator provides interfaces for application software or administrative interfaces to initiate collaboration establishment, or for responding to suggestions from other enterprises. The contract negotiator first utilises the populator to help in the initial service selection that is based on public information. As the populator provides suggestions for sets of interoperable partner services for the collaboration, the contract negotiator initiates the negotiation phase that involves private decision-making by all suggested partners. In the negotiation phase each suggested collaborating party can agree to join the collaboration, or refrain. The decisions are split into automatic rejections and approvals, and grey-area cases that are forwarded for human decision-making with expert tool support [START_REF] Kutvonen | Automating decisions for inter-enterprise collaboration management[END_REF][START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]. The decision-making is governed by enterprise policies [START_REF] Kutvonen | Automating decisions for inter-enterprise collaboration management[END_REF][START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF] related to i) strategic policies indicating what type of collaborations or which partners are of interest and worth investing resources to collaborate with; ii) reputation-based trust that weighs the anticipated risk against the tolerated risk level [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]; and iii) privacy preservation that may overrule otherwise acceptable collaborations due to too high privacy costs. Although trust and privacy are closely related, the decision-making processes on these issues are separate and parallel. Trust decisions weigh expected benefits against anticipated losses in a specific business case; privacy decisions guard access to private information, metainformation and behaviour patterns. We define trust as the extent to which one party is willing to participate in a given action with a given partner in a given situation, considering the risks and incentives involved. Trust decisions are subjective evaluations made by the trustor, targeting a given trustee and a given action in terms of standard assets shared between organizations: monetary, reputation, control and satisfaction [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]. 
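The following sketch illustrates how such a negotiation-time trust decision could be automated: the anticipated loss per asset, as estimated from reputation information, is compared against the tolerated risk level, and borderline cases are escalated to a human. The asset list follows the text above, but the class, thresholds and decision rule are illustrative assumptions rather than the Pilarcos decision component.

import java.util.EnumMap;
import java.util.Map;

// Illustrative sketch (not the Pilarcos implementation): a subjective trust
// decision that compares anticipated loss, estimated from reputation data,
// against the tolerated risk level per asset, and escalates grey-area cases.
class TrustDecision {
    enum Asset { MONETARY, REPUTATION, CONTROL, SATISFACTION }
    enum Outcome { ACCEPT, REJECT, ESCALATE_TO_HUMAN }

    // Tolerated loss per asset (policy of the trustor), and an escalation margin.
    private final Map<Asset, Double> toleratedLoss;
    private final double greyAreaMargin;

    TrustDecision(Map<Asset, Double> toleratedLoss, double greyAreaMargin) {
        this.toleratedLoss = toleratedLoss;
        this.greyAreaMargin = greyAreaMargin;
    }

    // anticipatedLoss would be derived from reputation information about the trustee.
    Outcome decide(Map<Asset, Double> anticipatedLoss) {
        boolean anyClearlyOver = false, anyNearLimit = false;
        for (Asset a : Asset.values()) {
            double tolerated = toleratedLoss.getOrDefault(a, 0.0);
            double anticipated = anticipatedLoss.getOrDefault(a, 0.0);
            if (anticipated > tolerated * (1 + greyAreaMargin)) anyClearlyOver = true;
            else if (anticipated > tolerated * (1 - greyAreaMargin)) anyNearLimit = true;
        }
        if (anyClearlyOver) return Outcome.REJECT;
        return anyNearLimit ? Outcome.ESCALATE_TO_HUMAN : Outcome.ACCEPT;
    }

    public static void main(String[] args) {
        Map<Asset, Double> tolerated = new EnumMap<>(Asset.class);
        tolerated.put(Asset.MONETARY, 10_000.0);
        tolerated.put(Asset.REPUTATION, 0.2);

        Map<Asset, Double> anticipated = new EnumMap<>(Asset.class);
        anticipated.put(Asset.MONETARY, 9_500.0);  // close to the limit -> grey area
        anticipated.put(Asset.REPUTATION, 0.05);

        TrustDecision d = new TrustDecision(tolerated, 0.1);
        System.out.println(d.decide(anticipated)); // ESCALATE_TO_HUMAN
    }
}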
We define privacy as the right of subjects to determine themselves for whom, for what purpose, to what extent, and how information about them, or information held by them, is communicated to others [START_REF] Shen | Privacy preservation approach in service ecosystems[END_REF]. Here, the subject can be a person, a social group, an organisation or an organisational group. As the result of the negotiation phase, an eContract agent is created. It comprises the collaboration metamodel, thus providing a shared-language view on the collaboration structure, behaviour, policies and abstracted state. The eContract provides interfaces for the collaboration partners for renegotiation, for epoch changes (where membership or responsibilities can be changed), for progressing to defined milestones in the business processes, and for declaring detected breaches. The logical eContract is physically replicated to the computing systems of each collaboration member. The private contract agents are responsible for keeping the local services in their governance in synchrony with the committed eContract. The eContracts include policies as rules of expected behaviour patterns. For policy expressions we use deontic logic [START_REF] Von Wright | Deontic logic[END_REF]. Deontic logic is not binary (denied/compulsory), but uses rules of prohibition, obligation and permission instead. This is necessary in an environment where there is no single policy maker or enforcer of the policies and the actors are independent of each other. Thus it is not possible to force a partner to refrain from an action, or to force that partner to take another action. However, it is possible to agree that taking certain actions is a violation of a prohibition and, in addition, to agree on the consequences of violations. Neither can the detailed functional or non-functional behaviour of the partners be (practically) agreed on in full, but some optional behaviour patterns can be allowed without triggering violation management. This is where permissions clarify the behaviour: something may optionally take place, and a specification exists for the follow-up behaviour. In the enactment and control phase, monitoring agents check the acceptability of the behaviour (messaging) [START_REF] Kutvonen | Interoperability middleware for federated business services in web-Pilarcos[END_REF]. The monitors receive rules from the eContract and from their local enterprise policy repositories. The deontic-logic policy approach allows us to make a clear distinction between violations of the contract and acceptable behaviour according to that contract [START_REF] Ruohomaa | From subjective reputation to verifiable experiences -augmenting peer-control mechanisms for open service ecosystems[END_REF]. However, each partner in the collaboration uses subjective rules for deciding whether to join the collaboration, or whether to report to the eContract a violation detected in the sequence of actions it is exposed to. The private enterprise rules and the eContract-based rules can be contradictory. At the negotiation phase only those policies are checked that are explicated both in the eContract and in the enterprise policies. The enterprises may change their local policies during the collaboration, and the arising contradictions can cause breaches of business obligations, or failures of quality-of-service agreements such as availability, timeliness and privacy preservation, or of non-repudiation and immutability. 
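A minimal sketch of how a monitor could classify observed behaviour against deontic rules is given below: prohibitions whose actions occur are reported as violations, obligations that never occur are flagged at evaluation time, and permitted actions pass with their agreed follow-up. This is an assumption-level illustration of the idea, not the Pilarcos monitoring middleware.

import java.util.*;

// Minimal sketch, not the Pilarcos monitor: deontic rules are not binary, so an
// observed action is classified as a violation, as fulfilment of an obligation,
// or as permitted optional behaviour with an agreed follow-up.
class DeonticMonitor {
    enum Modality { PROHIBITED, OBLIGED, PERMITTED }

    record Rule(String action, Modality modality, String followUp) {}

    private final Map<String, Rule> rules = new HashMap<>();
    private final Set<String> observedActions = new HashSet<>();

    void addRule(Rule r) { rules.put(r.action(), r); }

    // Called for each message/action observed on the collaboration channel.
    String observe(String action) {
        observedActions.add(action);
        Rule r = rules.get(action);
        if (r == null) return "unregulated: " + action;
        switch (r.modality()) {
            case PROHIBITED: return "VIOLATION: " + action + " is prohibited";
            case OBLIGED:    return "ok: obligation " + action + " fulfilled";
            default:         return "ok: permitted, follow-up expected: " + r.followUp();
        }
    }

    // Obligations that were never observed also count as violations at evaluation time.
    List<String> unfulfilledObligations() {
        List<String> missing = new ArrayList<>();
        for (Rule r : rules.values())
            if (r.modality() == Modality.OBLIGED && !observedActions.contains(r.action()))
                missing.add(r.action());
        return missing;
    }

    public static void main(String[] args) {
        DeonticMonitor m = new DeonticMonitor();
        m.addRule(new Rule("shipGoods", Modality.OBLIGED, null));
        m.addRule(new Rule("shareCustomerData", Modality.PROHIBITED, null));
        m.addRule(new Rule("requestExtension", Modality.PERMITTED, "renegotiateDeadline"));

        System.out.println(m.observe("requestExtension"));
        System.out.println(m.observe("shareCustomerData"));
        System.out.println(m.unfulfilledObligations()); // [shipGoods]
    }
}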
In detected breach situations, the partner needs to decide (automatically or through human intervention) whether the breach is serious enough for terminating or leaving the collaboration. In case of an essential breach, the eContract is notified to trigger recovery steps. The breach recovery process is defined as part of the eContract, as there are different categories of theoretical recoverability capabilities. At collaboration termination, successful or unsuccessful, experience reporting is required [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF]. The local agent feeds reports to reputation flow agents that aggregate reputation information, arranged into several asset aspects including monetary, reputation, and control assets. The reputation information becomes available for future trust decisions throughout the ecosystem. Therefore, a dynamic incentive mechanism is effectively created for ecosystem members to keep to their service offers and eContract commitments (including privacy rules), and especially to the reporting protocols [START_REF] Ruohomaa | The effect of reputation on trust decisions in inter-enterprise collaborations[END_REF][START_REF] Shen | Privacy preservation approach in service ecosystems[END_REF]. The ecosystem infrastructure provided by Pilarcos differs from related approaches. Instead of a simple service offer repository, in Pilarcos service discovery and selection are supported by a populator agent. The collaboration initiator selects a model from the public BNM repository and invokes the populator to find matching service offers for the remaining roles [START_REF] Kutvonen | From trading to eCommunity management: Responding to social and contractual challenges[END_REF]. The populator returns a contract proposal that ensures that the set of services it proposes matches the roles by service type, is not denied from working together by regulations, and is interoperable on the technical, semantic and pragmatic levels. Furthermore, the populator checks that the additional requirements indicated in all the involved service offers do not inhibit collaboration. New contract proposals can be picked within selected resource limits. In comparison with other service offer repositories (UDDI [START_REF] Bellwood | UDDI Version 3.0. UDDI Spec Technical Committee Specification[END_REF], ODP/OMG trader [START_REF]1: Information technology -Open distributed processing -Trading function: Specification[END_REF]), the fundamental difference is that the populator service provides multipartner matching instead of a client-server setup, and checks not only technical and semantic interoperability but also takes pragmatic interoperability aspects into account. The pragmatic aspects include views to BNMs, acceptable role combinations and environment contract information (i.e., requirements on the communication channel properties). The information base utilized by the populator agent is based on the ODP trading service. Further, the negotiation phase only refines the populator suggestion in terms of policy agreements and choices for the communication channel structures needed for dynamically configuring open bindings between business services. 
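As a rough illustration of populator-style multiparty matching, the sketch below fills every role of a business network model at once and keeps only offer combinations that are pairwise compatible; the compatibility predicate is a crude stand-in for the technical, semantic and pragmatic interoperability checks described above, and all names are hypothetical.

import java.util.*;

// Hypothetical sketch of populator-style multiparty matching: every role of a
// business network model must be filled, and the chosen offers must be pairwise
// compatible (standing in for technical/semantic/pragmatic interoperability checks).
class Populator {
    record Offer(String provider, String serviceType, Set<String> channelProperties) {}

    // roles: role name -> required service type
    static Optional<Map<String, Offer>> populate(Map<String, String> roles, List<Offer> offers) {
        return assign(new ArrayList<>(roles.keySet()), 0, roles, offers, new HashMap<>());
    }

    private static Optional<Map<String, Offer>> assign(List<String> roleOrder, int i,
            Map<String, String> roles, List<Offer> offers, Map<String, Offer> chosen) {
        if (i == roleOrder.size()) return Optional.of(new HashMap<>(chosen));
        String role = roleOrder.get(i);
        for (Offer o : offers) {
            if (!o.serviceType().equals(roles.get(role))) continue;                   // type match
            if (chosen.values().stream().anyMatch(c -> !compatible(c, o))) continue;  // pairwise check
            chosen.put(role, o);
            Optional<Map<String, Offer>> result = assign(roleOrder, i + 1, roles, offers, chosen);
            if (result.isPresent()) return result;
            chosen.remove(role); // backtrack
        }
        return Optional.empty();
    }

    // Placeholder for pragmatic interoperability: here, offers must share a channel property.
    private static boolean compatible(Offer a, Offer b) {
        return a.channelProperties().stream().anyMatch(b.channelProperties()::contains);
    }

    public static void main(String[] args) {
        Map<String, String> roles = Map.of("buyer", "PurchasingService", "carrier", "TransportService");
        List<Offer> offers = List.of(
            new Offer("AcmeBuy", "PurchasingService", Set.of("soap", "tls")),
            new Offer("FastShip", "TransportService", Set.of("tls")),
            new Offer("SlowShip", "TransportService", Set.of("ftp")));
        System.out.println(populate(roles, offers).orElse(Map.of()));
    }
}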
An open binding provides a constellation where distribution transparencies, transactionality support elements and security levels are selectable, and where a management interface stays available for the users of the binding [START_REF] Fitzpatrick | Supporting adaptive multimedia applications through open bindings[END_REF]. For the enactment phase, Pilarcos does not include a business process execution engine; instead, business services are active agents able to independently trigger business process actions on each other. As the capabilities of the technical software supporting the service may be wider, policies and monitoring are needed to restrict that behaviour to the contracted or enterprise-wide accepted limits. Naturally, contracted behaviour limits are monitored only in the scope of external processes. Monitoring is enhanced towards the business-level NFPs [START_REF] Ruokolainen | 21: Framework for managing features of open service ecosystems[END_REF]. Breach detection is designed to allow immediate resolution to take place, although each collaboration contract may have differently designed breach recovery processes captured in the eContract. Conceptual connectivity is one of the cornerstones of Pilarcos development. For consistency enforcement, the ecosystem repositories are governed by several ontologies or heaps of metamodel hierarchies [START_REF] Ruokolainen | Modelling framework for interoperability management in collaborative computing environments[END_REF][START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]. The purpose is to connect innovation-time and enactment-time concepts together, and thus ensure that there are no conceptual misunderstandings caused by the change from modeling-team tools to enactment-time monitors. The conceptual connectivity facilities are based on a conceptual analysis that captures the key concepts and processes required by all ecosystems, and a methodology for creating tailored, evolvable ecosystems for a certain business domain [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]. Discussion Our experiments on developing the Pilarcos ecosystems have produced some opinions on the direction of future work and standards, and on the requirements for the scientific base of the field. The CINCO group mission is to develop a mature, dynamic ecosystem architecture for protecting organisations from interoperability mistakes and from future needs of major collaboration platform changes, and for supporting easy innovation in multiple, governed inter-enterprise environments. The Pilarcos contributions address the key problems in inter-enterprise computing: i) ad-hoc engineering and integration (can be within tools); ii) insecure and misplaced decision-making; and iii) missing control and governance of the systems composed. We expect mature open service ecosystems to provide -ecosystem infrastructure with management functionality involving embedded model verification and validation; -private and public decision-making points that address the needs of business stakeholders and can be policy-driven but allow intervening; -scalability through automation for breach detection and limiting of misbehaviour of the ecosystem members (incl. 
trust, privacy); -systematic support and automation of collaboration lifecycle management involving autonomous parties; -enhanced safety/correctness of collaboration lifecycles based on eContracts and the underlying metamodel hierarchy; and -a subjective, relaxed view of business transactions. Open service ecosystems provide an environment in which enterprises (or organisations, even individuals) can easily pick a collaboration model, find potential partners beyond their normal strategic networks, and manage the lifecycle of the dynamic collaboration. In order to adopt and trust ecosystem services, these enterprises need to preserve their autonomy and to gain an understanding of how, and to what extent, the ecosystem services can ensure the correctness of collaborations. Correctness of collaborations can intuitively be connected to freedom from deadlocks and livelocks, fairness, consistency of the partners' views on the state of affairs at each milestone of the collaboration, and conformance of the collaboration behaviour to the subjectively set policy requirements for the collaboration. Only some of these properties can the enterprises enforce themselves, while the others must be produced by collaborative engineering and control functionalities at the ecosystem or the collaboration level. Furthermore, for the sustainability of the ecosystems it is essential that the evolution of the ecosystem is supported and that the facilities provided by the ecosystem are scalable in terms of ecosystem membership and service numbers, and in their capacity to learn from experience. A future challenge is to appropriately interface organisational processes with the ecosystem agents. Existing standards in the field are limited to singular business domains where tailored dictionaries and processes, or technological solutions, are defined. In addition, trials of description languages for services and business processes have been made, leading largely to the same expressive power. Inclusion of semantic annotations still does not resolve the needs of composability; a more systemic and, unfortunately, rather complex solution is required. This requires the courage to view all elements of the ecosystems at the same time. Some of the major cornerstones lie within the conceptual metamodeling hierarchy, which is not prone to standardisation as such, but more likely to appear in the form of ecosystem engineering methodologies [START_REF] Ruokolainen | A Model-Driven Approach to Service Ecosystem Engineering[END_REF]. In order to choose appropriate candidate areas for standardisation, we must understand the evolution path of large systems. At present, this field has seen individual environments built, focused on one or two of the essential viewpoints. In addition, interoperability solutions based on tools and modeling have been tried; much of the present EU-level research is working on problems of model and tool interoperability. This provides for collaboration management from a design perspective. The next major step is to push the capabilities generated by these tools into a common infrastructure in a generic form. This provides for ecosystem engineering and for intertwining the above-discussed perspectives. The new standards should not hinder this step; the standards should be chosen from areas supporting both the ecosystem engineering and the collaboration management tiers. It is likely that those standards may indicate different maturity levels for systems in different phases. 
Good standard candidates include contract structuring, domain-specific collaboration modeling methods, innovation support, and an open binding infrastructure to mature ESBs. The field is multidisciplinary in nature, which causes debates on the science base and on research and evaluation methodologies. This interdisciplinary nature forces us towards solutions constructed from elements of more than one scientific field. Considering our work with Pilarcos, we have applied several underlying computer science disciplines (such as extended state machines and protocol verification, even coloured Petri nets lately; patterns of reflective systems to control software artefacts with models; multi-agent technologies and type disciplines), complex adaptive systems theories, speech act theories with multi-agent systems, and basics of business, economy and psychology. An essential goal in our work is that the ecosystem innovation and collaboration management environment should protect the engineers, and especially the business-oriented users of the ecosystem facilities, from most of these scientific considerations. Complexity should be hidden within the tools and methodologies, and the metainformation hierarchies built so that the complex rules are taken into consideration automatically, under the guidance of expert systems. Considering the evaluation methods, the situation is equally complex. With Pilarcos, we started from constructive research, building prototypes, measuring their performance and balancing the cost against the achieved automation. We also had ATAM-like discussions with collaborating companies for validating the question setting, the focus of interest, and thresholds for adoption. On the side, we made threat analyses to understand security weaknesses of the introduced services. For the architecture as a whole we have done ODP-based modeling of major parts, giving concepts and processes a formalism beyond functional verification. Recently some group members have taken the direction of design science. In addition, we have been pondering whether there should be a basic benchmark defined for trust management and ecosystem governance [START_REF] Ruohomaa | Behavioural evaluation of reputation-based trust systems[END_REF]. Researchers can only make sure to recruit, for each project, people with solid research skills in different supporting disciplines and evaluation processes, and furthermore to ensure efficient cooperation despite the seemingly different goals. More importantly, researcher educators and funders should recall that overly multidisciplinary groups do not provide sufficient support for most researchers. Unfortunately, many of the current research financing instruments fail by requiring too much simultaneous multidisciplinary work, which leads to progress by additional detail where structural innovation is needed. Fig. 1. An overview of the Pilarcos open service ecosystem. Acknowledgements This paper draws from work in the CINCO group (Collaborative and Interoperable Computing, http://cinco.cs.helsinki.fi/) at the University of Helsinki. Over the years, the group has been funded by various company-related projects through TEKES, and the Academy of Finland.
51,383
[ "1002379" ]
[ "50895" ]
01474215
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474215/file/978-3-642-36796-0_4_Chapter.pdf
Yanyan Han Lei Wu Shijun Liu Xiangxu Meng An Interoperability Points Based Interoperability Approach for SaaS applications Keywords: enterprise interoperability, SaaS, interoperability point SaaS applications have been widely adopted, especially by small and medium enterprises. At the same time, the features "multi-tenancy" and "loosely coupled" bring new challenges to enterprise interoperability. On the basis of the layered interoperability model, the paper presents an approach based on interoperability points to implement interoperation between SaaS applications in the service layer. After carrying out the interoperability point matching algorithm, the intermediary Enterprise Service Bus (ESB) performs dynamic selection of interoperability points dictated by Quality of Service (QoS) attributes. On the premise of a comprehensive consideration of the functional and non-functional preferences and constraints, dynamic interoperation between SaaS applications is realized. Finally, this paper shows a case study of applying the interoperability approach. Introduction In the current industrial and economic context, enterprises should be capable of seamlessly interoperating with other enterprises across organizational boundaries to gain more benefits. Enterprise Interoperability (EI) has therefore become an important area of research to ensure the competitiveness and growth of enterprises [START_REF] Charalabidis | Enterprise Interoperability Research Roadmap[END_REF]. At the same time, SaaS (Software as a Service) [START_REF] Dubey | Delivering Software as a Service[END_REF] has been widely accepted as a popular way to carry out software service delivery. SaaS applications have been adopted by more and more business partners, especially by small and medium enterprises (SMEs). Software delivered in a SaaS model no longer runs exclusively for one customer at the customer's premises but supports multiple tenants over the Internet, which is called "multi-tenancy". Enterprises that once accomplished their business through the interaction of traditional on-premise software must now face interoperability issues between SaaS applications hosted anywhere. The "loosely coupled" feature means that the interoperability bridge between two SaaS applications must be services with standard interfaces. These two features are exactly the two main challenges of interoperability between SaaS applications [START_REF] Liu | Dynamic Interoperability Between Multi-Tenant SaaS Applications[END_REF]. In this paper, we focus especially on a new framework and approach to implement interoperation between SaaS applications in the service layer. In our proposed framework, a SaaS application which wants to interoperate with other SaaS applications should expose a standardized web service interface as an interoperability point, which acts as a source interoperability point. After searching among other interoperability points according to the basic attribute constraints analyzed from the interoperation request, we can gain several related interoperability points. On the basis of an interoperability point matching strategy, we put forward an interoperability point matching algorithm. The algorithm takes the operation interfaces of the related interoperability points as input, and produces a set of target interoperability points sorted by matching degree. 
The intermediary ESB performs dynamic selection of these target interoperability points dictated by QoS attributes and obtains the optimum interoperability point to interoperate with. Dynamic interoperation between SaaS applications is finally realized. The remainder of the paper is organized as follows: Section 2 introduces the previous work on the layered interoperability model as well as the current state of research on enterprise interoperability. In section 3, we present an overview of the interoperability framework in the service layer and its main components, which is followed by section 4 that describes the process of interoperability point discovery. Section 5 discusses the implementation of dynamic interoperation based on ESB. Section 6 presents a case study. Finally, conclusions and future work directions are shown in the last section. Related Work Researchers have presented many initiatives concerned with the elaboration of an enterprise interoperability framework. Kassel [START_REF] Kassel | An Architectural Approach for Service Interoperability[END_REF] presents some foundations for introducing a decision support model into a model-driven interoperability architecture for services. Arafa et al. [START_REF] Arafa | A High Level Service-Based Approach to Software Component Integration[END_REF] set out a framework for a high-level approach to software component integration. In another work, Yang et al. [START_REF] Yang | a1.: Data Service Portal for Application Integration in Cloud Computing[END_REF] provide a novel service and data management platform called DSP (Data Service Portal) that facilitates the integration of applications by sharing their information in a loosely coupled manner. Other significant pieces of work, such as the LISI approach [START_REF]The C4ISR Architecture Working Group (AWG) (CAWG): Levels of Information Systems Interoperability[END_REF], the IDEAS interoperability framework [START_REF]IDEAS: Interoperability Development for Enterprise Application and Software Roadmaps[END_REF] and the European Interoperability Framework (EIF) [START_REF]EIF: European Interoperability Framework[END_REF], aim at different concerns. In previous work, we designed an approach to develop SaaS applications and implemented a service-based collaboration supporting platform (New Utility platform & Tools for Service, Nuts) [START_REF]Nuts[END_REF] to deliver them to enterprises. With the ultimate aim of providing means to resolve all kinds of interoperability challenges that may hamper the effective usage of SaaS applications in supporting enterprise collaborations, we have given the definition of the "layered interoperability model" [START_REF] Liu | A Process Interoperability Method for SMEs[END_REF]. The interoperability of independent SaaS applications must be implemented from the UI layer down to the data layer underneath. The enterprise interoperability framework is therefore designed as a layered model with 5 layers: the data layer, service layer, process layer, business layer and presentation layer. A modified Widget model is used to implement interoperation in the presentation layer. Service layer interoperability refers to discovering and composing different kinds of application functions or services so that they collaborate well; it is the core of the five interoperability layers. The goal of interoperability in the process layer is to make various processes work together. 
Interoperability at the business layer is considered from the standpoint of the organization and company, and it deals with the interoperation barriers caused by diverse business rules, policies, strategies, legislation and culture. Business layer interoperability is established by a negotiation mechanism and monitoring facilities, which make use of a federated form of interoperability. A data synchronization toolkit and a message engine are implemented to address the integration issues in the data layer. Interoperability within the same layer is interconnected by two or several interoperability points. The interoperability point is defined as an interface between two interoperability entities and has different forms in different interoperability layers. To implement interoperation between two SaaS applications, we should detect and define the interoperability points for different SaaS applications in different layers. Focusing on the interoperability approach for SaaS applications in the service layer, this paper outlines an interoperability framework and gives the formal definition of the interoperability point in the service layer. 3 The Interoperability Framework in the Service Layer In the subsections below, we give a brief overview of the key components in the framework as shown in figure 1. Fig. 1. The interoperability framework in the service layer. Web Service Registry This module mainly includes two components: 1. Register Interface Web Service (WS) technologies have rapidly become the de facto standard to expose the functions of business applications. The ISVs (independent software vendors) package and publish the business functional modules as web services before registering them in the web service registry on the platform. The platform automatically extracts the service metadata information from the given WSDL, such as service name, operations, input/output parameters, etc. On this basis, the ISVs need to add some other service attributes to complete the registration, such as service description and service type. Our Web Service formalization definition is given below: Definition 1 (Web Service): A Web Service is a tuple WS = (SA, OPs), where: SA denotes the public attributes of the service, including service name, service description and service type; OPs is a finite set of operations, and for every OP∈OPs, OP = (Oname, Ins, Os), where Oname is the operation name, Ins represents the input parameters of OP, and Os represents OP's output parameters; for each I∈Ins, O∈Os, I = (Ipname, Iptype), O = (Opname, Optype). Ipname and Opname are respectively the input parameter name and the output parameter name; Iptype and Optype are the input parameter type and the output parameter type. The web service registry realizes service classification, standards and specifications for the service description, and enhanced service discoverability. Through the unified classification standard, services registered in the web service registry can be searched by name or other constraints. 2. Expose Interface The interoperability point is the interface between two interoperability entities, namely two SaaS applications. In order to fully realize interoperation among SaaS applications, we should define interoperability points formally and analyze the procedure of searching, matching and selection of interoperability points. A SaaS application can selectively expose a registered web service as an interoperability point. Only through this exposure operation can the SaaS application interoperate with other SaaS applications. 
The interoperability point not only inherits all the attributes of the web service, but also appends several new attributes, such as enterprise attributes, QoS attributes and a URI. The enterprise attributes can be used as one of the conditions for interoperability point searching. By identifying the enterprise attributes, SaaS applications can interoperate with the SaaS applications of related enterprises. Interoperation between SaaS applications should be based on mutual trust, and in some cases SaaS applications only wish to interoperate with their partner enterprises' SaaS applications. The QoS attributes can be updated by the monitor on the Nuts platform in real time and can be used as the basis for the ESB-based dynamic selection among target interoperability points. The URI uniquely identifies the interoperability point and serves as the entry point for the interoperation call. The formal definition of an Interoperability Point: Definition 2 (Interoperability Point): An Interoperability Point is a tuple IP = (SA, OPs, QoS, EA, URI). It inherits all the attributes of a Web Service; QoS, EA and URI are three new attributes, where: QoS denotes the QoS attributes, including response time, reliability and usability; EA denotes the enterprise attributes; URI uniquely identifies an interoperability point and serves as the entry point for the interoperation call. Interoperability Proxy The interoperability proxy is responsible for interoperability point discovery. Similar to web service discovery, interoperability point discovery in this paper refers to obtaining target interoperability points which both satisfy the users' basic attribute constraints and match the source interoperability point according to the operation interface constraints. The proxy briefly includes the following components: 1. Listener Component 1) Listening for interoperation requests This component carries out the analysis of the interoperation request and obtains the basic attribute constraints of interoperability points, such as service name, service type, enterprise attributes and so on. The formal definition of an Interoperation Request: Definition 3 (Interoperation Request): IR = (SN, SD, ST, EA, w), where: SN: service name; SD: service description; ST: service type; EA: enterprise attributes; w: the threshold value of the matching degree between interoperability points. 2) Listening for fresh exposure of interoperability points The framework is also able to support run-time interoperability point discovery. The listener component can dynamically discover new interoperability points exposed by SaaS applications. According to the current interoperation request, it determines whether the new interoperability points can be used as new target interoperability points. Searching Component Among a large number of interoperability points, discovering the target interoperability points rapidly, accurately and efficiently is a tough problem. In order to reduce the time consumption of the interoperability point matching algorithm, we divide the process of interoperability point discovery into two phases, namely the searching phase and the matching phase. In the searching phase, the proxy obtains several related interoperability points by querying according to the basic attribute constraints in the interoperation request. An operation interface matching algorithm is then applied to the related interoperability points in the next step. 
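Read together, Definitions 1-3 amount to plain data structures. The sketch below renders them in Java for illustration; the concrete field types, and in particular the numeric shape chosen for the QoS attributes, are assumptions rather than something prescribed by the paper.

import java.util.List;

// Sketch of the tuples in Definitions 1-3 as plain data types (illustrative only).
record Parameter(String name, String type) {}

record Operation(String name, List<Parameter> inputs, List<Parameter> outputs) {}

// Definition 1: WS = (SA, OPs)
record WebService(String name, String description, String type, List<Operation> operations) {}

// Assumed concrete shape for the QoS attributes mentioned in the paper.
record QoS(double responseTimeMs, double reliability, double usability) {}

// Definition 2: IP = (SA, OPs, QoS, EA, URI) - inherits the web service attributes.
record InteroperabilityPoint(WebService service, QoS qos, String enterpriseAttributes, String uri) {}

// Definition 3: IR = (SN, SD, ST, EA, w)
record InteroperationRequest(String serviceName, String serviceDescription, String serviceType,
                             String enterpriseAttributes, double matchingThreshold) {}

class DefinitionsDemo {
    public static void main(String[] args) {
        Operation query = new Operation("PurchasePlanQuery",
                List.of(new Parameter("planId", "string")),
                List.of(new Parameter("planList", "PlanList")));
        WebService ws = new WebService("SBM", "supply business management", "planning", List.of(query));
        InteroperabilityPoint ip = new InteroperabilityPoint(ws,
                new QoS(120.0, 0.99, 0.95), "TenantA", "http://example.org/sbm/PurchasePlanQuery");
        InteroperationRequest ir = new InteroperationRequest("PlanOptimize", "optimize purchase plans",
                "planning", "TenantA", 0.7);
        System.out.println(ip.uri() + " threshold=" + ir.matchingThreshold());
    }
}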
This two-phase strategy can effectively filter out the irrelevant interoperability points, reduce the input range of the matching algorithm and improve the efficiency of the algorithm. Matching Component To enable interoperability points to interact with each other seamlessly, how to design the interface matching algorithm is key. We put forward a matching algorithm for the operation interfaces of interoperability points. On the basis of the related interoperability points obtained from the preceding searching phase, we can derive a set of target interoperability points ranked according to the matching degree. A number of different business processes will be formed after invoking the matching algorithm. The same web service exposed by different SaaS applications may become different interoperability points which have the same web service attributes. For example, the interoperability points IP5, IP6 and IP7 in figure 2 are exposed from the same web service WS2, but they belong to different SaaS applications. At the same time, the same SaaS application may deploy multiple instances, so there may also exist interoperability points possessing the same web service attributes. For example, the interoperability points IP1, IP2 and IP3 in figure 2, which belong to different instances of the same SaaS application, also possess the same web service attributes. Target interoperability points which have the same web service attributes possess the same matching degree after matching with the source interoperability point, so the searching and matching process can be omitted for them. Meanwhile, they generate the same business process, and the user can choose according to their actual needs as well as the obtained matching degree. As shown in figure 2, after interoperability point searching and matching, two processes have been generated: IP0->{IP1, IP2, IP3, IP4}; IP0->{IP5, IP6, IP7}. After the selected process is performed, the ESB performs dynamic selection of these target interoperability points dictated by QoS attributes. ESB routing engine Through the searching and matching performed by the interoperability proxy, we have obtained some target interoperability points which meet the goal of a business process. From the above, we know that there may be multiple target interoperability points, that new interoperability points meeting the request and matching rules may be exposed, and that some target interoperability points may no longer be available or able to respond to the request. In these cases, to obtain a fast response and a high-quality service, we need an intermediary to conduct the dynamic selection and discovery of target interoperability points. ESB is the core and basis of SOA, and one of its core functions is message routing [START_REF] Keen | Patterns:Implementing an SOA Using an Enterprise Service Bus[END_REF]. Message routing mainly refers to the delivery of messages between the request endpoint and the provider endpoint according to certain rules and logic. In addition, ESB supports transport protocol conversion and message format conversion, so applications are able to connect with each other flexibly, regardless of platform and technical differences. Consequently, we use ESB in our framework to determine an optimum interoperability point from the candidates based on the QoS attributes, since we consider the quality of the target interoperability point one of the main concerns. 3.4 The process Using the interoperability framework, the process can be illustrated as follows: 
1. The ISVs package the business functional modules of the SaaS applications, publish them as web services according to the defined rules and norms, determine the service attributes, and register them in the web service registry on the platform. 2. SaaS applications can selectively expose the registered web services as interoperability points. Only through this exposure operation can a SaaS application interoperate with other SaaS applications. 3. The interoperability proxy obtains several related interoperability points by querying according to the basic attribute constraints in the interoperation request. On this basis, according to the operation interface matching rules, target interoperability points and several business processes are obtained after invoking the matching algorithm. 4. After the searching and matching phase, the ESB performs dynamic selection of these target interoperability points dictated by QoS attributes and ultimately obtains the optimum interoperability point. Interoperability Point Discovery Web service discovery is based on web service matching. The functionality provided by web services is accomplished by calling their operations. The operation is the basic functional entity of web services, and every web service comprises a number of operations. Service matching is therefore ultimately reflected in operation matching. We have acquired a set of related interoperability points after the searching phase. In order to further find the target interoperability points that can actually interact with the source interoperability point, we make full use of the service operation structure information provided by the current standard service description language WSDL, establish matching rules and design the interoperability point matching algorithm based on operation interface descriptions. Interoperability Point Matching Rules Each web service has an associated WSDL document describing the service functionality and interface. Every service contains a series of operations, and each operation is described by the messages corresponding to its input and output parameters. The WSDL document describes the name and data type of each parameter in more detail. The main content of the WSDL description document of a web service can logically form a tree structure. As shown in Figure 3, there are four layers in this tree. The root node represents an interoperability point. The nodes in layer 2 represent the operations. The nodes in layer 3 represent the input or output messages. And the nodes in layer 4 represent the parameters of the messages. The input and output parameter types of operations are defined with XML Schema. A parameter type can be either a simple data type or a complex data type. A simple data type needs only a parameter name and an internally defined parameter type such as int or string, so each parameter is presented in the form <name, type>. For a complex data type, model group tags which nest other simple or complex data types are used. We begin with the input and output parameters of operations and match them in three aspects: the number of parameters, the parameter names and the parameter types. When the matching degree calculated by matching the output parameters of a source interoperability point and the input parameters of a related interoperability point in the aforementioned three aspects reaches the user-preset threshold, the two interoperability points match successfully. 
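One way to compute such a three-aspect matching degree is sketched below: the parameter counts must agree, and each output/input parameter pair contributes a weighted combination of name and type similarity. The equal weighting and the crude string-based similarity functions are placeholders for illustration; a semantic dictionary and a proper type classification would replace them in practice.

import java.util.List;

// Hedged sketch: combine parameter-count, name and type matching between the
// outputs of the source operation and the inputs of a candidate operation.
class OperationMatcher {
    record Param(String name, String type) {}

    static double getMatchDegree(List<Param> sourceOutputs, List<Param> candidateInputs) {
        // The parameter counts must be equal, otherwise there is no match at all.
        if (sourceOutputs.size() != candidateInputs.size()) return 0.0;
        if (sourceOutputs.isEmpty()) return 1.0;

        double total = 0.0;
        for (int i = 0; i < sourceOutputs.size(); i++) {
            Param s = sourceOutputs.get(i);
            Param c = candidateInputs.get(i);
            // Combine name similarity and type similarity per parameter pair (equal weights assumed).
            total += 0.5 * nameSimilarity(s.name(), c.name()) + 0.5 * typeSimilarity(s.type(), c.type());
        }
        return total / sourceOutputs.size();
    }

    // Stand-in for a semantic (dictionary-based) similarity between parameter names.
    static double nameSimilarity(String a, String b) {
        if (a.equalsIgnoreCase(b)) return 1.0;
        String la = a.toLowerCase(), lb = b.toLowerCase();
        return la.contains(lb) || lb.contains(la) ? 0.6 : 0.0;
    }

    // Stand-in for a type-compatibility classification.
    static double typeSimilarity(String a, String b) {
        return a.equalsIgnoreCase(b) ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        List<Param> out = List.of(new Param("planList", "PlanList"));
        List<Param> in = List.of(new Param("purchasePlanList", "PlanList"));
        System.out.println(getMatchDegree(out, in)); // 0.8
    }
}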
A related interoperability point that matches successfully can be treated as a target interoperability point. The concrete matching rules are as follows: 1) The number of output parameters of the source interoperability point has to be the same as the number of input parameters of the related interoperability point; this is the precondition for the following matching steps. 2) Simple data type parameters are represented in the form <name, type>, so their matching degree is the combination of the parameter name matching degree and the parameter type matching degree. For parameter names, we can match them according to semantic similarity, for example using the existing WordNet [START_REF] Miller | WordNet: A lexical database for english[END_REF] semantic dictionary. For parameter types, we can refer to the classification method in [START_REF] Yu | Automatic Web Service Composition Based on Interface Matching[END_REF]. 3) Complex data type parameters nest other simple or complex data type parameters, so we implement the algorithm recursively. Interoperability Point Matching Algorithm The following are the main matching algorithms.
Algorithm 1 getTargetIPs
Input: SIP, the source interoperability point; RIPs, the set of related interoperability points; w, the threshold value of the matching degree
Output: TIPs, the set of target interoperability points
Set sp as the operation of the source interoperability point;
Set OPs as the set of target operations in target interoperability points;
Set MD = 0;
For each interoperability point RIP in RIPs {
  For each operation p ∈ RIP {
    MD = getMatchDegree(sp, p);
    If (MD > w) {
      RIP.OPs.add(p);
      If (RIP is not in TIPs) TIPs.add(RIP);
    }
  }
}
Algorithm 1 matches the operations of the source interoperability point with all the operations of the related interoperability points. An interoperability point whose calculated matching degree is greater than the user-preset threshold value is added to the set of target interoperability points. Algorithm 1 calls Algorithm 2 to calculate the matching degree between operations. Algorithm 2 is used to calculate the matching degree between the operations of interoperability points. First it judges whether the number of parameters is the same; then the operation matching degrees for simple data types and complex data types are calculated respectively. The returned value is used in Algorithm 1. Algorithm 2 getMatchDegree The calculation of the matching degree of parameter names and parameter types is not the emphasis of this paper, so no more is said about it here. ESB-based Dynamic Interoperability On the basis of functional matching, target interoperability points must also guarantee a certain level of quality. So we use QoS attribute information, such as response time, reliability and usability, as the basis for the dynamic selection of target interoperability points. The NUTs platform provides a monitor which can update the QoS attribute information of interoperability points in real time, and the monitor can select the interoperability point with optimum performance according to certain rules. We need an intermediary to receive the request messages and route them to the target interoperability points. ESB implements message routing that receives and dispatches messages from the source to the target. In addition, ESB performs transport protocol conversion and message format transformation. Among several ESB implementations, we choose Mule and integrate it into our interoperability framework to realize the interoperation between SaaS applications. 
Web Service Proxy is one of the most common scenarios in ESB and also one of the four patterns of Mule. There are three components in the Web Service Proxy, as shown in Figure 4. MessageSource A MuleMessage is received or created by the MessageListener. For example, if the DefaultInboundEndpoint is adopted as the MessageSource, SOAP messages will be received from the socket. OutboundEndpoint It is in charge of receiving and distributing messages. AbstractProxyRequestProcessor It is responsible for handling the MuleEvent and rewriting WSDL addresses. There are two implementation classes, StaticWsdlProxyRequestProcessor and DynamicWsdlProxyRequestProcessor respectively. In this way we obtain the optimum interoperability point's address based on the QoS analysis and dynamically add an outbound endpoint with the new address. Mule can then transfer the request messages to the optimum interoperability point. A Case Study This section demonstrates the features of our interoperability framework by referring to an example. On the NUTs platform, there exists a large number of SaaS applications. Many SaaS applications expose the standardized web service interfaces uniformly registered by ISVs as interoperability points. For example, there are two SaaS applications on the delivery platform: one is the supply business management system (SBM) and the other is Advanced Plan Optimization (APO). Several organizations rent these SaaS applications and maintain their own instances. We can observe from figure 4 that SBM_A, APO_B and APO_C are three typical SaaS applications which expose some web service interfaces as interoperability points. If Tenant A, which rents SBM_A, wants to optimize the resulting plan list queried by PurchasePlanQuery, it can put forward an interoperation request. PurchasePlanQueryA is then treated as a source interoperability point, and three target interoperability points are identified after the searching and matching process. Two different business processes, "Purchase Plan Query->Supply Forecast" and "Purchase Plan Query->Plan Optimize", will be presented to Tenant A. Tenant A should choose one of the business processes based on its own preferences. Then the ESB will dynamically select interoperability points and simultaneously perform transport protocol conversion and message format transformation. The Inbound, which serves as the request client of Mule, receives the request messages. The Web Service Proxy receives not only the request message but also the optimum interoperability point address selected by the monitor on the Nuts platform. The Web Service Proxy creates a dynamic endpoint and rewrites the OutboundAddress as the new endpoint address. When the call is triggered, Mule delivers the request message to the optimum interoperability point, and the dynamic interoperation between the two SaaS applications is finally realized. Fig. 6. An example process of dynamic interoperation between two SaaS Applications. Conclusion and Future Work The paper presents an approach to implement interoperation between SaaS applications in the service layer. We provide the formal description of the interoperability point and put forward an interoperability point matching algorithm on the basis of an interoperability point matching strategy. After interoperability point matching, the intermediary ESB performs dynamic selection of interoperability points dictated by QoS attributes. 
On the premise of a comprehensive consideration of the functional and non-functional preferences and constraints, we finally realize dynamic interoperation between SaaS applications. In our algorithm, interoperability points are sorted in a particular order, and we need to match each interoperability point with the source interoperability point one by one, exhaustively. The matching algorithm will therefore face efficiency problems when the number of interoperability points reaches a certain order of magnitude. In future work, an index mechanism will be introduced to build a function index of interoperability points, and an index-based matching algorithm will be provided. Fig. 2. The target interoperability point in a different case. Fig. 3. Matching between two interoperability points based on the operation interface descriptions in the WSDL document. Fig. 4. The Web Service Proxy Pattern of Mule. Fig. 5. An example process of interoperability point discovery. Acknowledgment The authors would like to acknowledge the support provided by the National High Technology Research and Development Program of China (2011AA040603, 2012AA040904), the National Key Technologies R&D Program of China (2012BAF12B07), the Natural Science Foundation of Shandong Province (ZR2009GM028, ZR2011FQ031) and the Independent Innovation Foundation of Shandong University (IIFSDU).
29,310
[ "1002388", "1002389", "1002390", "1002391" ]
[ "26886", "26886", "26886", "26886" ]
01474216
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474216/file/978-3-642-36796-0_5_Chapter.pdf
Jorick Lartigau email: [email protected] Xiaofei Xu email: [email protected] Lanshun Nie Dechen Zhan email: [email protected] Similarity evaluation based on intuitionistic fuzzy set for service cluster selection as cloud service candidate Keywords: Cloud manufacturing (CMfg), Service cluster, Cloud service, Intuitionistic fuzzy set (IFS), Artificial bee colony (ABC) Cloud manufacturing (CMfg) provides new opportunities toward servitization, and embeds a set of functional features to enhance the collaboration among various service providers and their resources. The main target is to compose a dedicated manufacturing cloud, encompassing a set of cloud services, to manufacture a requested service. CMfg is a recent concept, but it is already widely spread in academic and industrial research in China. The paper first focuses on the manufacturing environment background to understand its purpose. Thus, as an introduction, the concept of CMfg is discussed. Finally, we present a method based on intuitionistic fuzzy sets for the similarity evaluation between cloud services and service clusters. The objective is to match the best service cluster to provide composite resource services as cloud service candidates. Our method is ABC (Artificial Bee Colony) optimized, and its performance is discussed through experiments. Background Modern manufacturing industries are facing a major change in their organization, driven by unpredictable competition on a worldwide scale [START_REF] Tao | Study on manufacturing grid and its resource service optimal-selection system[END_REF], the emergence of new information technologies, and cloud technology. Indeed, IoT (Internet of Things) / IoS (Internet of Services), Future Internet, Cloud computing and virtualization techniques offer many new possibilities to remodel the manufacturing environment significantly. Meanwhile, during the past two decades, many advanced manufacturing models and technologies have been proposed in order to realize the aim of TQCSEFK (i.e. faster time-to-market, higher quality, lower cost, better service, better environment, greater flexibility, and higher knowledge) for manufacturing enterprises. Typical examples include computer integrated manufacturing (CIM), lean manufacturing (LM), digital manufacturing, agile manufacturing (AM), networked manufacturing (NM), virtual manufacturing (VM), application service provider (ASP), collaborative manufacturing network, industrial product-service system (IPS), manufacturing grid (MGrid), crowd sourcing and supply chain [START_REF] Tao | Cloud manufacturing: a computing and service-oriented manufacturing model[END_REF]. But the modern manufacturing fa Enter High-performance and precision equipment In a world of competition with a wide range of offers, a service demander may give priority to the "best of breed". This phenomenon necessarily stimulates enterprises to invest in better equipment and manufacturing resources to enhance the service or product quality. Such investment can be unreachable for SMEs, especially when a large number of machines and resources is involved. As a result, collaboration between SMEs becomes an undeniable fact for their survival and expansion. On the other hand, enterprises willing to invest might face underutilization of high-performance and precision equipment. From a business view, an interesting option is to offer the use of this equipment on demand, enabling full sharing and opening new business opportunities. 
Enhance the QoS (Quality of Service) through interoperability, collaboration and standardization
As mentioned above, collaboration appears to be one of the main factors of success in modern manufacturing. Along with collaboration come interoperability and standardization challenges. The required manufacturing resources are exchanged among enterprises that employ different standardization strategies [START_REF] Ning | The Architecture of Cloud manufacturing and its key technologies research[END_REF]. In a third-party platform, the core service has to ensure the interoperability and standardization coverage of the resources shared among the service providers. The objective is to enhance the quality of the requested service by involving the best resources and equipment, while maximizing both the collaboration between the service providers and their occupancy.
The emergence of Cloud Computing
Cloud computing is a concept in which computer programs are dispatched to distant servers rather than run on a local server or on the customer's computer. Users are no longer the hosts or managers of the computing services; they can access online services from anywhere without having to manage the often very complex underlying infrastructure. Cloud computing is changing the way industries and enterprises do business, in the sense that dynamically scalable and virtualized resources are provided as a service over the Internet. Enterprises currently employ cloud services to improve the scalability of their services and to cope with resource demands [START_REF] Buyya | Market-Oriented Cloud Computing: Vision, Hype, and reality for delivering it services as Computing Utilities[END_REF]. The concept of cloud computing can be extended to the manufacturing field, providing on-demand services from remote resource service providers. Cloud computing mainly emerged to satisfy the needs of long-term follow-up and service quality [START_REF] Xu | From Cloud computing to Cloud manufacturing[END_REF].
A fuzzy set A in a universe of discourse X = {x_1, x_2, …, x_i, …, x_n} is expressed as:
A = {⟨x, μ_A(x)⟩ | x ∈ X} (1)
where μ_A : X → [0,1] is the membership function of A and μ_A(x) ∈ [0,1] is the membership of x ∈ X in A. The notion of Intuitionistic Fuzzy Set (IFS) was introduced as a generalization of the notion of fuzzy set [START_REF] Atanassov | Intuitionistic Fuzzy Sets[END_REF]. Let X = {x_1, x_2, …, x_i, …, x_n} be a fixed set of cardinality n. An IFS A is expressed as:
A = {⟨x, μ_A(x), ν_A(x)⟩ | x ∈ X} (2)
where μ_A : X → [0,1] and ν_A : X → [0,1] are respectively the membership and the non-membership functions of A, μ_A(x) ∈ [0,1] being the membership degree of x ∈ X in A and ν_A(x) ∈ [0,1] the non-membership degree of x ∈ X in A. The degree of indeterminacy [START_REF] Tao | Resource service optimal-selection based on intuitionistic fuzzy set and non-functionality QoS in manufacturing grid system[END_REF] of x to A is naturally introduced as:
π_A(x) = 1 − μ_A(x) − ν_A(x) (3)
If π_A(x) = 0 for all x ∈ X, then the IFS A reduces to a fuzzy set; otherwise π_A(x) > 0 and an indeterminacy occurs for the element x. The similarity between two IFSs A and B is defined as [START_REF] Xu | Some similarity evaluations of intuitionistic fuzzy sets and their applications to multiple attribute decision making[END_REF]:
S(A, B) = 1 − (1/2n) Σ_{i=1..n} ( |μ_A(x_i) − μ_B(x_i)| + |ν_A(x_i) − ν_B(x_i)| ) (4)
with S(A, B) ∈ [0,1] expressing the similarity degree.
Environment Definition
A service to manufacture S is a set of cloud services S = {CS_1, CS_2, …, CS_j, …, CS_m}, with j = 1, 2, …, m and CS_j ∈ S. Each cloud service is defined by a set of inputs I and a set of outputs O; both inputs and outputs can aggregate functional and non-functional parameters. The proposed approach is therefore fully customizable and scalable to any type of cloud service. 
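To make eq. (4) concrete, the following is a minimal Python sketch (not from the paper) of the similarity measure between two IFSs represented as lists of (membership, non-membership) pairs; the variable names and example values are illustrative only.

```python
# Minimal sketch: the IFS similarity of eq. (4) between two intuitionistic
# fuzzy sets given as lists of (mu, nu) pairs over the same universe.

def ifs_similarity(a, b):
    """a, b: lists of (mu, nu) pairs of equal length n."""
    if len(a) != len(b) or not a:
        raise ValueError("IFSs must be non-empty and defined over the same universe")
    n = len(a)
    total = sum(abs(ma - mb) + abs(na - nb) for (ma, na), (mb, nb) in zip(a, b))
    return 1.0 - total / (2.0 * n)

# Example: two IFSs over a universe of three elements.
A = [(0.8, 0.1), (0.5, 0.3), (0.2, 0.6)]
B = [(0.7, 0.2), (0.6, 0.3), (0.1, 0.7)]
print(ifs_similarity(A, B))  # S(A, B) in [0, 1], here about 0.917
```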
Thus, an IFS is introduced to characterize the degree of membership of the elements of the input set I within a given service cluster SC_k, based on eq. (2):
A_I(SC_k) = {⟨x, μ_{SC_k}(x), ν_{SC_k}(x)⟩ | x ∈ I} (5.a)
and a second IFS to characterize the membership of the elements of the output set O to SC_k:
A_O(SC_k) = {⟨y, μ_{SC_k}(y), ν_{SC_k}(y)⟩ | y ∈ O} (5.b)
The objective is to evaluate, element by element, the similarity between a given cloud service and a set of eligible service clusters within the same domain (e.g. power supply, journal bearing). Therefore, we consider SC = {SC_1, SC_2, …, SC_k, …, SC_p} the set of service clusters associated with the same domain as the cloud service.
Membership, Non-membership and Indeterminacy functions generation
The membership and non-membership functions μ and ν can be obtained through different methods, e.g. consulting specialists, using predefined membership functions, or sorting the membership functions automatically [START_REF] Genari | Similarity evaluations based on fuzzy sets. Federal university of Uberlandia, Brazil 16[END_REF]. In our case they express the degree of optimization and ownership of a given set of inputs or outputs with respect to given manufacturing resources. The advantage is to define the domain of capabilities and optimization for the whole set of candidate service clusters according to the cloud service definition. For the sake of simplicity, we consider the intervention of several specialists to evaluate the membership of an element. We therefore set up an evaluation matrix for the membership and non-membership function generation process for a given cloud service, its set of inputs I and its set of outputs O, where n_{i,k}^+ is the number of positive evaluations for the membership of the element x_i in the IFS associated with the service cluster SC_k, n_{i,k}^- the number of negative evaluations (non-membership), and n_{i,k}^0 the number of indeterminacies, e.g. specialists who did not evaluate the membership of the parameter x_i. By analogy, we set up the functions μ and ν from these counts.
Similarity evaluation between a cloud service and a service cluster
We propose the following framework (Fig. 5) to illustrate the similarity evaluation process between a given cloud service and a service cluster SC_k. For the sake of readability, we only consider the set of inputs I. Our approach is to evaluate the similarity separately (heuristic approach) between all the elements of I and SC_k, i.e. between the membership and non-membership functions of the elements in I and those defined for SC_k. The similarity evaluation for each element x_i is defined using eq. (4). Finally, the overall similarity between the two IFSs is linearized and computed as:
S(CS_j, SC_k) = Σ_i w_i · S_i
with w_i the weight associated with the importance of the element x_i, w_i ∈ [0,1] and Σ_i w_i = 1.
To avoid browsing all eligible service clusters exhaustively, the selection is optimized with the ABC (Artificial Bee Colony) algorithm, which mimics the foraging behaviour of a bee colony organized in three groups: (a) Employed Bees, each exploiting a food source; (b) Onlooker Bees, who position themselves on food sources presenting a higher nectar amount; and (c) Scout Bees, who search to discover new food source areas. The route of the algorithm is defined as [START_REF] Karaboga | A comprehensive survey: artificial bee colony (ABC) algorithm and applications[END_REF]:
1 Initialization; set the cycle counter
2 Repeat
3 Place the employed bees on their food sources
4 Place the onlooker bees on the food sources depending on their nectar amount
5 Send the scouts to search new areas for new food sources
6 Memorize the best food source found so far
7 Until cycle = MCN
Here the food source represents a possible solution, which in our case is an eligible service cluster, and the nectar amount represents the fitness, which is related to the similarity of the service cluster to a given cloud service. An onlooker bee selects its food source according to the probability value associated with that food source:
p_i = fit_i / Σ_{k=1..P} fit_k
with P the population size, equal to the number of possibilities, and fit_i the fitness value of solution i. In our case the fitness of an eligible service cluster is derived from its similarity to the given cloud service (eqs. (12) and (13)); the objective is thus equivalent to minimizing the dissimilarity for every cloud service and every eligible service cluster. In ABC, as the search approaches the optimal solution in the given population of service clusters, the search area is adaptively reduced. There are three control parameters that set up the search and evaluation environment: (a) limit, the number of cycles during which each bee searches for better food sources in its neighbourhood; if the fitness has not improved by then, the food source is abandoned; (b) NP, the colony size (employed bees + onlooker bees); (c) MCN (Maximum Cycle Number), the number of times the foraging sequence is repeated. These parameters are set arbitrarily and can influence the performance of the algorithm significantly, but as an advantage, ABC has only three of them [START_REF] Anandhakumar | Modified ABC Algorithm for Generator Maintenance Scheduling[END_REF]. 
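As an illustration of the two building blocks above, the sketch below derives (μ, ν, π) for one element from specialist vote counts and computes the onlooker-bee selection probabilities. The normalization by the total number of specialists is an assumption on our part, since the paper's evaluation matrix is not fully recoverable from the extracted text.

```python
# Hedged sketch: deriving (mu, nu, pi) for one element of a service cluster's IFS
# from specialist evaluations, and the onlooker-bee selection probabilities.
# Dividing by the total number of specialists is an assumed normalization.

def ifs_degrees(n_pos, n_neg, n_indet):
    """Return (membership, non-membership, indeterminacy) from vote counts."""
    total = n_pos + n_neg + n_indet
    if total == 0:
        raise ValueError("at least one specialist evaluation is required")
    mu, nu = n_pos / total, n_neg / total
    return mu, nu, 1.0 - mu - nu          # eq. (3): pi = 1 - mu - nu

def onlooker_probabilities(fitness):
    """p_i = fit_i / sum(fit), the food-source selection rule used by onlooker bees."""
    s = sum(fitness)
    return [f / s for f in fitness]

# Example: 7 specialists evaluated one input parameter for a cluster:
# 5 positive votes, 1 negative vote, 1 abstention.
print(ifs_degrees(5, 1, 1))                      # (0.714..., 0.142..., 0.142...)
print(onlooker_probabilities([0.9, 0.6, 0.3]))   # probabilities summing to 1
```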
Performance Evaluation
We evaluate the performance of our ABC-optimized method against the same similarity evaluation method using LP (linear programming), which browses all the possible solutions and therefore identifies the optimal one. The LP method, consisting in browsing all the solutions to match the optimal one, always yields the best possible fitness. As the problem size increases, however, our ABC-optimized similarity evaluation shows a growing distance between its best fitness and the optimal fitness (Fig. 7). The discrepancy is linked to the number of cycles (MCN). Since ABC is based on a probabilistic selection process, it is impossible to define a guaranteed value of MCN from the problem inputs. Nevertheless, the CMfg system can train a neural network aiming to define the best MCN. Of course, MCN has a strong influence on the computational time. Therefore, the advantage of our ABC-optimized similarity evaluation is that it can trade off computational time against the quality of the fitness evaluation, according to equipment restrictions and/or fitness quality requirements.
Conclusion & Future work
The research work presented in this paper proposes a method to evaluate the similarity between service clusters and cloud services, in order to match the service cluster with the highest similarity value according to a set of definitions (inputs and/or outputs). The ABC optimization offers satisfactory computational time for the proposed method. However, this method has to be considered within the wider problem of cloud service composition. As mentioned, this process represents the core value of the CMfg system, enabling the creation of services and innovations. Therefore, the CMfg system has to select the best matches with respect to the functional parameters of the cloud services to manufacture, and also to non-functional parameters such as QoS, to satisfy the demander's requirements. To this end, our method represents an entry point to the whole composition process, enabling the selection of the best service cluster. A strategy for evaluating composite resource services within the same service cluster must then be established. Since our method is mainly designed to ensure the coordination of the requested inputs and outputs for a given cloud service, a more QoS-aware strategy will be a relevant point to consider. 
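As a wrap-up illustration of the selection step summarized above, the following hedged sketch picks, by exhaustive (LP-style) search, the service cluster whose IFS profile is most similar to a cloud service definition, using the weighted per-element similarity. The cluster names, elements, weights and values are invented here, and the paper relies on ABC rather than exhaustive search for larger problems.

```python
# Hedged end-to-end sketch: choose the service cluster most similar to a cloud
# service definition. Data values are illustrative; exhaustive search stands in
# for the ABC optimization used in the paper for large candidate sets.

def weighted_similarity(service, cluster, weights):
    """service, cluster: dicts element -> (mu, nu); weights: element -> w, sum(w) = 1."""
    s = 0.0
    for elem, w in weights.items():
        (ms, ns), (mc, nc) = service[elem], cluster[elem]
        s += w * (1.0 - (abs(ms - mc) + abs(ns - nc)) / 2.0)   # eq. (4) for one element
    return s

service = {"power": (0.9, 0.05), "precision": (0.7, 0.2)}
clusters = {
    "SC1": {"power": (0.8, 0.1), "precision": (0.6, 0.3)},
    "SC2": {"power": (0.4, 0.5), "precision": (0.9, 0.05)},
}
weights = {"power": 0.6, "precision": 0.4}

best = max(clusters, key=lambda k: weighted_similarity(service, clusters[k], weights))
print(best, weighted_similarity(service, clusters[best], weights))  # SC1 0.915
```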
Fig. 6. Provider / Customer relationship on a servitization scale
Acknowledgement
This work has been partly funded by the MOST of China through the Project Key Technology of Service Platform for Cloud Manufacturing. The authors wish to acknowledge MOST for their support. We also wish to express our gratitude and appreciation to all the Project partners for their contribution during the development of the various ideas and concepts presented in this paper.
15,944
[ "1002392", "1002381", "1002393", "1002394" ]
[ "380001", "380001", "380001", "380001" ]
01474217
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474217/file/978-3-642-36796-0_6_Chapter.pdf
Viet Duc Bui Maria Eugenia Iacob Marten Van Sinderen Alireza Zarghami email: [email protected] Cape Groep Achieving flexible process interoperability in the homecare domain through aspect-oriented service composition Keywords: process interoperability, process flexibility; aspect-oriented service composition, homecare services, orchestration In elderly care the shortage of available financial and human resources for coping with an increasing number of elderly people becomes critical. Current solutions to this problem focus on efficiency gains through the usage of information systems and include homecare services provided by IT systems. However, the current IT systems that integrate homecare services have difficulties in handling the user-context dynamicity and the diversity of needs and preferences of care-receivers. This makes the available homecare services hardly interoperable at the process level, particularly due to the lack of support for process flexibility. In this paper, we present an approach capable of dealing with such interoperability issues based on aspect-oriented service composition. We demonstrate the feasibility of our approach and of the proposed architecture by implementing a prototype for a reminder service scenario. Introduction European countries are experiencing a rapidly growing number of elderly people. According to European Union's Health portal, by 2050, "the number of people aged 65 and above is expected to grow by 70% and the number of people aged over 80 by 170%" [START_REF]Elderly[END_REF]. Consequently, healthcare systems are under pressure to address the increasing demands for elderly care. Information systems offering and integrating IT homecare-services for elderly [START_REF] Gaßner | ICT enabled independent living for elderly, A status-quo analysis on products and the research landscape in the field of Ambient Assisted Living[END_REF] are believed to have the potential of reducing healthcare costs by supporting independent living of elderly in their own home. There are already several providers of commercial services for remote monitoring services, such as bio-signals monitoring (e.g., blood-pressure, heart-rate and oximetry), and contextual information services (e.g., location and temperature). However, these services fail to deliver the expected benefits since they are to a large extent offered and used as isolated services. In addition, such services cannot cope with the diversity of needs and preferences of the user (e.g., in the case of a reminder service, one user may prefer a light signal to announce a reminder, while another prefers the vibration from a cell-phone), nor with the dynamicity of the user's context (e.g., change of the user's location or of the activity in which the user is engaged [START_REF] Dey | A conceptual framework and a toolkit for supporting the rapid protx`otyping of context-aware applications[END_REF]). Thus, an important challenge is to integrate existing homecare services through a single platform that allows user-driven service composition according to personal needs and preferences and that can automatically adapt the execution of a service composition according to the user-context at hand. 
We approach this challenge from the enterprise interoperability perspective [START_REF] Chen | Architectures for Enterprise Integration and Interoperability:Past,Present and Future[END_REF] applied to the homecare domain [START_REF] Zarghami | Service Realization and Compositions Issues in the Homecare Domain[END_REF][START_REF] Zarghami | Decision as a service: Separating decision-making from application process logic[END_REF]: homecare services from different organizations should be able to work together or side by side, flexibly coordinated through an independent platform as required by needs, preferences and context. Our focus is not on syntactic or semantic interoperability, but on process interoperability. More precisely, we target flexible process interoperability, here defined as the ability to coordinate different services in order to fulfill a complex user need in a dynamic environment. User needs are complex since they require multiple services and may be context-dependent. Contextdependency arises from the dynamic nature of the environment in which the user consumes the services. We propose an aspect-oriented service composition architecture which is based on the principles of service orientation, and is able to deal with context dynamicity and user requirements diversity [START_REF] Zarghami | Service Realization and Compositions Issues in the Homecare Domain[END_REF]. We argue that the use of aspect-orientation facilitates the maintenance and management of complex processes and business rules. Also, we adhere to the design science research methodology as proposed in [START_REF] Hevner | Design research in information systems research[END_REF][START_REF] Peffers | A Design Science Research Methodology for Information Systems Research[END_REF]. The remainder of the paper is organized as follows. In Section 2, the reminder service scenario is presented that will serve as illustration, running example and test case for the approach and prototype we further propose in the paper. Then, a review of the existing web-service composition approaches will be presented in Section 3. In Section 4, our aspect-oriented service composition architecture will be presented. The implemented prototype described in Section 5 demonstrates the feasibility of our approach. The proposed architecture has been instantiated in a prototype, which in turn has been tested in two situations arising from the chosen scenario. Finally, in Section 6, we address the advantages and weaknesses of our approach, draw some conclusions and give some pointers to future work. Scenario To capture the dynamicity and diversity of user-context in the homecare domain, we use the following "reminder service" scenario: Jan is an elderly person who lives in an apartment which equipped with the necessary infrastructure to support homecare applications. For example in the apartment two medicine dispensers are available, which are connected to the internet and can exchange information with our homecare service platform. Jan has to take some medication at 11:50 PM, on a daily basis. Among other things, the homecare system must remind him to take the medication. A reminder is sent to Jan three times up to 15 minutes later than the scheduled intake time. If the medication is not removed from any of the available medicine dispensers, an alarm will be sent to the care center. Jan also has a hearing impairment and uses a wheelchair, so the doors inside the apartment open automatically. 
He prefers to take the medication from the closest medicine dispenser (MD) at night. Two MDs, filled with the required medication, have been installed, one in the kitchen and the other one in the bedroom. The MD inside the kitchen has embedded light. The TV installed in the bedroom, the lights in the apartment and a wristwatch can all be used as reminder devices for taking medication. However, Jan prefers not to be reminded by lights after midnight. Linda, another care-receiver, prefers her PDA as reminders. Nancy, as a care-giver, wants to create the desired application for both Jan and Linda. Because she understands better than IT specialists her patients' situation and requirements, she must tailor the application for both of them. Background In this section, we give first identify the requirements imposed by the homecare application domain to the future service composition architecture, and we give an brief overview of the existing web-service composition approaches. Homecare constraints Besides the dynamicity of user-context and the diversity of users' preferences and needs, there are three major requirements imposed on applications used in the homecare domain, namely, safety, non-intrusiveness for care-receivers and limited technical skills of care-givers and users. Safety constrains require healthcare systems to be error-free [START_REF] Garde | Requirements engineering in health care: the example of chemotherapy planning in paediatric oncology[END_REF] due to ethical and legal considerations regarding the impact such systems may have on human lives. Hence, safety means that any reaction/behavior of the system is controllable. In other words, care-givers have to know exactly how the system behaves. The non-intrusiveness requirement refers to the fact that the system should have no impact on the normal life of care-receivers [START_REF] Shin | Ubiquitous House and Unconstrained Monitoring Devices for Home Healthcare System[END_REF][START_REF] Eslami | Flexible Home Care Automation[END_REF]. The limited technical skills of care-givers and care-receivers require the system to be designed such that it can be used and tailored by persons with no technical knowledge [START_REF] Garde | Requirements engineering in health care: the example of chemotherapy planning in paediatric oncology[END_REF]. Existing web-service composition approaches A web-service composition is "an aggregate of services collectively composed to automate a particular task or business process" [START_REF] Erl | SOA Design Patterns[END_REF]. According to Rao, et al. [START_REF] Rao | A Survey of Automated Web Service Composition Methods[END_REF], there are three approaches to service compositions: static workflow-based compositions, dynamic workflow-based compositions and Artificial Intelligence (AI) planning -based compositions. In the case of the static workflow-based composition approach, a predefined process model has to be specified before the actual composition of web services takes place. Thus, in this type of composition the selection and binding of web services is realized upfront [START_REF] Rao | A Survey of Automated Web Service Composition Methods[END_REF]. The dynamic workflow composition approach is based on the generation (at run time) of process models and selection of web services [START_REF] Rao | A Survey of Automated Web Service Composition Methods[END_REF]. 
Based on logical theorem provers or on AI planners, AI-planning approaches produce service compositions automatically without a predefined workflow [START_REF] Rao | A Survey of Automated Web Service Composition Methods[END_REF]. Why dynamic workflow composition and Aspect-oriented approach The specific homecare constraints influence the way in which suitable web-service compositions are determined. AI planning approaches allow the creation of the webservice composition in an automatic manner with minimum interactions with caregivers, and, thus, diminishing the impact of limited IT skills of care-givers. However, practically, it is not feasible to generate service compositions automatically in all cases with high accuracy due to the highly complex web service environment and the difficulty in capturing behavior in sufficient detail [START_REF] Rao | A Survey of Automated Web Service Composition Methods[END_REF][START_REF] Hull | E-services: a look behind the curtain[END_REF]. Therefore, from the perspective of safety criteria, AI planning approaches have serious disadvantages due to unreliability and lack of control/predictability of the system's behavior. Static workflow composition approaches serve safety criteria much better because the care-givers know exactly the behavior of system. However, the static approaches seriously constrain the adaptability and flexibility of the service compositions. As the result, the care-givers have to aid the system in dealing with new changes of carereceivers' needs. Furthermore, mostly such compositions will become intrusive, by forcing care-receivers to adapt to the system. Taking the considerations above into account we have chosen the dynamic workflow composition approach because of the following reasons: • It is based on a generated workflow. Thus the care-givers can still control the system's main activities and behavior. • It is capable to capture changes in the user-context or the user needs, and to generate accordingly different composition workflows, by adding extra services into a predefined reference workflow. Which services are inserted into the reference workflow is determined by a set of business rules. In this way, by external changes, the rules may evaluate differently, and consequently the resulting compositions are adjusted accordingly. In this way, diversity and dynamicity are also partly supported. There are many techniques that focus on the idea of dynamic workflow composition as described above. Inspired by aspect-oriented programming (AOP), Charfi and Menzini [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF] propose an approach to externalize business rules from processes by proposing an extension of BPEL (AO4BPEL). This approach requires the modification of process engines to make it possible to handle the so-called pointcuts, advices and aspects [START_REF] Charfi | Hybrid web service composition: business processes meet business rules[END_REF]. Rosenberg and Dustdar [START_REF] Rosenberg | Business Rules Integration in BPEL " A Service-Oriented Approach[END_REF] propose a Rule Interceptor Service which intercepts all incoming and outgoing Web service calls, maps them to business rules, and then applies associated business rules. A mapping document is used to map a call to business rules. Eijndhoven et al. [START_REF] Van Eijndhoven | Achieving Business Process Flexibility with Business Rules[END_REF] exploit the power of a business process engine (Aqualogic BPM Studio) and ILOG business rule engine. 
At variability points in the process, the process engine sends a request to the rule engine. Based on the input data from the request and the current context, the rule engine evaluates its business rules and the returns the result to the process engine. In [START_REF] Sapkota | A Simple Solution for Information Sharing in Hybrid Web Service Composition[END_REF], a tuple space has been proposed to provide more flexibility with respect to data flows. In this approach, the data can be added and shared by a process or rule engine on the fly. This approach, is somewhat similar to [START_REF] Van Eijndhoven | Achieving Business Process Flexibility with Business Rules[END_REF], in the sense that it also assumes that the process engine only needs to call the rule engine at some specific variability points. In [START_REF] Zarghami | Decision as a service: Separating decision-making from application process logic[END_REF], the decision-making rules have been wrapped and provided as a so-called decision service which can be called by the process engine. Moreover, the decision service can notify the processes asynchronously to update their behavior. In the homecare domain, the process specifying the behavior of the system in case of a user-context change can be very complex. For example, the process to aid carereceivers with Alzheimer disease to go out for physical exercise can be very complex and may involve many rules and activities. In addition, in order to be able to manage service compositions for care-receivers with different diseases, we need a method that can diminish the maintenance tasks. Taking this into account and the specific requirements imposed by the homecare domain, we argue that a service composition approach in should be capable 1) to execute the rules anywhere and anytime during the process instead of limiting that to some variability points and 2), to insert new services anywhere in the process (if necessary) as a reaction of system to a context change. Both requirements cannot be satisfied by approaches such as [START_REF] Van Eijndhoven | Achieving Business Process Flexibility with Business Rules[END_REF][START_REF] Sapkota | A Simple Solution for Information Sharing in Hybrid Web Service Composition[END_REF], since both the invocation of rules and the adaptation of processes is limited to variability points. Using the Aspect-Oriented approach (AOA) by Charfi and Menzini [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF] would enable us to support these capabilities. However, AOA should be applied in such a way it can be used in combination with existing implementation platforms. An aspect-oriented service composition architecture 4.1 Aspect-Oriented Approach Before presenting our architecture, we start with description of the AOA [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF] and the basic constructs used in aspect orientation. As mentioned above, in all dynamic workflow composition approaches, there are two basic elements: general workflows and business rules. More exactly, the general workflow captures the non-variable part of the service composition, and models the basic control flows, services and the dataflow. Business rules are used to represent policy-sensitive aspects of the composition, which are likely to change over time. 
Charfi and Menzini [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF] introduce the AOA in order to separate business rules from business processes by using three elements: aspects, joint points and advices. Aspect information encoded in XML files (so-called aspect files) includes a set of joint points. A joint point is a specific point, after or before one activity in the workflow. A joint point also links to advices, which are external services or processes that have to be executed when the process execution reaches that joint point. Conditional statements can be embedded in the joint point to check the inputs taken from the general workflow and decide which advices are deployed. In a similar way, aspect information may also describe the computation rules which are applied to calculate new values of certain business process variables according to user-context changes [START_REF] Charfi | Hybrid web service composition: business processes meet business rules[END_REF]. Figure 1 shows the relation between the elements mentioned above. It is worth noticing that an aspect is different from a business rule because besides containing a business rule, an aspect also specifies where the business rule is applied by its joint points list. Thus, one can separate business rules from specific processes and can increase the reusability of business rules. Figure 1: relationship between AOA elements For example, the general process shown in Figure 8 involves some mandatory services, such as a service to activate the reminder and a service to check whether the medicine is taken. One possible business rule (in the form of an aspect) may concern saving user-context after each activity of the general process. When reaching the joint points (following all activities of the general process), the process executor will invoke the respective advice and save the user-context. 4.2 The proposed architecture In this section, we propose the aspect-oriented service composition architecture that enables the interoperability and integration of homecare services. These thirdparty services providing information on bio-signals (heartbeat rate and blood pressure) and location are assumed to be available and can be exposed through their interfaces. It should be noted that we do not intend to design such services. To support aspects, joint points and advices, we introduce the following three components. First, an advice repository is introduced to store advices. An advice, as mentioned earlier, is an external service or process which is written in BPEL and can be executed by a process engine. Second, an aspect repository is used to store aspect files. Third, we develop an aspect manager with two main functions -calculating new values for variables in general processes and determining advices. When determined, the advices are then handled by the process executor. To support dynamicity and diversity, besides the components above, the system also needs the following infrastructure components. Adaptor: this component has the ability to "provide connectivity, semantic disambiguation and transition services" between our application and 3-rd party services [START_REF] Papazoglou | Service oriented architectures: approaches, technologies and research issues[END_REF]. 
Therefore, not only can it enable two-way communication between third-party services and the system, but it can also convert the different interfaces, protocols and data formats of the different parties into the standardized ones used by the system, and vice versa. Context server: the context server has four functions: listening to user-context changes coming from the adaptors, storing context information about users and devices in a database, allowing the aspect manager to query context information, and informing the aspect manager about user-context changes. Process executor: the process executor takes care of the execution of general processes and of the external services/processes, as shown in Figure 2. Service discovery manager: in case many third-party services offer the same functions, the service discovery manager assists care-receivers by searching for services, prioritizing them and selecting the most suitable ones. In the scope of this paper, we assume that the service discovery manager, the adaptors and the context server are available and ready to use. We focus solely on the process executor, the aspect repository, the advice repository and the aspect manager. The proposed architecture is shown in Figure 3. To show the feasibility of our approach, we have implemented a prototype that follows the proposed architecture. In this section we discuss the development platform we used and explain the implementation of each element, with an emphasis on the aspect manager. Finally, we use two scenarios to demonstrate how the prototype can handle user-context dynamicity and diversity of needs/preferences.
5.1 Development platform
For building the prototype we have used the Lombardi process engine [START_REF]Lombardi tasks[END_REF]: a business process manager that allows creating process models, implementing process steps, running and inspecting processes, and optimizing and installing process applications [START_REF]Lombardi tasks[END_REF]. Another feature of this engine we have used is its JavaScript API, which allowed us to invoke one process/service programmatically from another process.
Implementing the architecture's components
Below we discuss the implementation of the three elements supporting AOA. Aspect files' structure: with respect to aspects, we follow the specification of AO4BPEL proposed by Charfi and Menzini [START_REF] Charfi | Aspect-Oriented Workflow Languages: AO4BPEL and Applications[END_REF], which describes the structure of aspect files. In short, this structure starts with the name of the aspect, followed by a pointcut element containing a set of joint points. An advice is a BPEL activity. Other elements such as variables, partner links, fault handlers, and structured and basic activities in AO4BPEL are inherited from BPEL.
Figure 4: The behavior of the aspect manager
Advices: as mentioned above, advices can be defined in aspect files as BPEL activities. However, in our approach, we decided to define advices in separate files. In this way, it is possible to reuse advices in different aspects. Aspect manager: to avoid a modification of the process engine, we propose an independent aspect manager. This component is a JavaScript fragment embedded before or after a step of a general process. Because of the predefined structure and content of aspect files, the script can parse aspect files to retrieve information about joint points, conditional statements and advices. 
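To illustrate this decision logic, the sketch below shows how parsed aspect information could be matched against the current step and user context to select the advices to execute. Note that the prototype itself embeds JavaScript in the Lombardi engine and parses AO4BPEL-style XML aspect files, so the Python language and the simplified dict-based aspect format used here are assumptions for illustration only.

```python
# Hedged sketch (not the paper's code): the aspect manager's advice selection.
# A simplified dict-based aspect representation replaces the AO4BPEL XML files.

def matching_advices(aspects, current_step, position, context):
    """Return the advice names to execute at (position, current_step) for this context."""
    advices = []
    for aspect in aspects:
        if current_step not in aspect["pointcut"]:
            continue                       # joint point does not match this step
        if aspect["type"] != position:     # 'before' or 'after' the step
            continue
        if all(context.get(k) == v for k, v in aspect["conditions"].items()):
            advices.extend(aspect["advices"])
    return advices

aspects = [{
    "name": "open dispenser",
    "pointcut": ["send reminder"],
    "type": "before",
    "conditions": {"user_id": "p104jan", "location": "bedroom"},
    "advices": ["OpenDispenser"],
}]
context = {"user_id": "p104jan", "location": "bedroom"}
print(matching_advices(aspects, "send reminder", "before", context))  # ['OpenDispenser']
```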
Then, it evaluates condition statements to determine suitable advices or calculates new values for variables in the general process according to user-context changes. The behavior of the aspect manager is depicted in the diagram shown in Figure 4. Figure 5: Inserting a process in a general process Figure 5 shows the location of an aspect in a general process and explains how external processes/services (in the form of advices) can be inserted, via aspects, between the steps of this process. The arrows in Figure 5 have the following significance: 1: The aspect manager (JavaScript code) parses aspect files 2: The advice is found in advice repository 3: The process engine executes the advice After the addition of the OpenDispenser external (advice) process to the general process, the actual generated and executed process is as depicted in Figure 6. Reminder OpenDispenser Figure 6: The produced process To update the variables of the general process, a java script is also placed in a parallel process such that the update task is performed independently without influencing or being influenced by the general process. Test cases In this section, two situations related to the chosen scenario will be addressed. Each of them is associated with different types of requirements. The first one focuses on dynamicity aspects and refers to the case when the care-receiver (Jan) moves from the kitchen to the bedroom where the dispenser needs to be opened automatically (a service to open this dispenser should be available). The second case captures the diversity of preferences/needs of different care-receivers by introducing a second care-receiver. However, before discussing these situations in further detail, the general process for the reminder service scenario is presented. The general process for reminder service scenario. In the reminder service scenario, the general process that is stable for all user-context changes and user preferences is depicted in Figure 8. The process is triggered by a care giver. The first activity is an inquiry of the user-context information to initialize the variables of the process. For example, based on the care-receiver's location information, the system can calculate t1, t2 and the endpoints of the web-service to invoke (i.e., the suitable reminder device). t1 is the waiting time from the moment the process is triggered until the first reminder is sent. After that, the system waits in t2 before checking whether the medicine is taken or not. If not, the process goes back to the reminder task. This loop is executed until the number of sent reminders is equal to a predefined number, resulting in one alarm sent to care-givers. The logic for the calculation is specified in an aspect file. The reason is that this variable calculation is a crosscutting concern occurring in many places: after the first inquiry and after any information update sent by the context server. Situation 1: user-context dynamicity. In this paragraph, we describe the expected behavior of the system's in case of a context change and the results of the generated process execution. Jan moves from the kitchen to the bedroom. In the kitchen, built-in lights are used as reminders while in the sleeping room the TV is used. The medicine dispenser in the bedroom needs to be opened automatically. b) The process' variables are:t1, t2 as described above; an endpoint address pointing to a specific device (as reminder); user-context information including user' ID number, location and time. 
c) Expected system behavior: in dealing with the change, the system needs to update the endpoint address to point to the TV in the bedroom. The external service to open the dispenser has to be invoked before Jan can remove his medicine. d) Aspect configurations: there are two aspect configurations: the first for calculating new values of variables, the second for inserting a service to open the dispenser. For the sake of simplicity and to avoid the confusion that may be caused by XML tags, we simplify the two aspects' specification by using natural language, as shown in Figure 9 and in Figure 10. The first aspect, called calculating variables, states that the following action will be performed if the care-receiver is Jan and the current step is after inquire user-context information or listening in the general process. This aspect simply matches the user's location with the endpoint of the services, which invokes the corresponding device at the user's location. Hence, when Jan is in the bedroom, the television becomes the reminder. This can be considered a computation rule that does not call an advice. However, as the care-receiver changes his location to the bedroom, one advice (open dispenser) needs to be injected into the general process. This aspect (rule) is shown in Figure 10. This aspect, called open dispenser, means that for the user with ID "p104jan", if the current step is before the send reminder step, an advice to invoke open dispenser will be performed. e) Result: after applying these two aspects, the general process will change into the generated process shown in Figure 9. Situation 2: diversity of preferences/needs. To illustrate the behavior of the system in case of diversity of user preferences/needs, we use another example, in which a new care-receiver, Linda, is introduced, adding new preferences. a) Different preferences: Linda, another care-receiver, prefers her PDA over lights or any other devices as reminders. b) Aspect configuration: similar to the previous situation, by changing the conditional statements, a different preference is formed (see Figure 12). c) Result: as the ID is a part of the user-context information, composing aspects with different IDs reflects the diversity of users' preferences/needs.
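As an illustration of how such per-user aspect configurations capture diversity, the hedged sketch below differentiates Jan's and Linda's reminder devices with a simple computation rule. Jan's device endpoints follow the values shown in the paper's aspect figures, while Linda's user ID and her PDA endpoint are invented here purely for illustration.

```python
# Hedged sketch of the 'calculating variables' computation rule: per-care-receiver
# tables map the current location to the reminder device endpoint, so diversity is
# handled by swapping the table selected by the user's ID.

REMINDER_ENDPOINTS = {
    "p104jan": {   # Jan: lights in the kitchen, TV in the bedroom (from the paper's figures)
        "kitchen": ("http://130.89.227.130:9090/ws/Ucare_WS_notifyReminder/", "Lights"),
        "bedroom": ("http://130.89.227.132:9090/ws/Ucare_WS_notifyReminder/", "TV"),
    },
    "p105linda": { # Linda: always her PDA, regardless of location (ID and endpoint assumed)
        "kitchen": ("http://130.89.227.140:9090/ws/Ucare_WS_notifyReminder/", "PDA"),
        "bedroom": ("http://130.89.227.140:9090/ws/Ucare_WS_notifyReminder/", "PDA"),
    },
}

def reminder_endpoint(user_id, location):
    """Computation rule: update the process variable holding the reminder endpoint."""
    endpoint, device = REMINDER_ENDPOINTS[user_id][location]
    return {"endpoint": endpoint, "device": device}

print(reminder_endpoint("p104jan", "bedroom"))   # Jan in the bedroom -> TV
print(reminder_endpoint("p105linda", "kitchen")) # Linda -> PDA
```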
Conclusions
In this paper, motivated by two current problems of the homecare domain, namely dynamicity and diversity, we have applied an aspect-oriented approach to the design and implementation of an architecture for the dynamic workflow composition of services. With this approach, not only have we externalized business rules from business processes, but we have also ensured the flexible process interoperability of homecare services in complex processes. Moreover, this approach facilitates enterprise interoperability through the composition of services and processes, resulting in complex integrated applications. As demonstrated in the previous section, the combination of aspect orientation and SOA has many advantages. The diversity of user preferences is easily controlled by changing the aspect configuration, as discussed in the second situation described in the previous section. With regard to the dynamicity of user-context, due to the separation of business rules (in aspect files) from processes, the care-givers can easily update or add new business rules to adapt the system's behavior to different care-receiver needs or contexts. Furthermore, the idea of externalizing advices and storing them in an advice repository can increase the reusability of advices. This is particularly useful in complex processes, where finding the places to insert/remove services is time-consuming. Another advantage worth noting is that the aspect manager provides a light-weight AOA implementation solution, in the sense that it does not require a modification of the process engine to support aspects (as suggested by [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF]), as long as a JavaScript API is supported. Finally, as a part of general processes, the aspect manager can directly access their variables, minimizing the effort needed to pass data to external processes and services. Our approach also has some limitations. Some are inherited from the aspect-oriented approach of Charfi and Menzini and concern the lack of support for complex and multiple business rules [START_REF] Charfi | AO4PBEL: an aspect-oriented extention to BPEL[END_REF]. The usage of JavaScript also raises concerns about its flexibility and about its ability to handle different types of business rules. We are currently improving our approach to also support inference rules. Regarding the non-intrusiveness criterion, it should be noted that only pre-defined changes can be handled by the system. In the case of unforeseen events, the care-givers have to assist system developers in defining new business rules and incorporating them in general processes.
Figure 2: The behavior of the process executor
Figure 3: The proposed SOA-based architecture
Figure 7: Update variables
Figure 8: The general process for the reminder service scenario
Figure 9: Produced composition
Figure 9: Calculate variables for Jan (Aspect name: calculating variables; Pointcut: inquire user-context information; listening; Conditions: user's ID is "p104jan" (Jan's ID); Type: after; Variable: location; Kitchen -> http://130.89.227.130:9090/ws/Ucare_WS_notifyReminder/ (Lights); Bedroom -> http://130.89.227.132:9090/ws/Ucare_WS_notifyReminder/ (TV))
Figure 10: Open dispenser (Aspect name: open dispenser; Pointcut: send reminder; Conditions: user's ID is "p104jan"; Type: before; Conditions of advice: current location is "bedroom"; Advice: open dispenser)
Figure 12: Calculate variables for Linda
34,477
[ "1002395", "998713", "998714", "1002396" ]
[ "486463", "303060", "303060", "303060" ]
01474218
en
[ "info" ]
2024/03/04 23:41:46
2013
https://inria.hal.science/hal-01474218/file/978-3-642-36796-0_7_Chapter.pdf
Milan Zdravković email: [email protected] Miroslav Trajanović email: [email protected] On the extended clinical workflows for personalized healthcare Keywords: Ontology, Enterprise Interoperability, Supply Chain Management, SCOR There are many cases in the clinical practice where using personalized medical products could decrease the cost of treatment and risk of possible complications. However, due to the large costs and long manufacturing lead time, the medical products are customized to the individual patient's needs only in a few critical treatments. One of the main cost factors of the collaboration between the clinical centres and custom medical product suppliers is uptake of human effort in exchange of knowledge between two domains and corresponding issues. In this paper, we use the concepts of the networked enterprises to define the extended clinical workflow which spans the medical and manufacturing practice. We identify the associated systems infrastructure of this workflow and related interoperability issues. The extended workflow is demonstrated on the case study for custom orthopedic implants manufacturing. Introduction Current fragmentation of health sciences and medical care along traditional boundaries is considered [START_REF] Fenner | The EuroPhysiome, STEP and a roadmap for the virtual physiological human[END_REF] as artificial and inefficient with respect to many scientific hypotheses that establish the correspondences between the concepts from the different scientific disciplines (biology, physiology, etc.) or clinical specialties (such as cardiology, neurology, etc.). This fragmentation can be considered at modeling level, where reductionist approach (modeling on a dimensional scale, such as organ, tissue, cellular and molecular) dominates over the systemic one [START_REF] Clapworthy | The virtual physiological human: building a framework for computational biomedicine I[END_REF]. It is foreseen that a more effective approach will integrate the different relevant areas according to the focus of the particular problem, unconstrained by scientific discipline, anatomical subsystem and temporal or dimensional scale [START_REF] Welsh | Post-genomic science: cross-disciplinary and largescale collaborative research and its organizational and technological challenges for the scientific research process[END_REF]. The Virtual Physiological Human (VPH) paradigm [START_REF] Fenner | The EuroPhysiome, STEP and a roadmap for the virtual physiological human[END_REF] is intended to provide a unifying framework that enables and practically benefits from the integration of interdisciplinary data and observations about human's biology. These observations may be collected, organized and shared across the laboratories and hospitals, so that clinical and non-clinical experts can collaboratively interpret, model, validate and understand the data. Thus, this unifying framework is expected to facilitate: 1) integration of physiological processes across different length and time scales; 2) integration of descriptive data with predictive models; and 3) integration across disciplines [START_REF]Seeding the EuroPhysiome: a roadmap to the virtual physiological human[END_REF]. 
Then, this integration will eventually lead to the practical benefits of the future healthcare system, such as personalized care solutions; reduced need for experiments on animals; more holistic approach to medicine; and preventative approach to treatment of diseases [START_REF]Seeding the EuroPhysiome: a roadmap to the virtual physiological human[END_REF]. The impact of the VPH on industry will first be felt in the medical device and pharmaceutical industries [START_REF] Fenner | The EuroPhysiome, STEP and a roadmap for the virtual physiological human[END_REF]. The prediction sets an interesting assumption that the knowledge relevant to VPH will be integrated faster across the boundaries of all organizations involved in a healthcare (including hospitals, clinical centers, as well as pharmaceutical and manufacturing industries), than within the clinical centers. The prediction is argued by the global distribution of innovation interest and knowledge and developing trend in providing personalized healthcare, which is often related to customization of the medical products. As an effect of this integration, the traditional clinical workflows will be extended to involve all actors that contribute to delivery of a personalized healthcare, in systematic, efficient way. In addition, the rate of use of custom medical products, such as custom head and neck support systems [START_REF] Bentel | A customized head and neck support system[END_REF], orthopedic implants [START_REF] Zdravković | A case of using the Semantic Interoperability Framework for custom orthopedic implants manufacturing[END_REF], patient rooms [START_REF] Yassine | Investigating the role of IT in customized product design[END_REF], blood coagulants [START_REF] Hess | Damage control resuscitation: the need for specific blood products to treat the coagulopathy of trauma[END_REF] and others will increase. As a consequence, more and more supply chains, and not only pharmaceutical ones [START_REF] Puschmann | Customer relationship management in the pharmaceutical industry[END_REF] will span the clinical workflows. This effect will facilitate higher degree of customization of the medical products. It reduces the risk, efficiency and cost of treatment, due to increased similarity to the individual patient's anatomy and physiology. For example, standard bone implants are sometimes not sufficient because of abnormal joint anatomy or possible risks of postoperative complications [START_REF] Keenan | Treatment of supracondylar femoral fracture above total knee replacement by custom made hinged prosthesis[END_REF], such as aseptic loosening which occurs due to uneven stress distribution on the bone surface. This problem can be addressed by custom design process in which the design of the implant is accommodated to the specific features of the patient's anatomy. However, the traditional approach to supply chain planning cannot be applied in the scenarios of custom medical products manufacturing, due to long delivery times. Manufacturing of the custom medical products is considered as one-of-a-kind manufacturing, where the customization requirements often affect not only a principal manufacturer but also its suppliers. The manufacturing of a custom medical product could also include high-tech services by different suppliers, which are based on the models which need to be exchanged (for example, the reverse engineering of the missing part of a bone). 
Typically, some of these services precede supply chain planning phase because their results often determine the basic product's topology. Because of such a complex scenario, clinicians often choose standard products, even at the cost of sacrificing the above listed benefits of custom ones. Exactly this, not always desirable compromise was the main motivation for the research presented in this paper. The key research problem was identified as "high complexity of the supply chain planning and execution in custom medical products manufacturing". In our research, this problem is addressed by combining practices of collaborative networked organizations with clinical practices. As one of the results, an extended clinical workflow is proposed. Besides the traditional activities of the clinical practice, this extended workflow also encompasses planning, decision making, design, sourcing and manufacturing of custom medical products. It also considers systems' and knowledge infrastructures which facilitate the efficient execution of this extended workflow. In a way, the models and knowledge required to resolve interoperability issues of such a workflow can be considered as extension to VPH paradigm, because the topology and design of a custom medical product correspond to physiological and anatomical features of a patient, represented by VPH models. The remainder of this paper is structured as follows. In part 2, the traditional clinical workflow is described in context of Electronic Health Record, a paradigm which is often used to integrate patient specific information throughout the history of medical care delivery. Part 3 presents the extended clinical workflow, associated resources, namely systems infrastructure; and analysis of interoperability issues of such infrastructure. In part 4, a study is presented, on the case of manufacturing the custom orthopedic implant for diagnosis of bone cancer of tibia. Finally, in part 5, the main conclusions are drawn. Electronic Health Records and clinical workflows In practice, the clinical workflows are often defined in context of Electronic Health Record (EHR). Health Information Management Systems Society's (HIMSS) defines EHR as1 "the longitudinal electronic record of patient health information generated by one or more encounters in any care delivery setting…" Many benefits from maintaining EHR are expected, such as automation and streamlining of the clinical workflow, evidence-based decision support for diagnosis or treatment prescription (based on accurate and complete record of a clinical patient encounter), support to other care-related activities such as billing, reporting and quality management. An EHR enables the hospital administrator to extract the billing data, the physician to assess the effectiveness of treatments, a nurse to monitor treatment and reactions and a researcher to analyze the efficiency of medications. One of the main issues of EHR is the fact that it is not a record of all care provided to the patient in all facilities over time. It is generated and maintained within the single medical centre. Even so, one of the greatest challenges of maintaining EHR arises from the collaborative effort in collection and analysis of its data. Namely, medical centers can be considered as complex enterprises. They typically consist of multiple healthcare facilities, such as affiliated hospitals and clinics, diagnostic and treatment centers and laboratories. 
Managing all of these departments implies complex business processes, with which the EHR is fully associated. Clinical workflow In a way, the EHR is a patient-specific representation of a clinical workflow, combined with the information (from observations) collected in the course of this workflow. It typically connects administrative data with information from the relevant health information systems. Figure 1 illustrates a simplified representation of a clinical workflow for inpatient care. Fig. 1. Simplified representation of the clinical workflow for inpatient care Registration, admission, discharge, and transfer (RADT) data are the key components of EHRs. These data include vital information for accurate patient identification and assessment, such as name, demographics, employer information, etc. The registration portion of an EHR contains a patient identifier (the master patient index, MPI), which is identifiable only inside the organization in which the EHR is maintained. The EHR record for a specific patient is retrieved during his or her admission. Admission notes are added when inpatient care needs to be provided; they include the patient's status, the reasons why the patient is being admitted and the initial instructions for patient care. An EHR can thus be considered as patient-specific RADT data, integrated with the respective information from laboratory information systems, radiology information systems, electronic clinical documentation systems and pharmacy systems. This integration is carried out through Computerized Physician Order Entries (CPOE), which permit clinical providers to electronically order laboratory, pharmacy, radiology and other services. CPOE entries are initially entered according to the first patient observations and the treatment plan. Once the treatments are launched, namely during and after the execution of the entered orders, additional actions may become necessary, such as pre-operation planning, further tests, etc. When all treatments (in one or multiple iterations) are carried out, the patient is discharged or transferred. All treatment results and notes, including the administrative data on the discharge or transfer, are added to the patient's EHR. Extended clinical workflows and associated systems infrastructure In general, two of the most critical non-technical barriers to customization are: 1) the lack of efficiency of manufacturing enterprises in handling one-of-a-kind production tasks; and 2) the lack of efficiency in the transfer of the multi-disciplinary knowledge required for the design of a custom product. Manufacturing enterprises refine their designs for simplicity and cost; they design their workflows for volume manufacturing. Hence, by default, they are not capable of handling one-of-a-kind manufacturing tasks efficiently. One-of-a-kind manufacturing is considered a case for Virtual Enterprises. A Virtual Enterprise (VE) is a temporary network of independent organizations, which join together quickly to exploit fast-changing opportunities and then dissolve [START_REF] Browne | Extended and virtual enterprises -similarities and differences[END_REF]. It is characterized by the short-lived appearance of a supply chain, capable of producing a low volume of a high variety of products, drawing on a loosely coupled, heterogeneous environment of available competences, capabilities and resources.
This environment is sometimes called a Virtual Breeding Environment (VBE), defined as a pool of organizations and related supporting institutions that have both the potential and the will to cooperate with each other, through the establishment of a long-term cooperation agreement and an interoperable infrastructure [START_REF] Sánchez | Virtual Breeding Environment: A First Approach to Understanding Working and Sharing Principles[END_REF]. In our research, the VBE and VE paradigms are used to propose the interoperable infrastructure which will support the extended clinical workflows for custom medical products. Fig. 2. Simplified representation of the extended clinical workflow In traditional settings, the workflow for the manufacturing of custom medical products includes many human analyses and decisions, such as the interpretation and analysis of CT scans and laboratory results, mechanical analysis, and the collection of inputs and approvals. The lack of efficiency in adapting their traditional workflows to these activities becomes even more critical when enterprises are required to subcontract different parts or services suppliers. All this human involvement entails a number of interactions between different experts in which functional (medical), organizational and other perspectives on the custom manufacturing need to be considered. Hence, efficient design elaboration and mutual understanding of the complex variety of issues require the involvement of experts with multi-disciplinary skills and knowledge. In order to overcome the barriers above, the extended clinical workflow and its associated systems infrastructure are proposed. Figure 2 illustrates a simplified representation of the extended clinical workflow with the associated systems foreseen as facilitators of this workflow. Traditional clinical workflows (see Fig. 1) are based on order-delivery service sequences and/or cycles, where these services are related to specialized observations and/or treatments. In the extended clinical workflow, the manufacturing of a custom medical product (with all associated services) is considered a single service which can be ordered by using a CPOE entry in the Clinical Information System. For the fulfillment of this entry, six key activities are required: 1) procedure/treatment planning; 2) custom product design; 3) source product; 4) manufacture product; 5) product delivery; and 6) product installment. While procedure/treatment planning and installment are fully integrated in the traditional clinical workflow, the other activities are carried out in the shared environment of the VBE, which acts as the main supplier of the clinical center for custom medical products of a certain type. Each of the activities of the extended workflow should be facilitated by a specific (hypothetical) system, as illustrated in Figure 2. On the systems infrastructure for extended clinical workflows The design of a custom medical product is never considered in isolation from the procedure of its installment or from a treatment method, as it must take into account the constraints and requirements of the specific intervention (e.g. surgery). Typically, procedure/treatment planning is not facilitated by an information system or a tool. The decisions made in this phase are used to select from a range of standard medical products. In most cases, the problem of selecting a standard product is trivial.
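As a minimal illustration of the six key activities listed above, the following sketch walks a CPOE-triggered order through the extended workflow in sequence. The system names are taken from the text, but the data structures, identifiers and the simple linear execution policy are illustrative assumptions only, not part of the cited infrastructure.

```python
from dataclasses import dataclass, field

# The six key activities of the extended clinical workflow and the
# (hypothetical) systems assumed in the text to facilitate each of them.
WORKFLOW = [
    ("procedure/treatment planning", "PPS"),
    ("custom product design",        "PDS"),
    ("source product",               "VBERPS"),
    ("manufacture product",          "VERPS"),
    ("product delivery",             "VERPS"),
    ("product installment",          "Clinical Information System"),
]

@dataclass
class CustomProductOrder:
    cpoe_id: str                     # CPOE entry that triggered the workflow
    product_type: str
    completed: list = field(default_factory=list)

def execute(order: CustomProductOrder) -> None:
    """Walk the order through the extended clinical workflow in sequence."""
    for activity, system in WORKFLOW:
        # In a real setting each step would call the facilitating system;
        # here we only record that the activity was carried out.
        order.completed.append((activity, system))
        print(f"{order.cpoe_id}: '{activity}' carried out via {system}")

if __name__ == "__main__":
    execute(CustomProductOrder("CPOE-0042", "custom orthopedic implant"))
```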
For custom medical products, however, the Procedure/Treatment Planning System (PPS) is considered essential, because its output is later used by the product design system to define the main features (mechanical, geometrical, chemical, etc.) and the topology of the custom product. Namely, in the great majority of cases, there are strong correspondences between these features and the steps, micro-steps and assets used in the installment of the custom medical product or in the treatment process. Hence, the PPS is intended to be used for developing and generating a kind of process model, whose significant features will then be mapped to the features of the conceptual model of the custom medical product. The product model is considered conceptual because it includes only the features which are necessary and sufficient for the establishment of the above-mentioned correspondences (both with the PPS and with the VBERPS); it is designed by using the Conceptual Product Design System (PDS). The VBE Resource Planning System (VBERPS) is foreseen to be used in the sourcing step of the extended clinical workflow, where the Virtual Enterprise (VE) is formed from the VBE according to the features of the conceptual product model. The VBERPS is expected to have access to the information relevant for determining the capacity and availability of each enterprise of the VBE to carry out a specific role, according to the conceptual product model (including its Bill of Material) and the associated requirements defined by the features of the respective parts. Finally, the lifecycle of this VE is managed by using the VE Resource Planning System (VERPS), which is typically the ERP system of the VE's focal partner. Systems interoperability in extended clinical workflows Interoperability is one of the main enablers of the extended clinical workflow, because it facilitates flexible collaboration; it reduces the time needed for the setup and discontinuation of the VE. Given the high requirements for the workflow's efficiency, it is of utmost importance to remove as many preconditions for the collaboration as possible, as well as any requirements for previous agreements on the exchange of relevant information between its actors. Exactly these preconditions and requirements are considered to be among the most difficult challenges in implementing the extended clinical workflow. The systems interoperability issues of the extended clinical workflow can easily be identified at the intersections of the systems' scopes in the proposed infrastructure (see Fig. 2). They are related to the interoperations and data exchanges illustrated in Figure 3. Fig. 3. Systems interoperations and models infrastructure in the extended clinical workflow Since the capacity to interoperate is a unidirectional capability of systems, a twofold consideration of each interoperation is assumed. Namely, in every interoperation between two systems, each of the two systems must exhibit the non-interrelated, independent capabilities to send and to receive (and interpret) the exchanged messages or invocation requests. The minimum requirement is considered to be the use of pre-determined or pre-selected dictionaries, vocabularies or even formal models (e.g. ontologies) in formulating these messages and requests, so that they can be correctly interpreted. Thus, a Model-Driven Architecture for the resolution of the interoperability requirements is foreseen. The conceptual view of the involved models and the dependencies between them is illustrated in Figure 3.
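The minimum interoperation requirement described above, namely that messages are formulated against pre-selected dictionaries or vocabularies so that the receiving side can interpret them, can be sketched as follows. The vocabulary terms and message fields are invented for illustration; a real deployment would rely on controlled terminologies or ontologies rather than a Python set.

```python
# A tiny shared vocabulary that both sender and receiver have agreed upon
# in advance (in practice a controlled terminology or ontology would be used).
SHARED_VOCABULARY = {"part_id", "material", "geometry_ref", "due_date"}

def send(message: dict) -> dict:
    """Sender-side capability: only emit fields the shared vocabulary defines."""
    unknown = set(message) - SHARED_VOCABULARY
    if unknown:
        raise ValueError(f"message uses terms outside the shared vocabulary: {unknown}")
    return message

def receive(message: dict) -> dict:
    """Receiver-side capability: interpret only terms it knows; reject the rest."""
    if not set(message) <= SHARED_VOCABULARY:
        raise ValueError("cannot interpret message: unknown terms")
    return {term: message[term] for term in message}

if __name__ == "__main__":
    msg = send({"part_id": "fixture-01", "material": "Ti-6Al-4V", "due_date": "2012-06-01"})
    print(receive(msg))
```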
The most difficult interoperability challenge of the extended clinical workflow is related to the establishment of correspondences between two quite different domains: manufacturing and clinical practice. While the manufacturing domain knowledge is embedded in the VBE and supply chain models and, partially, in the product model, the clinical practice is formalized by the EHR and procedure models. Today's EHR records often suffer from vendor-specific realizations of patient record data sets, which rarely accommodate controlled terminologies [START_REF] Harris | From Clinical Records to Regulatory Reporting: Formal Terminologies as Foundation[END_REF]. However, the inefficiency of clinical workflows which extend beyond the boundaries of a single medical centre is establishing EHR interoperability as one of the main requirements for health information systems. The issues of EHR interoperability are addressed by combining standards for clinical vocabularies and healthcare message formats with EHR ontologies (i.e., the content and structure of the data entities, from both vocabularies and messages, in relation to each other). The procedure model can be considered a process model, as it is intended to formalize a set of actions in linear or more complex flows, which could also include the equivalents of the error handler and compensation blocks known from workflow management. Case study -Extended workflows for custom orthopedic implants manufacturing Research on custom orthopedic implants manufacturing typically focuses on direct fabrication technologies [START_REF] Gibson | Direct Fabrication of Custom Orthopedic Implants Using Electron Beam Melting Technology[END_REF]. Namely, direct manufacturing of high-strength materials provides far greater efficiency in one-of-a-kind runs for producing a finished custom implant than conventional manufacturing technologies. Depending on the nature of the bone trauma, the custom orthopedic implant can be assembled from different types and designs of fixtures and scaffolds. In addition, various services may be associated with the product manufacturing and/or implementation, such as pre-operation planning, reposition simulation, digital reconstruction, remodeling, analysis of the biomechanical properties of the implant, sterilization, ethical review, product certification and others. For example, in the case of bone cancer of the tibia (the larger of the two bones in the leg, below the knee), the missing part of the bone is replaced with a scaffold, which is reinforced with an inner fixture. The scaffold is designed on the basis of the bone geometry, which is digitally reconstructed from CT scans. The geometry and topology of the inner fixture are designed on the basis of the diagnosis and the pre-operation plan developed by the surgeon. The process of manufacturing the custom part is also associated with a review of the design by the clinic's ethical committee and with an analysis of biomechanical properties. Obviously, in the above scenario, the efficiency brought by the use of additive manufacturing is only the tip of the iceberg. It needs to be complemented by the effectiveness of an appropriate collaboration infrastructure which facilitates all planning, sourcing, manufacturing and delivery aspects. In our case, we propose to extend the clinical workflow for the treatment of tibia bone cancer with the manufacturing of the custom implant parts and the provision of the associated services.
This is carried out within the VBE, which consists of enterprises that are capable, certified and competent to deliver a manufactured product and/or to provide the associated services. The VBE is organized as a cluster and is technically coordinated by a brokering enterprise (broker). Each case of supply of the product and associated services is considered a case of a VE. In this case, the systems and models infrastructure proposed in Section 3 is instantiated, as illustrated in Figure 4. The proposed infrastructure is implemented by using semantic applications and information systems which exploit a framework of inter-related ontologies (models) that consist of the different domain (system) concepts and the logical relationships between them [START_REF] Zdravković | A case of using the Semantic Interoperability Framework for custom orthopedic implants manufacturing[END_REF]. The ontological framework corresponds to the assumed models infrastructure for extended clinical workflows, and it is managed by the instances of the relevant assumed systems. The clinical information system (CI-Sys) is used to create the order (O) for the custom implant manufacturing and to trigger the execution of the next (N) order of the extended clinical workflow upon the installation of the implant. The system for pre-operation planning (Pre-OP-Sys) is used to plan (P) this installation (I). Pre-operation planning is based on the location and the arrangement of anatomical structure parts within the human body, expressed in a quantitative or qualitative way (by using spatial orderings such as superior, anterior, lateral, etc.). This arrangement can be formalized by an appropriate anatomical ontology [START_REF]Anatomy Ontologies for Bioinformatics[END_REF]. When the operation is planned, the relevant spatial features are used to determine the features of the micro-steps which are carried out during the surgery, such as bone screw entry angles, fixture-bone assembly contact locations, etc. Hence, relevant properties of the spatial relations can be exploited for automated reasoning [START_REF] Schulz | Parts, Locations, and Holes -Formal Reasoning about Anatomical Structures[END_REF], which assists the pre-operation planning process. In order to make this possible, Pre-OP-Sys needs to be capable of inferring the spatial relations and the corresponding micro-step features by exploiting previously established logical correspondences between the anatomical ontology and the pre-operation process ontology (model). The above-mentioned spatial relations are also relevant for the custom implant design (D), which is facilitated by the Impl-D-Sys system. These relations provide formal definitions of the geometry restrictions which are typically considered when the design of the orthopedic implant is determined. For example, the angle between the distal and proximal parts of the inner fixture depends on the specific arrangement of bones and joints. Impl-D-Sys is a semantic application which formalizes the parthood relationships of the product (its Bill of Material, BOM) and the features of the respective parts and subassemblies. The BOM also includes the relevant services. Based on the product's topology and the manufacturing or delivery strategy of each product part (including the services), a sourcing (S) strategy, namely the supply chain configuration, is generated by the SC-CONF-Sys application.
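The role of the BOM, which lists services alongside parts and carries a manufacturing or delivery strategy for each item, can be illustrated with the following sketch. The part names, service names and strategies are assumptions made for the example and are not taken from the cited case.

```python
from dataclasses import dataclass

@dataclass
class BomItem:
    name: str
    kind: str        # "part" or "service" -- the BOM also includes services
    strategy: str    # make-to-stock, make-to-order or engineer-to-order

# Hypothetical BOM of a custom implant: parts plus associated services.
BOM = [
    BomItem("scaffold",                    "part",    "engineer-to-order"),
    BomItem("inner fixture",               "part",    "engineer-to-order"),
    BomItem("standard bone screws",        "part",    "make-to-stock"),
    BomItem("digital bone reconstruction", "service", "engineer-to-order"),
    BomItem("biomechanical analysis",      "service", "engineer-to-order"),
]

def sourcing_requirements(bom):
    """Group BOM items by strategy: engineer-to-order items need a supplier
    able to work from the exchanged models, while make-to-stock items only
    need availability at the calculated time."""
    groups = {}
    for item in bom:
        groups.setdefault(item.strategy, []).append(item.name)
    return groups

if __name__ == "__main__":
    for strategy, items in sourcing_requirements(BOM).items():
        print(f"{strategy}: {items}")
```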
SC-CONF-Sys is based on the SCOR reference model for supply chain operations [START_REF] Stewart | Supply-chain operations reference model (SCOR): the first cross-industry framework for integrated supply-chain management[END_REF], a standard approach for the analysis, design and implementation of core processes in supply chains. SC-CONF-Sys is a semantic application which uses SCOR ontologies at two levels of conceptualization. While the implicit SCOR ontology is used to enable the interoperation of SC-CONF-Sys with proprietary SCOR tools, the explicit SCOR ontology is an expressive domain ontology which defines the meanings of the implicit SCOR entities and thus facilitates the interoperation of SC-CONF-Sys with other enterprise applications [START_REF] Zdravković | An approach for formalising the supply chain operations[END_REF]. The supply chain configuration is based on common rules related to the ordering of SCOR source, make and deliver processes in the different manufacturing strategies (make-to-stock, make-to-order and engineer-to-order) and on the capacity of a supplier to deliver the desired part. At this moment, the capacity is evaluated only by checking the part production schedules of the suppliers through semantic queries to the local ontologies of their information systems. Exactly this last feature of the SC-CONF-Sys application demonstrates how the planning processes of custom orthopedic implants manufacturing could benefit from the semantic interoperability of the systems. Namely, during the process configuration, the local ontologies (representing the Enterprise Information Systems) of all registered manufacturing enterprises of the VBE are queried by SC-CONF-Sys for the production schedules of a given part. Then, based on the part availability at the calculated time, the selected enterprises are automatically assigned to specific process categories. Besides the selection process, which is carried out on the basis of the above criteria, the corresponding semantic relationships between the SCOR ontologies and the local ontologies of the EISs of the VBE partners can also facilitate the planning of sourcing, manufacturing and delivery of custom product parts at all levels of the BOM, as early as in the supply chain process configuration phase. Conclusions The extended clinical workflow aims at complementing clinical practice with functions which are typically considered external to conventional clinical workflows. These functions extend the scope of clinical practice; they are procedure/treatment planning (in the context of custom product implementation), conceptual custom product design, sourcing and implementation. In a broader sense, even manufacturing and delivery can be considered within this extended scope. The main objective is to facilitate the efficient application of custom medical products in daily practice. The interoperability challenges implied by the need to resolve many cross-domain issues are addressed by the high-level systems and models infrastructure. This infrastructure is expected to enable the execution of processes that span the boundaries of the clinical centre and of the enterprises from the VBE. The above assumptions are, to a certain extent, validated in a case of manufacturing of custom orthopedic implants. The presented case confirms the hypothetical systems and models infrastructure and instantiates it by realizing the assumed functionality and purpose.
It is expected that the proposed infrastructure could reduce the lifecycle of the VE for custom orthopedic implant manufacturing to 4-8 days. This is considered an acceptable period for many cases of trauma, especially bearing in mind that the delivery lead time for custom orthopedic implants, even when they are manufactured by using additive technologies, can reach up to 3 months [START_REF] Christensen | Personalizing Orthopedic Implants[END_REF]. The estimation of the saved time is based on the fact that the integrated infrastructure practically automates the process configuration phase of the VE lifecycle and the exchange of information between the relevant systems, by removing the need for complex technical preconditions for this exchange to occur and by minimizing the human effort in the relevant knowledge and information exchanges. Thus, it significantly reduces the time typically needed for supply chain planning. Fig. 4. Systems and models infrastructure for custom orthopedic implants manufacturing
31,742
[ "1002397", "1002398" ]
[ "480959", "480959" ]
00147431
en
[ "spi" ]
2024/03/04 23:41:46
2007
https://hal.science/hal-00147431/file/CEP-special_issue.pdf
Gérard Morel email: [email protected] Paul Valckenaers email: [email protected] Jean-Marc Faure email: [email protected] Carlos E Pereira email: [email protected] Christian Diedrich email: [email protected] Manufacturing plant control challenges and issues Keywords: Manufacturing plant control, networked automation, intelligent manufacturing systems, dependable manufacturing systems, education Enterprise control system integration between business systems, manufacturing execution systems and shop-floor process-control systems remains a key issue for facilitating the deployment of plant-wide information control systems for practical e-business-to-manufacturing industry-led issues. Achievement of the integration-in-manufacturing paradigm based on centralized/distributed hardware/software automation architectures is evolving using the intelligence-in-manufacturing paradigm addressed by IMS industry-led R&D initiatives. The remaining goal is to define and experiment with the next generation of manufacturing systems, which should be able to cope with the high degree of complexity required to implement agility, flexibility and reactivity in customized manufacturing. This introductory paper summarizes some key problems, trends and accomplishments in manufacturing plant control before emphasizing for practical purposes some rationales and forecasts in deploying automation over networks, holonic manufacturing execution systems and their related agent-based technology, and applying formal methods to ensure dependable control of these manufacturing systems. 1 Manufacturing Plant Automation Context Manufacturing enterprises are intensively deploying a host of hardware and software automation and information technologies to meet the changing societal environment required by the increasing customization of both goods and services desired by customers. Legacy models and standards enable manufacturing enterprise control system integration and interoperability (Table 1) from the business level to the process level, to meet industry-led Business-to-Manufacturing (B2M) issues [START_REF] Pétin | Formal specification method for systems automation[END_REF]. The resulting automation model (Fig. 1) is a wide network of automata that is challenging researchers and developers to achieve synchronic (in time) integration of shop-floor process controls in the large (robotics, assembly, machining, …) into plant-wide information control systems and diachronic (through time) integration of product life cycles over the manufacturing chain, as addressed by the overall Integration in Manufacturing (IiM) paradigm (Banaszak and Zaremba, 2003). The pivotal technology will require a form of technical intelligence that goes beyond simple data, through information to knowledge. This will be embedded in manufacturing system components and within the products themselves, and will make it possible to meet agility in manufacturing over flexibility and reactivity, as addressed by the shifting Intelligence in Manufacturing (IIM) paradigm. This complexity of efficiently deploying interoperability and autonomy for manufacturing plant control and production management issues is challenging the industry-led international Intelligent Manufacturing Systems (IMS) initiative (www.ims.org) that will define, develop and deploy the next generation of open, modular, reconfigurable, maintainable, and dependable manufacturing systems.
The IFAC Coordinating Committee on Manufacturing and Logistics Systems [START_REF] Ollero | From MEMS to enterprise systems: milestone report of the manufacturing and instrumentation coordinating committee[END_REF][START_REF] Nof | From plant and logistics control to multi-enterprise collaboration[END_REF] and the IFAC Technical Committee on Manufacturing Plant Control contribute to promoting the related scientific challenges of intelligent manufacturing systems (Monostori et al., 2003; Morel and Grabot, 2003), intelligent assembly and disassembly [START_REF] Borangiu | Intelligent assembly and disassembly[END_REF] and of information control problems in manufacturing (Kopacek et al., 2005). This special issue deals with some current key problems and applications, recent major accomplishments and trends, and the main research-development forecasts related to information control in the field of networked manufacturing automation (Section 2), IMS modeling and experiments (Section 3), dependable control of discrete systems (Section 4), and education and training (Section 5). The conclusion (Section 6) of this introductory paper addresses some rationale issues among the many that are complementary to those of this special issue and that should be debated. 2 Networked Manufacturing Automation There is an increasing deployment of web technology to monitor the ubiquitous coherence between the physical flows of goods and the related information flows of services throughout product life cycles in production and logistics networks. These networking issues involve the two-dimensional integration of automation [START_REF] Galara | Process control engineering trends[END_REF] for both vertical (synchronic) integration through the IEC/ISO 62264 standard for B2M applications and the IEC 61499 standard for SFC applications as well as horizontal (diachronic) integration through e-manufacturing de facto standards for SCM and CRM applications. Among these interoperability issues between e-manufacturing applications, Neumann addresses in this special issue what is going on in communication in industrial automation to control the communication problems that arise from the increasing impact of the worldwide distribution of the Internet on the manufacturing automation domain. 2.1 Current key problems Embedding distributed technical intelligence (data and information processing, storage and communication) into field automation has been studied extensively to enable actuation and measurement system interoperability as well as to ensure control, maintenance and technical management system integration [START_REF] Iung | Engineering process of integrated-distributed shop floor architecture based on interoperable field components[END_REF]. Among many rationales to assess and predict the performance degradation [START_REF] Leger | Integration of maintenance in the enterprise: towards an enterprise modeling-based framework compliant with proactive maintenance strategy[END_REF] of a process, a machine or a service, on-site and remote infotronics components can be merged in a closed device-to-business loop to move from traditional 'fail and fix' to 'predict and prevent' practices [START_REF] Erbe | Infotronics technologies for e-maintenance regarding the cost aspects[END_REF].
Embedded accurate algorithms improve the precision of customized information and enable the prognosis of when the performance is becoming unacceptable, the diagnosis of why the performance is degrading and the decision as to what maintenance action to perform, as well as performance benchmarking against similar operating Watchdog Agents™ (Lee, in Kopacek et al., 2005). Another major technological challenge in the development of distributed embedded systems is to guarantee both the reliability and the temporal predictability of the underlying software and hardware infrastructures, which must be flexible enough to accommodate the requirements imposed by new applications and services. Vertical communication at the control level and horizontal communication between elements in the factory hierarchy must also be managed. Finally, the efficient use of these promising mechatronics, infotronics and communication technologies is highly dependent on dealing with the complexity of intelligently combining a host of existing techniques for global rather than local performance. These engineering issues require field device metamodels to integrate the devices in the entire engineering life cycle of the automation system [START_REF] Diedrich Ch | Three-component model for field device integration in control systems[END_REF]. The de facto industrial Unified Modeling Language (UML) is the candidate for designing distributed automation architectures in a collaborative and multidisciplinary way, and there are several so-called UML profiles being promoted. Special profiles for real time, safety and dependability must be evaluated carefully, such as the profile for schedulability, performance, and time specification (http://www.omg.org/docs/formal/03-09-01.pdf; see also http://neptune.irit.fr/Biblio/02-01-02.pdf). A major industrial communication challenge of the related multilevel communication architectures is to unify plant networking with Ethernet. The resulting automation challenge is to guarantee the same deterministic features as those of the more specific fieldbuses currently applied in shop floor manufacturing. That opens a new field of applications for intelligent control techniques to model, evaluate and optimize the communication system behavior within distributed automation architectures. For example, applying fault detection and isolation/fault tolerant control (FDI/FTC) techniques to networked control systems (Fig. 2) should improve safe control and monitoring of such complex automation systems as well as their global reliability, dependability and availability, by dynamically accommodating network performance, reconfiguring network components and adapting the application to the delivered quality of service [START_REF] Georges | A design process of switched ethernet architectures according to real-time application constraints[END_REF]. The huge investment in Ethernet-based industrial communications by the main industrial players (e.g., PROFInet by Siemens, Industrial IP by Rockwell, and Modbus IP by Schneider) is a challenge for researchers, because large distributed systems with new characteristics and new opportunities are being built. These systems must be configured, parameterized, operated and maintained with real-time, safety and security constraints. 2.3 Forecasts In the future, specific industrial communication techniques and other commercial communication systems, such as telecommunication for maintenance and remote access or private networks, can become components of these systems.
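As a rough, hedged illustration of "adapting the application to the delivered quality of service", the sketch below switches a networked control loop into a conservative degraded mode whenever the measured network delay exceeds an assumed budget. The thresholds, the toy plant model and the mode names are invented for illustration and do not reproduce the cited FDI/FTC design process.

```python
import random

DELAY_BUDGET_MS = 20.0   # assumed deadline the control law can tolerate

def measure_network_delay_ms() -> float:
    """Stand-in for a real round-trip measurement over the plant network."""
    return random.uniform(2.0, 40.0)

def control_step(setpoint: float, measurement: float, delay_ms: float) -> tuple:
    """Nominal proportional action if QoS is sufficient, otherwise a
    conservative 'hold' action as a simple fault-accommodation policy."""
    if delay_ms > DELAY_BUDGET_MS:
        return "degraded", 0.0                        # hold output, raise alarm
    return "nominal", 0.5 * (setpoint - measurement)  # simple P-control

if __name__ == "__main__":
    measurement = 0.0
    for k in range(5):
        delay = measure_network_delay_ms()
        mode, u = control_step(setpoint=1.0, measurement=measurement, delay_ms=delay)
        measurement += 0.1 * u                        # toy first-order plant update
        print(f"k={k} delay={delay:5.1f} ms mode={mode:8s} u={u:+.3f}")
```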
Systems that cross intranet borders or wide area networks are virtual automation networks with new quality-of-service challenges and new management tasks. In detailing real-time constraints, safety and security are the main requirements for the new architectures; technology combinations will require the joint research efforts of the automation and communication communities to prevent networks becoming the Achilles' heel of embedded and distributed manufacturing automation. These networked automation issues are challenging the traditional centralized-architecture, hierarchical-model control approaches (Table 2, levels 2 and 3) to meet interoperability and agility in manufacturing (Table 2, levels 4 and 5). 3 IMS Modeling and Experiments Intelligence in manufacturing is perceived in various ways, ranging from intelligent control and information communication techniques, through human intelligence in the operating/engineering loop [START_REF] Lhote | The extension of principles of cybernetics towards engineering and manufacturing[END_REF], to agents' self-organization. The area of intelligent systems is challenging, to an extent occasionally verging on controversy, both the research community and the industrial sector to go beyond the traditional and centralized automation approaches in order to meet the high degree of complexity and the practical requirements for robustness, generality and reconfigurability in manufacturing control as well as in production management, planning and scheduling. Among many rationales, trends and experiments, a general consensus exists that holonic manufacturing systems (HMS) should be the unifying technology as well as the product-process engineering (PPE) approach for all product-driven control and management issues required by the customized manufacturing era (Muhl et al., in Morel and Grabot, 2003; Cheng et al., 2004). 3.1 Current key problems Today, the key problem is the lack of tools and/or platforms to test and validate IMS developments on realistic problems, in terms of both the size of the manufacturing system itself and the thoroughness of the evaluation techniques. Concerning advanced manufacturing control, conceptual designs exist that address the major research issues, at least in principle, for instance Valckenaers (in Morel and Grabot, 2003). The complexity of these system designs makes formal proof of their performance and capabilities infeasible and definitely impractical. Therefore, an environment is required in which the research community can provide and retrieve (emulated) test cases of realistic size and complexity; in other words, research developments must be tested in real-world factories (in emulation) in place of the toy test cases and token evaluation campaigns that remain the norm. Moreover, the evaluation campaign must answer industrial requirements, which typically implies that test runs must cover several months of production. Evidently, IMS systems must be properly designed to allow drawing hard conclusions from test runs; for instance, a manufacturing control system design must randomize parameters and decisions as a default. The IMS network of excellence (www.ims-noe.org) has started to make such an environment available for advanced manufacturing control and supply network coordination. Valckenaers et al., in (Panetto et al. 2006), describe the development status and roadmap of this research effort, which will equip the IMS community with a benchmarking service.
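The benchmarking principle discussed above, in particular the recommendation that a manufacturing control system design should randomize parameters and decisions by default, can be illustrated with a toy harness. The workload, the dispatching rules and the single-machine emulation below are drastically simplified assumptions; a realistic campaign would run emulated real-world factories over months of simulated production.

```python
import random
import statistics

def run_replication(rule, jobs, seed):
    """Toy single-machine emulation: 'jobs' is a list of processing times.
    The rule orders the queue; ties are broken randomly by default."""
    rng = random.Random(seed)
    queue = sorted(jobs, key=lambda p: (rule(p), rng.random()))
    clock, flow_times = 0.0, []
    for p in queue:
        clock += p
        flow_times.append(clock)          # completion time = flow time here
    return statistics.mean(flow_times)

RULES = {
    "SPT":    lambda p: p,   # shortest processing time first
    "RANDOM": lambda p: 0,   # all keys equal, so the random tie-break decides
}

if __name__ == "__main__":
    rng = random.Random(1)
    jobs = [rng.uniform(1, 10) for _ in range(50)]
    for name, rule in RULES.items():
        results = [run_replication(rule, jobs, seed) for seed in range(20)]
        print(f"{name}: mean flow time {statistics.mean(results):.2f} "
              f"(std {statistics.stdev(results):.2f} over 20 replications)")
```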
Such testing and evaluation platforms will enable researchers to generate solid proofs of concept for their research results with normal levels of development efforts and resources. Secondly, there is a need for better and deeper understanding of scalability and robustness, typically only achievable through designs that use emergence and self-organization. These designs give up the ability to prescribe explicitly how the system will behave in return for a significant increase in operating range. The analogy in human organizations is to replace explicitly prescribed procedures (cookbook rules) by empowerment of the people performing the work. It is well known that empowerment produces superior results, given adequately skilled personnel. This shift toward empowered elements in an IMS system requires further research for deeper insight on how this shift can be executed and what benefits can be expected. In other words, better understanding of the concepts of emergence and self-organization is necessary, especially in the design of such systems (synthesis of IMS artifacts). Finally, research must address information handling in sophisticated IMS designs, with traceability as a primary concern. Manufacturing control systems already provide the potential to address this issue, but it should be brought to the surface, and the need for additional support that transpires must be answered. 3.2 Recent major accomplishments and trends Recently, research on applying multiagent systems in manufacturing has produced many valuable results (Muhl et al., in Morel and Grabot, 2003). However, various obstacles for deployment in industry remain. Marik and Lazansky discuss in this special issue the industrial applications of agent technology, and they emphasize that only very few real-life industrial experiments are in use, despite laboratory experiments on the promising MAS and HMS approaches. Often, these obstacles require multidisciplinary solutions, in which, for instance, the manufacturing system design and the manufacturing control both are conceived to offer flexibility, robustness, scalability and cost effectiveness. Likewise, advanced designs for multiagent manufacturing control have emerged, promising to address many issues. However, a definitive proof of concept requires the developments described above. Initial steps to provide such missing links have been taken already, and key elements of the solution already exist (e.g., suitable emulation technology). In advance of the availability of such a benchmarking service, Mönch addresses in this special issue a simulation-based benchmarking of production control schemes for complex manufacturing systems to deal with more specialized but more detailed models for practical purposes. These advanced designs give up functional decomposition in favor of an object-oriented design approach in which a reflection of the world of interest in the software of the control system plays a prominent early role, much like maps are key elements in solving navigation problems. The PROSA architecture is an illustration of this trend [START_REF] Van Brussel | Reference architecture for holonic manufacturing systems: PROSA[END_REF]. The object-oriented approach is extended in a multiagent approach (active objects reflect active entities in the manufacturing system) and by novel coordination mechanisms inspired by insect societies. Through an emergent and self-organizing design, such systems promise robustness and scalability.
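A very small sketch of this design style is given below: order holons choose among resource holons by following digital "pheromone" values that evaporate over time, a coordination mechanism loosely inspired by insect societies. The holon classes, the reinforcement value and the evaporation rate are illustrative assumptions and do not reproduce the PROSA reference architecture itself.

```python
import random

class ResourceHolon:
    """Reflects a machine; keeps a pheromone value deposited by earlier orders."""
    def __init__(self, name):
        self.name, self.pheromone = name, 1.0
    def evaporate(self, rate=0.1):
        self.pheromone *= (1.0 - rate)

class OrderHolon:
    """Reflects a customer order; chooses a resource with probability
    proportional to pheromone, then reinforces the chosen trail."""
    def __init__(self, order_id):
        self.order_id = order_id
    def choose(self, resources, rng):
        total = sum(r.pheromone for r in resources)
        pick = rng.uniform(0, total)
        for r in resources:
            pick -= r.pheromone
            if pick <= 0:
                r.pheromone += 0.5     # reinforcement after a good experience
                return r
        return resources[-1]

if __name__ == "__main__":
    rng = random.Random(0)
    resources = [ResourceHolon("M1"), ResourceHolon("M2"), ResourceHolon("M3")]
    for i in range(10):
        chosen = OrderHolon(f"O{i}").choose(resources, rng)
        for r in resources:
            r.evaporate()
        print(f"O{i} -> {chosen.name}",
              {r.name: round(r.pheromone, 2) for r in resources})
```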
In contrast to older research based on market mechanisms, it is not necessary to reduce the dimensions of the information in the system, and many tuning problems are avoided. The novel designs postpone the introduction of the decision-making software components until the end. Therefore, the reusability and operating range of the system increase significantly. St Germain addresses in this special issue an engineering perspective on the supply network control problem by extending the HMS paradigm for inter- and intra-enterprise logistics issues. 3.3 Forecasts Significant development can be expected in the foreseeable future in the domain of the e-manufacturing execution system [START_REF] Morel | Manufacturing enterprise control and management system engineering: rationales and open issues[END_REF]. Some promising studies are addressing the interest in formal techniques for e-MES issues [START_REF] Qiu | A formal model for incorporating shop floor controls into plant information systems[END_REF], to incorporate shop floor controls formally into plant-wide information control systems for enabling 'on the fly' rescheduling of product routes as well as manufacturing process reconfigurability [START_REF] Tang | Integrated design approach for virtual production line-based reconfigurable manufacturing systems[END_REF]. Another reason behind this is an explosion of enabling information technologies, among which wireless technology such as radio-frequency identification (RFID) is a prominent example, ensuring state coherence between the physical and information flows all through the product life cycle. For example, in this special issue, Parlikad and McFarlane investigate the role of this product information in end-of-life decision making. This rationale then raises the possibility of a hierarchical, integrated vision of enterprise-wide control for a more interoperable and intelligent system by postulating the customized product as the 'controller' of the manufacturing enterprise's resources (Fig. 3). Fig. 3: Product-driven manufacturing enterprise-wide control Manufacturing execution is a complex task because of the nonlinear nature of the underlying production system, the uncertainties stemming from the production processes and the environment, and the combinatorial growth of the decision space. Schedules and plans, originating from higher levels in a manufacturing organization, can become ineffectual within minutes on a factory floor. Manufacturing is a very dynamic environment, and handling changes and disturbances is high on its list of research challenges. Moreover, the range of existing manufacturing system types and the performance issues therein, as well as the different kinds of equipment and processes, is very wide. This heterogeneity is challenging as well. To cope with these challenges, future manufacturing execution system designs must apply the most fundamental and recent insights in self-organizing systems, a topic that is being intensely investigated by the multiagent systems community today (Di Marzo et al., 2004). To design such self-organizing systems (Table 2, level 5), it is also essential to apply insights from fundamental research (Waldrop, 1992; Valckenaers, in Morel and Grabot, 2003) and to define the related modeling framework, to obtain the required system features.
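The product-driven idea of Fig. 3, with the customized product acting as the "controller" of the enterprise's resources, can be sketched as follows. The RFID identifiers, resource capabilities and the naive first-capable-resource policy are assumptions for illustration only, not a description of any deployed system.

```python
class Resource:
    def __init__(self, name, capabilities):
        self.name, self.capabilities = name, set(capabilities)
    def can_do(self, operation):
        return operation in self.capabilities

class ProductInstance:
    """A customized product identified by its RFID tag; it carries its own
    routing and drives the allocation of the enterprise's resources."""
    def __init__(self, rfid, routing):
        self.rfid, self.routing, self.history = rfid, list(routing), []

    def drive(self, resources):
        for operation in self.routing:
            candidates = [r for r in resources if r.can_do(operation)]
            if not candidates:
                raise RuntimeError(f"no resource can perform '{operation}'")
            chosen = candidates[0]      # simplest policy: first capable resource
            self.history.append((operation, chosen.name))
        return self.history

if __name__ == "__main__":
    shop = [Resource("Mill-1", {"milling"}),
            Resource("Lathe-1", {"turning"}),
            Resource("Cell-A", {"assembly", "inspection"})]
    product = ProductInstance("RFID-7F3A",
                              ["turning", "milling", "assembly", "inspection"])
    for step in product.drive(shop):
        print(step)
```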
Important expected progress in the domain is the emergence of manufacturing execution systems that are able to forecast the emerging state of the underlying manufacturing system while preserving the level of decoupling that has made older multiagent manufacturing execution systems robust and configurable [START_REF] Valckenaers | Ant colony engineering in coordination and control: how to engineer a short-term forecasting system[END_REF]. These recent and ongoing developments finally promise to deliver the best of both worlds: the planning ability of centralized older solutions and the ability to cope with the real-factory dynamics of the self-organizing multiagent systems. In addition, enabling technologies are bringing the above research results closer to actual deployment. Tracking technologies such as RFID provide the eyes for the manufacturing execution system. Omnipresent networking and web technologies provide communication and actuation. Modern PLC and industrial PC designs support the deployment of multiagent systems developed in higher-level programming languages. Moreover, the customer calls for traceability as a basic product attribute. Products without a production history are becoming virtually worthless. Open research issues remain, however. First, the cooperation between high-level planners and schedulers and the manufacturing execution system is virtually unexplored. Secondly, scaling the MES technology to multisite manufacturing coordination and control is in only the initial stages of research. Furthermore, the development of a comprehensive methodology and theory for design, implementation and deployment is in its infancy. Overall, the future holds a multitude of challenging research activities in this domain. 4 Dependable Manufacturing Systems Control There is a growing demand for formalized methods in industrial automation engineering for dependability issues, to control the increasing complexity of software-intensive applications (Fig. 4) and their related ease-of-use techniques [START_REF] Polzer | Ease of use in engineeringavailability and safety during runtime[END_REF]. Another issue is to comply with fail-safety legacy certification [START_REF] Moik | Engineering-related formal method for the development of safe industrial automation systems[END_REF], as safety for people and for industrial investments has become a key factor because of internationally accepted rules. Johnson addresses in this special issue the role of formal methods in improving automation software dependability. He points out the need for verification techniques to check the real software-hardware value-creation chain in industrial automation systems when addressing high levels of the organization (Table 2, levels 3 to 5), so that software dependability cannot affect the correctness of the control design and the reliability of the respective controllers in operation. Fig. 4: IMS-OOONEIDA (www.oooneida.info) R&D framework 4.1 Current key problems A first rationale issue should be to control information and its related communication technology better, as they are problematic in manufacturing plant-wide automation, to prevent dependability concerns in the near future. The increasing use of networked control systems within factories and enterprises can increase or decrease systems dependability, depending on how the networks have been designed and set up.
Ethernet-TCP/IP (Transmission Control Protocol/Internet Protocol)-based networked control systems, for instance, ease the access to process data and hence enable new monitoring, diagnosis and maintenance functionalities. However, a question arises immediately: is the traffic increase coming from these new functionalities compliant with the reactivity constraints required for the application? If not, how can we route this new traffic? Moreover, networked control systems impact security by providing potential means to disturb or to damage systems. Another current trend is the growing importance of safety- and dependability-related standards when designing industrial controllers. These standards may be domain dependent (such as specific standards for railway transport and power plants) or may cover a wider scope, like the IEC 61508 standard (functional safety of E/E/PE safety-related systems), which introduces a safety life-cycle model and the concept of safety integrity level (SIL). These industrial automation standards recommend the use of formal methods for a priori proving high levels of SIL at early steps of system requirements, but without defining how they can be applied. Finally, dependability becomes a major concern even for managers, because current economic constraints ask for increasing availability while the demand from society to control technological risks better requires accurate safety analysis. As managers focus continually on cost control and often claim that dependability improvement leads to too expensive systems, development of new design processes that address both cost and dependability concerns is therefore a challenging issue. The work presented in (Papadopoulos and Grante, in Kopacek et al., 2005) combines semiautomatic safety and reliability analysis with multicriteria optimization techniques. This will assist the gradual development of designs that can meet reliability and safety requirements within pragmatic cost and profit constraints, and is a good example of such a process. Combining a priori system definition approaches with a posteriori system implementation approaches when addressing formal proofs of system behavior remains an open problem that should be attacked to cope with the increasing vulnerability of nondeterministic automation technologies. 4.2 Recent major accomplishments and trends The main dependable manufacturing systems control concerns remain the following. • Dependability analysis must be carried out with a system engineering view. This amounts to saying that analysis should not focus only on process safety or control software dependability, but should be structured by the automation paradigm (Fusuoka et al., 1983; [START_REF] Pétin | Formal specification method for systems automation[END_REF]) as stressed for performance-oriented system automation (Fig. 5). Fig. 5: A closed-loop system model with system performance optimization rather than control performance optimization (Morel, in Erbe, 2003); the figure relates the system dynamics ẋ = f(x, u, k), the control law u(k) = γ(r, x, k) and the performance criterion J[x(0), …, x(K), u(0), …, u(K)]. • Dependability must be taken into account, starting with requirements expression and throughout the system life cycle. This can be achieved by using the semiformal models provided by UML (Unified Modeling Language) and by its Systems Modeling Language extension (SysML, www.sysml.org).
Starting with the requirements down to the implementation with integrated verification and test steps, the software-driven V model can be applied. This also implies bridging the gap between conventional dependability analysis methods (such as fault tree analysis and failure modes, effects and criticality analysis) and emerging formal methods for proof-based system engineering [START_REF] Morel | Proof-oriented fault-tolerant systems engineering: rationales, experiments and open issues[END_REF] as well as the gap between industrial practices for dependability assessment and/or improvement (such as simulation techniques and testing) and these formal methods. Other key issues should be noted: • the use of formal or semiformal analysis and synthesis methods for design, implementation and validation of system components and communication systems; • the use of formal or semiformal analysis and synthesis methods on industrial-size examples; • the impact of networked control systems on manufacturing systems dependability; • the improvement of fault forecasting methods thanks to formal temporal analysis (introduction of temporal logic in fault forecasting methods); • the improvement of design methods for fault-tolerant systems thanks to formal methods; • reconfigurable systems design and mode management; and • definition of metrics for dependability, safety and security. The classical methods for dependability improvement have been developed since the 1960s for analyzing physical systems (these methods deal with process dependability) and are based on designers' and users' skills and knowledge. Current manufacturing systems include many processors and software systems and different kinds of networks, and are strongly constrained by production objectives. Their increasing complexity leads us to look for new methods that rely on sound formalisms to enable automatic dependability analysis and to facilitate dependability improvement. Several research results recently issued by the communities for safety and reliability analysis and for discrete event systems (DES) control seem able to provide solutions to these industrial concerns. Fault forecasting using dynamic or temporal fault-tree analysis, dependability modeling using Bayesian networks, fault-tolerant systems design, formal verification of control software, timed and probabilistic model checking, and fault detection and diagnosis of DES, for instance, provide promising solutions for increasing systems dependability. 4.3 Forecasts Significant progress is to be expected in the formal combination of all these related techniques. However, we must take into account significant, though often antagonistic, concerns. As mentioned in Faure and Lesage (in Kopacek et al., 2001), these methods may be ranked, with a life-cycle criterion, into two categories: offline dependability and online dependability. The purpose of the offline dependability methods is to minimize the fault risk during design and implementation, i.e., before the system is used. On the other hand, the objective of online dependability methods is to ensure that an implemented and running system is dependable.
4.3.1 Offline dependability Formal verification and synthesis methods based on DES theory, like model-checking techniques [START_REF] Berard | Systems and software verification: model-checking techniques and tools[END_REF] and supervisory control theory [START_REF] Ramadge | Supervisory control of a class of discrete event processes[END_REF], seem able to provide promising solutions for dependability improvement. They should allow the a priori or a posteriori design of controllers that comply with the application requirements. Numerous research results based on these two approaches have been published recently. Nevertheless, the current results of these studies are mainly theoretical and have generally been tested on only small case studies (toy problems). There is therefore a need for new research aimed at making these results on formal methods for DES available to automation engineers. The key strengths and shortcomings of existing verification methods relative to railway signaling applications are noted in [START_REF] Johnson | Dependable software in railway signaling[END_REF]. [START_REF] Morel | Proof-oriented fault-tolerant systems engineering: rationales, experiments and open issues[END_REF] address the issue of bridging the gap between industrial practices and formal methods. Flordal et al. in this special issue apply the Ramadge-Wonham supervisory control theory to automatic model generation and PLC code implementation for the coordination of industrial robot cells. Such examples of industry-oriented research are also addressed by [START_REF] Roussel | Algebraic approach for dependable logic control systems design[END_REF], who develop a specific algebraic synthesis method for industrial controller design, or by [START_REF] Stursberg | Improving dependability of logic controllers by algorithmic verification[END_REF], who apply the timed model-checking tool UPPAAL to verify Sequential Function Chart (SFC) programs. Moreover, neither of these two approaches (verification and synthesis) is able to provide a global solution. Hence, another interesting prospect is coupling several formal methods to build toolboxes for dependable systems design and implementation. Using formal verification and formal synthesis techniques in a convenient way, for instance, would surely increase the potential of both approaches. [START_REF] Music | Combined synthesis/verification approach to programmable logic control of a production line[END_REF], for instance, describe a two-stage method for designing logic controllers. Supervisory control theory is used in the first stage to test the controllability of the specifications and to derive a finite automaton representation of the admissible behavior of the system; in the second stage, reachability analysis is performed on a Petri net model derived from this representation. All the approaches mentioned above are based upon deterministic modeling; unfortunately, addressing dependability problems requires consideration of nondeterministic behaviors. Hence, research dealing with probabilistic modeling of DES is required. Kwiatkowska et al. in this special issue address controller dependability analysis by probabilistic model checking for systems that exhibit stochastic behavior. Meanwhile, the usual fault-forecasting methods must also be improved to cope with the complexity of today's manufacturing systems.
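As a toy illustration of the offline verification idea, the sketch below exhaustively explores the reachable states of a small, hand-made finite-state model of a logic controller and its plant and reports whether a forbidden state can be reached. It is not one of the cited tools or case studies; industrial-size models require dedicated model checkers or supervisory control synthesis.

```python
from collections import deque

# Hypothetical finite-state model of a logic controller interacting with its
# plant: states are (valve, tank) tuples; events label the transitions.
TRANSITIONS = {
    ("closed", "empty"): {"open_valve":  ("open", "empty")},
    ("open",   "empty"): {"fill":        ("open", "full"),
                          "close_valve": ("closed", "empty")},
    ("open",   "full"):  {"close_valve": ("closed", "full")},
    ("closed", "full"):  {"drain":       ("closed", "empty")},
}
FORBIDDEN = ("open", "overflow")     # safety property: must never be reachable

def reachable(initial):
    """Breadth-first exploration of the reachable state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for _, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

if __name__ == "__main__":
    states = reachable(("closed", "empty"))
    verdict = "violated" if FORBIDDEN in states else "satisfied"
    print(f"{len(states)} reachable states; safety property {verdict}")
```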
[START_REF] Papadopoulos | Continuous assessment of designs and re-use in model-based safety analysis[END_REF] outline a technique that automates the construction of fault trees and FMECAs and explain how this technique can be repeatedly applied to functional and architectural models to enable continuous assessment of evolving designs; this technique is well suited to manufacturing systems based on standard interoperable components and allows reuse of safety analyses. An improvement of FTA, named deductive cause-consequence analysis (DCCA), is presented in [START_REF] Ortmeier | Deductive cause-consequence analysis[END_REF]. DCCA allows rigorous proof of whether a failure at the component level is the cause of a system failure. This enables designers to prevent flaws when designing fault trees. Finally, bridging the gap between fault forecasting methods and DES formal methods is a challenging issue. Industrial users will accept formal methods only if they are integrated within a computer-aided framework for dependability that should include and automate existing industrial techniques, such as FTA and FMECA. To reach this objective, [START_REF] Santiago | From fault-tree analysis to model-checking of controllers[END_REF], for instance, propose a method that can state the formal properties of a logic controller, a prerequisite for formal verification using model checking, from a fault-tree analysis, taking into account both the controlled process and the controller. 4.3.2 Online dependability Several studies of DES fault detection and diagnosis, reconfiguration techniques and fault-tolerant control, as well as of DES identification, have delivered promising results that look useful for dependability improvement when operating a system. [START_REF] Lafortune | Failure diagnosis of dynamic systems: an approach based on discrete event systems[END_REF] outline methodologies for fault detection and isolation based on the use of discrete-event models that have been successfully used in a variety of technological systems ranging from document processing systems to intelligent transportation systems. [START_REF] Genc | A distributed algorithm for on-line diagnosis of placebordered petri nets[END_REF] present a new distributed algorithm for online fault detection and isolation of discrete event systems modeled by Petri nets. Identification of DES can particularly be considered a prerequisite for fault detection and diagnosis. [START_REF] Klein | Fault detection of discrete event systems using an identification approach[END_REF], for instance, focus on the identification of large-scale discrete-event dynamic systems for fault detection. The properties of a model for fault detection are discussed, and metrics to evaluate the accuracy of the identified model are defined. An identification algorithm that allows setting the accuracy of the identified model is also presented. Finally, the diachronic integration between fault-forecasting methods, which provide formal fault models built up during design, and diagnosis is also a challenging issue. 5 Education and Training Nof (in Panetto et al., 2006) emphasizes that e-manufacturing is highly dependent on the efficiency of collaborative man-man and man-machine e-work.
In industry, e-manufacturing, and consequently collaborative e-work, addresses the need for agile workforces in competitive organizations, while in training bodies it addresses the difficulties faced by high-level trainers and trainees when learning about complex systems paradigms (Table 2). Any operational system emerges in real life from an ad hoc combination of formal, informal and intuitive issues, combining top-down and bottom-up approaches. The learning complexity of such holistic paradigms imposes on both research and academic training an appropriate project system that reproduces a realistic systems engineering context (Fig. 6). Fig. 6: Large-scale project engineering approaches [START_REF] Rumpe | A manager's view on large scale XP projects[END_REF] In one approach, we could apply a normative document-driven process for engineering a system, such as ISO/IEC 15288 (www.incose.org), or the model-driven system definition, development and deployment approaches (Table 2, levels 3 and 4) within a computer integrated manufacturing (CIM) context (www.aip-primeca.net). A complementary approach could be to adapt the extreme programming-like (XP) approach currently applied in agile software development, to facilitate face-to-face learner-to-learner and teacher-to-learner collaborative e-work, reproducing complex engineering situations with lower-level methods (Table 2, levels 4 and 5). Bruns and Erbe present in this special issue such a low-cost e-learning evolution of previous CIM training concepts. It allows trainers and trainees to enter a mixed situation with an idealized computer simulation to understand the stepwise abstraction and concretization of technical system complexity. The proposed learning environment merges onsite and remote components into a cooperative learning process to bridge reality and virtuality as addressed by the infotronics world. Conclusion Many other issues should be debated to anticipate the next automation of manufacturing systems besides those presented in this special issue [START_REF] Dolgui | Information control problems in manufacturing[END_REF]. One challenging approach could be to explore with more holism the appropriate balance between the increasing complexity of software-intensive systems (Fischer, 2006), ranging from embedded micro-systems to macro-systems of systems, and a more human-centered automation [START_REF] Mayer | Special section on Human-centered systems engineering[END_REF] for safety or eco-efficiency purposes. Another challenging approach could be to explore other modeling artifacts, such as the promising System of Systems [START_REF] Chen | Advancing Systems Engineering for System of Systems challenges[END_REF], to cope with the high degree of complexity required to deploy plant-wide information control systems in enterprises.
Fig. 2: Networked control systems tolerant to faults
Table 1: Enterprise Control System Integration in Manufacturing (B2M Systems Integration)
CRM - Customer relationship management
SSM - Sales services management
APS - Advanced planning system
SCM - Supply chain management
ERP - Enterprise resources planning
MES - Manufacturing execution system
SFC - Shop floor controls
MECHS - Mechatronic systems
MEMS - Micro mechanical systems
AUTO ID - Automatic identification
Table 2: Capability profile between system architecture feature and the related theoretical and technical modeling framework (Morel et al., 2003)
System Architecture Feature - Theoretical and Modeling Paradigms
5. Intelligent - Kenetics, MAS, HMS
4. Interoperable - Cognitics, Ontology, Object-Oriented
3. Integrated - Systemics, Systems Engineering
2. Hierarchical - System Theory, Automatic Control
1. Isolated - Empiricism, Ad hoc approaches
41,287
[ "833051" ]
[ "1001", "35640", "30464", "35641", "35642" ]
01474361
en
[ "shs" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01474361/file/Dashtipour-Vidaillet-Organization-2017.pdf
Parisa Dashtipour email: [email protected] Bénédicte Vidaillet Work as affective experience: The contribution of Christophe Dejours' 'psychodynamics of work' Keywords: Affect, Dejours, psychoanalysis, work, work collectives Psychoanalytic perspectives (such as the Kleinian/Bionian and Lacanian literature) have made significant contributions to the study of affect in organizations. While some have pointed out the affects involved in work tasks, most of this literature generally focuses on the affects linked to organizational life (such as learning, leadership, motivation, power, or change). The center of attention is not on affects associated with the work process itself. We draw from the French psychodynamic theory of Christophe Dejours-who is yet to be known in English language organization studies-to make the following contributions. First, we show the relationship between affect and working by discussing Dejours' notions of affective suffering, the real of work, the significance of the body, and 'ordinary sublimation'. Second, we advance critical research in organization studies by demonstrating the centrality of work in the affective life of the subject. Third, the article reinterprets Menzies' well-known hospital case study to illustrate how Dejours' theory extends existing psychoanalytical approaches, and especially to point to the significant role of the work collective in supporting workers to work well. We conclude by suggesting that if the centrality of work in the affective life of the subject is acknowledged, it follows that resistance strategies, and work collectives' struggle for emancipation, should focus on reclaiming work. Introduction Psychoanalytic approaches to work and organizations related to the Tavistock Institute have long studied affective dynamics in organizations [START_REF] Fotaki | What Can Psychoanalysis Offer Organization Studies Today? Taking Stock of Current Developments and Thinking about Future Directions[END_REF][START_REF] Gabriel | Emotion, Learning and Organizing[END_REF]. Drawing mainly from Klein and Bion, they have particularly focused on how anxiety and defenses against anxiety shape organizational behavior (e.g. [START_REF] Jaques | On the Dynamics of Social Structure: A Contribution to the Psychoanalytical Study of Social Phenomena Deriving from the Views of Melanie Klein[END_REF][START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF]. Recent publications, inspired by Lacanian psychoanalysis (e.g. [START_REF] Kenny | Someone Big and Important": Identification and Affect in an International Development Organization[END_REF][START_REF] Stavrakakis | Peripheral Vision: Subjectivity and the Organized Other: Between Symbolic Authority and Fantasmatic Enjoyment[END_REF][START_REF] Vidaillet | Working and Resisting When One's Workplace Is under Threat of Being Shut down: A Lacanian Perspective[END_REF] have also centered on affect, especially to explore the operation of power in organizations. Generally, the psychoanalytic literature has investigated and demonstrated the affects associated with organizational life (such as leadership, power, learning, and change) but the affects due to the activity of work itself are not adequately explored. 
The aim of this article is to address this gap by introducing the psychoanalytic and critical perspective of Christophe Dejours to organization studies.1 The originality of Dejours' approach is that it illuminates the affective, subjective, and embodied experience of working, focusing particularly on the affect of suffering-as a consequence of the encounter of the subject with what Dejours calls the 'real of work'-and the way in which this affect can-or cannot-be sublimated. This framework also articulates the role of the work organization and the significance of the work collective in creating and/or transforming such affect. Dejours' theory is much needed in psychoanalytically inspired research in organization studies because it points out the centrality of work in human life and, as we will see, this has political implications. The article thus develops a threefold contribution. First, we extend existing psychoanalytic perspectives by showing how the work process itself is affective. Second, we advance critical research in organization studies by demonstrating the centrality of work in the affective life of the subject. Third, we point to the significant role of the work collective in supporting workers to work properly and in overcoming affective suffering. While we recognize a variety of activities as work, this article focuses on work conducted within formal organizations. Dejours, a French psychoanalyst, psychiatrist, and occupational health physician, is associated with the 'psychodynamics of work' movement, an approach that has been developed in France over the last decades. This approach is not part of the Tavistock-inspired approach to organizations or the Lacanian organization studies literature and has been advanced separately from these. While also using Hegel, Henry, and Merleau-Ponty, Dejours' central reference is Freud. This is because Freudian metapsychology is the only intellectual and clinical tradition that explores the development of human subjectivity by focusing on the articulation of the body and the psyche and explaining the central function of sexuality (in its transformations) in this process (Dejours, 2009a). 'Sexuality and work have much closer relationships than usually thought. Subjectivity is structured by sexuality but it is also, whether we want it or not, totally involved in the relation to work' (Dejours, 2009a: 20-21). Dejours' (1980) theory is thus clearly psychoanalytic. His original field, 'the psychopathology of work', which studied illness from a medical perspective, did not investigate the psychic processes involved in 'non-ill' people. He therefore founded a new field called 'the psychodynamics of work'-with Freud as the key reference-which explores the unconscious dynamics implied in the working process. Dejours extends Freud by integrating the issue of 'work' into Freudian theory and assigning a central role to affectivity at work. To a large extent, Dejours' work is concerned with identifying the conditions that turn the experience of work either into one of pleasure, subjective expansion, and freedom or one of pathological suffering. The general viewpoint is that work is central to subjectivity and health, to the relationship between men and women, to the community, and finally to the theory of knowledge [START_REF] Dejours | Souffrance en France[END_REF] (Dejours, 2009a) [START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF]. [START_REF] Deranty | What Is Work?
Key Insights from the Psychodynamics of Work[END_REF], who has extensively interpreted Dejours' theory in English within philosophy, points out that it has become unpopular in social theory to claim the centrality of work to subjectivity; work is generally depicted as carrying nothing more than a utilitarian value. This 'thin' understanding of work is reflected in neoliberal economic thinking, which prioritizes rationality and the instrumental aim of work. Lacanian organizational research, outlined below, along with many other critically oriented organizational scholars (such as [START_REF] Fleming | Towards a Worker's Society? New Perspectives on Work and Emancipation[END_REF], also view work as a field of exploitation for instrumental reasons. Dejours' perspective concurs with this, but it would also show how work has important affective and subjective functions. We begin by reviewing psychoanalytically inspired studies on affect, work, and organizations (mainly the Kleinian/Bionian and Lacanian approaches) and argue that while they demonstrate the central role of affect in organizational life, their theorization of the affects related to work activity is limited. We then outline Dejours' theory of work, focusing on affective suffering, the significance of the body and 'ordinary sublimation' in work. Subsequently, we point out the importance of work organization in creating and transforming affects at work. Next, we reinterpret [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] well-regarded case study through the viewpoint of Dejours' theory. We have chosen [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] because it is a key resource in psychoanalytic approaches in organization studies. More importantly, the case helps to illustrate how Dejours' theory can extend existing psychoanalytic perspectives. We end the article by discussing the political implications of Dejours, highlighting in particular the ethical, creative, and emancipatory aspects of work, and the significant role of the work collective. Psychoanalytic perspectives on affect in organizations Psychoanalytic approaches associated with the Tavistock Institute have been significant in showing how affects shape organizational life [START_REF] Gabriel | Psychoanalytic Contributions to the Study of the Emotional Life of Organizations[END_REF][START_REF] Obholzer | The Unconscious at Work[END_REF]. This approach, drawing from Klein and Bion, often uses the term emotion, rather than affect (see [START_REF] Eisold | The Intolerance of Diversity in Psychoanalytic Institutes[END_REF][START_REF] Jaques | On the Dynamics of Social Structure: A Contribution to the Psychoanalytical Study of Social Phenomena Deriving from the Views of Melanie Klein[END_REF][START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF][START_REF] Obholzer | The Unconscious at Work[END_REF]. 
From a psychoanalytic perspective, the main difference between 'affect' and 'emotion' is that affect-as described by Freud's (1915a[START_REF] Freud | Repression[END_REF]Freud's ( , 1915b[START_REF] Freud | Repression[END_REF]) use of the German term Affekt-refers mainly to the translation and expression of the sexual drive in the psychic apparatus (the drive being translated in the psychic apparatus into affect or representation). This process denotes both the subjective and qualitative dimension of the affect; it can be pleasant or painful, precise or undefined, and so on-as well as its quantitative and energetic dimension; the affect being related to a specific quantity of libidinal energy that makes it more or less intense. Emotion, on the other hand, emphasizes the function of communication with the external world; our emotions are reflected and expressed by our body and can hence be interpreted by our environment. Klein studied the relationship and the mutual adjustment between the child and their mother, and she treated some elements of the intrapsychic world as external and vice versa via the concept of 'object'. The notion of 'emotion' thus enabled her and her successors like Bion to describe a quality of the links between the subject and their objects [START_REF] Widlöcher | De l'émotion primaire à l'affect différencié[END_REF]. The Tavistock perspective mainly uses the term emotion to refer primarily to unconscious (paranoid or depressive) anxiety, which is seen as an inevitable part of organizational and group life, but is often too painful to acknowledge [START_REF] Halton | Some Unconscious Aspects of Organizational Life: Contributions from Psychoanalysis[END_REF]. Such anxieties and the defenses against themsuch as denial, projections, and splitting-can prevent people from conducting their work properly and inhibit organizational performance. Jaques' (1953: 3) classic study of the Glacier Metal Company showed how changes in roles altered the protections staff had erected to defend themselves against psychotic anxiety. Menzies ' (1960) study of nurses in a teaching hospital has been the key resource for many subsequent researchers in the psychoanalytic approach to organizations (e.g. [START_REF] Czander | The Psychodynamics of Work and Organizations: Theory and Application[END_REF][START_REF] Diamond | The Symbiotic Lure: Organizations as Defective Containers[END_REF][START_REF] Eisold | The Intolerance of Diversity in Psychoanalytic Institutes[END_REF][START_REF] Hirschhorn | The Workplace within: Psychodynamics of Organizational Life[END_REF][START_REF] Hirschhorn | Dealing with the Anxiety of Working: Social Defenses as Coping Strategy[END_REF][START_REF] Hyde | Organizational Defences Revisited: Systems and Contexts[END_REF][START_REF] Obholzer | The Unconscious at Work[END_REF][START_REF] Willcocks | A Psychoanalytic Perspective on Organizational Change[END_REF]. [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] suggests that the nursing service has been built, over time, to provide a socially structured defense system that offers some protection against anxiety caused by the nursing task. 
Later in this article, we further discuss [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] in our re-analysis of the hospital case study. More recent literature has foregrounded the role of anxiety in organizational change [START_REF] Carr | Understanding Emotion and Emotionality in a Process of Change[END_REF][START_REF] Vince | Paradox, Defense and Attachment: Accessing and Working with Emotions and Relations Underlying Organizational Change[END_REF], organizational learning [START_REF] Bain | Social Defences against Organizational Learning[END_REF][START_REF] Brown | Organizational Identity and Learning: A Psychodynamic Perspective[END_REF][START_REF] Vince | The Impact of Emotion on Organizational Learning[END_REF], leadership [START_REF] Stein | The Othello Conundrum: The Inner Contagion of Leadership[END_REF], company takeovers [START_REF] Vince | Being Taken over: Managers' Emotions and Rationalizations during a Company Takeover[END_REF], and public health policies [START_REF] Fotaki | Choice Is Yours: A Psychodynamic Exploration of Health Policymaking and Its Consequences for the English National Health Service[END_REF][START_REF] Fotaki | Organizational Blind Spots: Splitting, Blame and Idealization in the National Health Service[END_REF]. Most of these approaches focus on emotions (predominantly anxiety) related to organizational or group life generally, and some show the importance of the containment of anxiety in organizations [START_REF] Gilmore | Anxiety and Experienced-based Learning in a Professional Standards Context[END_REF]. Following [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF], the anxiety produced by the work task has also become a central aspect of the social defense theory [START_REF] Fraher | A History of Group Study and Psychodynamic Organizations[END_REF][START_REF] Fraher | Systems Psychodynamics: The Formative Years (1895-1967)[END_REF][START_REF] French | Group Relations, Management and Organization[END_REF]. [START_REF] Obholzer | The Unconscious at Work[END_REF] devote many chapters in their edited book to the emotional difficulties in care work. While such researchers acknowledge the positive impact that 'good' work performance can have on the health and maturity of workers (see, for example, Menzies Lyth, 1991: 375, 377), the focus of attention is anxiety. Largely, such studies center more on the defenses that workers establish to cope with difficult emotions produced by the work task and less on the relationship between affect and the work activity. [START_REF] Voronov | Integrating Emotions into the Analysis of Institutional Work[END_REF] adopt a more eclectic psychoanalytic approach to explore affect in institutional work, suggesting that in order for institutions to reproduce themselves, people need to invest affectively in their work. Again, however, affect largely refers to the affective investment in practices-such as engaging in expected behavior or enthusiastically carrying out roles-that maintain or change a given institutional establishment. In recent years, some have drawn on Lacanian psychoanalysis to elaborate on the relation between affect, power, and dominant social organizations. 
It is affect, [START_REF] Stavrakakis | Peripheral Vision: Subjectivity and the Organized Other: Between Symbolic Authority and Fantasmatic Enjoyment[END_REF] argues, 'that binds subjects to the conditions of their symbolic subordination' (p. 1053). Similarly, [START_REF] Kenny | Someone Big and Important": Identification and Affect in an International Development Organization[END_REF] in her study on identity in a non-profit organization investigates the significance of affect in the subject's relation to power. In contrast to the perspectives inspired by the Tavistock Institute (such as [START_REF] Menzies Lyth | Changing Organizations and Individuals: Psychoanalytic Insights for Improving Organizational Health[END_REF] and followers), Lacanian scholars hold a pessimistic (or ambiguous) position with regards to the role of work in the health of subjects. These researchers focus rather on ideological and fantasmatic discourses on work (see in particular the chapters in the edited book by Cederström and Hoedemaekers, 2010).2 Affective discourses of boundaryless careers, creativity, personal development, self-fulfillment, and freedom subtly control workers, especially when such discourses match objectives of production and efficiency [START_REF] Bloom | Work as the Contemporary Limit of Life: Capitalism, the Death Drive, and the Lethal Fantasy of Work-life Balance[END_REF][START_REF] Bloom | The Sky's the Limit": Fantasy in the Age of Market Rationality[END_REF]. [START_REF] Ekman | Fantasies about Work as Limitless Potential-How Managers and Employees Seduce Each Other through Dynamics of Mutual Recognition[END_REF] understands passion and emotional devotion to work as an ideological fantasy, which combines 'free-market ideals about limitless financial expansion on the one hand and existential ideals about limitless self-realization on the other' (p. 20). Lacanian researchers often demonstrate how discourses that present the work organization and work as the route to freedom disguise work intensification and the 'un-free' nature of workplaces [START_REF] Fleming | You Are Where You Are Not: Lacan and Ideology in Contemporary Workplaces[END_REF][START_REF] Spicer | For the Love of the Organization[END_REF]. Affect is studied in relation to ideological function of fantasies about work and how they shape the affective meaning subjects attach to work, rather than in relation to the actual conduct of work, which refers to the affective experiences of the worker while working. The concept of jouissance [START_REF] Bloom | Work as the Contemporary Limit of Life: Capitalism, the Death Drive, and the Lethal Fantasy of Work-life Balance[END_REF][START_REF] Bloom | The Sky's the Limit": Fantasy in the Age of Market Rationality[END_REF][START_REF] Cremin | Never Employable Enough: The (Im)possibility of Satisfying the Boss's Desire[END_REF] is sometimes used to explore affective investments in work. According to [START_REF] Lacan | Ecrit: A Selection[END_REF], the emergence of the subject in the symbolic order, specifically in language, presupposes that something is lost forever: a pre-symbolic (and fantasmatic) enjoyment the subject will then endlessly try to recover. Jouissance is what comes to substitute for this irremediable loss. It differs from notions of pleasure or satisfaction because, by definition, total enjoyment is impossible. 
Lacanian researchers describe how work in contemporary capitalism includes a fantasy that full enjoyment-through work-would be possible, a promise supposed to stimulate the involvement of workers. They also expose the subtle forms of power that are exercised, less by traditional authority than by an imperative to enjoy (an imperative of jouissance) and that require the complicity of the subject. In the organizational context, this can be translated into an imperative to work; it 'can lead individuals to reinvest in work as the foremost priority in their life' (Bloom, 2015: 11). [START_REF] Bicknell | Enjoy Your Stress: Using Lacan to Enrich Transactional Models of Stress[END_REF], for example, point out that strain and stress at work may be experienced as painful, but workers can paradoxically enjoy stress (as a form of jouissance) as they respond to the desire of the Other. [START_REF] Contu | Studying Practice: Situating Talking about Machines[END_REF] employ Lacan in their reanalysis of [START_REF] Orr | Talking about Machines: An Ethnography of a Modern Job[END_REF] study on the work practices of copier technicians. Contu and Willmott argue that the technicians improvisational practices investigated by Orr-despite bending the dominant bureaucratic rules of the organization-are shaped by an enjoyable fantasmatic frame that ultimately serves to reproduce the fiction of liberal freedom and the bottom line of the company. Similarly, [START_REF] Kosmala | The Ambivalence of Professional Identity: On Cynicism and Jouissance in Audit Firms[END_REF] explore how playing with the rules provided the auditors in their study with a sense of jouissance. Such transgressions enabled auditors to conduct their work properly and therefore made them compliant. Much of the literature inspired by Lacan does not view the bending of rules in the work process as necessary improvisations in formal, organized work. In much of the Lacanian literature, because the focus is on the relationship between domination and subjectivity in the workplace, the work activity is not theorized as necessary for the emancipation of workers. We illustrate in the following how Dejours' theory is significant because it can help to push further the psychoanalytic perspectives outlined above, by focusing on the affects associated exclusively with the working activity. Working as answering to 'the real' Dejours draws from his decades-long clinical experience with individuals who suffer from workrelated distress and his role as a researcher and consultant in organizations, to generate a theory that explicates the relationship between the subject, work, the material, the social, and the political, and that focuses on the ways in which work has an impact on subjectivity and human life. Dejours' approach is clearly Freudian; he uses the Freudian term 'affect', rather than emotion, because, as we will see, this concept, which denotes the link between the body and the psyche, takes into account the sexual drive and its transformation through the working process. Dejours considers that the Freudian metapsychology did not give enough importance to work; his ambition is hence to complement psychoanalysis by understanding the specific role of work in subjective construction and the connection to the sexual drive. Dejours' theory is really centered on what working does psychically to the subject, how it affects him or her. 
For Dejours, some level of suffering is inevitable in all types of work (even though, admittedly, some categories of work are more painful than others). Nevertheless, and fundamentally, work can contribute to subjective and social enrichment. The underlying assumption is that human beings generally want to work well, and they gain satisfaction when given the opportunity to do so [START_REF] Dejours | Travail: usure mentale-Essai de psychopathologie du travail[END_REF][START_REF] Dejours | Souffrance en France[END_REF]. [START_REF] Dejours | Subjectivity, Work and Action[END_REF] highlights the working process and the subjective investment required to complete a task: Work is what is implied, in human terms, by the fact of working: gestures, know-how, the involvement of the body and the intelligence, the ability to analyze, interpret, and react to situations. It is the power to feel, to think, and to invent. In other words, for the clinician, work is not above all the wage relation or employment but 'working', which is to say, the way the personality is involved in confronting a task that is subject to constraints (material and social). (p. 72) This theory directs attention toward the objective world that poses a challenge to the subject and limits action. The planned organization of work-prescriptions, guidelines, or instructions-is never the same as the actual reality of the concrete work activity. For Dejours, to work is, first, to experience the 'real', which is not the Lacanian real; Dejours' real does not refer to a register within subjectivity. Rather, it points to the objective aspect in work that obstructs the work process. This may include fatigue, insufficient skills/experience, contradictory or excessive organizational rules or instructions, or the occurrence of unexpected events (e.g. breakdowns of machines, tools, materials and systems, or disruptions that arise due to other colleagues, bosses, or subordinates). The real implies 'the experience of the world's resistance' (Dejours, 2009b: 21). As a consequence, for [START_REF] Dejours | L'évaluation à l'épreuve du réel[END_REF], working consists [for the subject] in bridging the gap between the prescriptive and the real. But what has to be done to bridge this gap cannot be planned in advance. The way to go from the prescribed to the real must always be invented or discovered by the working subject. Hence, for the clinician, work is defined as what the subject must add to the prescriptions to reach the objectives that are assigned to him. (p. 14) In order to conquer the resistance of the world, the subject needs to apply effort: to mobilize intellect and affect and 'give' himself or herself to the task. Work, therefore, consists of three dimensions: the social dimension, which is essentially the formal organizational dimension, including instructions and prescriptions, but also social relations in 'a human world characterized by relations of inequality, power and domination' (Dejours, 2009b: 33); the objective dimension, which manifests itself as resistance of the real; and the subjective dimension, which refers to the affective experiences of the worker at work. Pathos as the first affect at work Dejours points out that the real of work is experienced as a failure-something does not workwhich creates an unpleasant 'feeling of helplessness, even of annoyance, anger, or also disappointment or discouragement. 
The real makes itself known to the subject always through a bad surprise effect, that is on an affective mode' (Dejours, 2009b: 21, emphasis in original). Hence, this confrontation with the real involves an 'affective suffering' [START_REF] Dejours | Travail: usure mentale-Essai de psychopathologie du travail[END_REF][START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | L'évaluation à l'épreuve du réel[END_REF](Dejours, , 2009a) ) engendered by the 'doing' of work. For [START_REF] Dejours | La clinique du travail entre vulnérabilité et domination[END_REF], subjects are essentially 'vulnerable, prone to psychic conflict and anxiety, [they] have to constantly fight against the risk of psychopathological decomposition' (p. 144). Suffering has two related meanings in [START_REF] Dejours | Souffrance en France[END_REF]Dejours ( , 2009aDejours ( , 2015a)). First, it refers to pathos, the capacity of the subject to be affected by the world and experiencing it in his or her body: 'There is no suffering without a body that can feel' (Dejours, 2009b: 23). Second, suffering implies pain; the fear of not being able to cope. In such circumstances, suffering can become pathological and seriously damage health [START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | La sublimation entre clinique du travail et psychanalyse[END_REF](Dejours, , 2015a)). Although the experience of suffering is inevitable, hope means that one has adequate resources to handle it. Health depends on believing or 'sensing in one's bones so to speak' (Deranty, 2008: 449) that one will be able to cope with one's vulnerable existence. But 'the affective suffering, totally passive, that results from the encounter with the real, as it emphasizes a breakdown or interruption of action, is not the endpoint or the final outcome of the process that relates subjectivity to work. Suffering is also a point of departure' (Dejours, 2009b: 22) because it will set the subject, his or her intelligence, and body in motion. It will be a point of departure for transformation, empowerment, and the overcoming of the initial pathos, which from this viewpoint refers specifically to an affect that is related to an intense feeling of being passive (cf. [START_REF] Gagliardi | The Collective Repression of 'Pathos' in Organization Studies[END_REF]. 'Suffering, as absolute affectivity, is at the origin of this intelligence that goes to explore the world in order to feel, transform, and expand itself' (Dejours, 2009b: 22). Thus, suffering leads to the deployment of one's inventive 'practical intelligence'. Work can, therefore, lead to the expansion of new subjective powers. This highlights the transformative potential of work; when affective suffering can be transformed into pleasure and empowerment, working becomes an emancipatory experience that sustains health. Ordinary sublimation Freud (1930/2002) states that working can be for humans a very efficient way of sublimation because it enables them to transcend the discontents of civilization, to inscribe themselves in the community and to contribute to its development. But Freud refers here more to the Great Work of artists or researchers than to ordinary work, which he believes is avoided and hated by most people and conducted merely to earn a living. According to [START_REF] Dejours | La clinique du travail entre vulnérabilité et domination[END_REF], the process of sublimation occurs also in ordinary work in the form of 'ordinary sublimation' (p. 
137), when the worker uses his or her body, intelligence, and subjectivity to overcome the difficulties arising from the occurrence of the real. Ordinary sublimation also indicates how work and sexuality are linked for Dejours (2009a): it is the sexual drive that is at the origin of the desire to move, to act. The drive is at the frontier between the body and the psyche and closely associated with affect (because the drive cannot be 'directly' visible: affect is a translation of the drive into a feeling). The drive has to renounce its sexual component to be transformed into the involvement of the worker in the process of answering to the real. And the affect of suffering (as pathos) refers to the stopping of the movement of the drive when it is suddenly interrupted. For Dejours' (2009a), the working process enables the drive to be transformed and sublimated. While working with tools and technologies, and deploying the body and thought to 'work on' something, the subject is also conducting a kind of 'psychic work' on the drive. However, when ordinary sublimation is not possible (for reasons we will consider below), the passive suffering will develop into what [START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | La clinique du travail entre vulnérabilité et domination[END_REF]Dejours ( , 2015a) ) calls 'pathological suffering', thus creating illness, depression, and pain. The body: central in the affect of suffering at work and its transformation In Dejours' theory, the body plays a central role. Experience in the world entails sensing the restrictions posed by one's body. Work plays such an important role in subjectivity because it is the foremost activity in which the subject is affected by the world and experiences the limits of his or her body [START_REF] Dejours | Souffrance en France[END_REF](Dejours, , 2009a[START_REF] Dejours | La clinique du travail entre vulnérabilité et domination[END_REF]. The body at hand here is the subjective body of psychoanalysis: the erogenous body that constitutes itself out of the biological body, but it is also the lived body, the body that experiences affectivity, love, excitement, sex, helplessness, the body that appropriates the world [START_REF] Henry | La Barbarie[END_REF]. First, working presupposes an intimate familiarization with the reality of work, via an obstinate, bodily confrontation with the obstructing materiality defining the reality of the task at hand: with the tools, the technical objects and rules, but also the inter-personal condition framing the task (with the clients, the other colleagues, the hierarchy). (Dejours and Deranty, 2010: 171) Realizing a task means physically 'touching' the world, getting to know it and appropriating it in the body [START_REF] Dejours | Travail: usure mentale-Essai de psychopathologie du travail[END_REF](Dejours, , 2009a)). Dejours refers to 'embodied intelligence', highlighting the inextricable link between the cognitive and the corporeal faculties involved in the working process (Deranty, 2010: 201). Second, the breakdown of action, the disruption of the way things go, consequences of the occurrence of the real, are experienced in the first place in the body; bodily movement is interrupted. The impulse to act is stopped. Third, the affect of suffering that results from this interruption, and translates the powerlessness of the subject, is also felt in the body. Finally, the ability to transform this initial pathos into empowerment requires the involvement of the body. 
This centrality of the body in Dejours' theory also explains why the success or the failure of the sublimation of suffering at work ends up either in good health or in illness; at the end, it is the body that is most affected by this process. The work organization: transforming pathic suffering Dejours (2015a) indicates the central role of the work organization in the sublimation of suffering: Depending on the characteristics of the work organization *…+ suffering can in some cases lead to illness, but in other cases it can be transformed into pleasure and become a core element in the construction of mental health. (p. 9) The capacity of the work organization to produce cooperation, instead of coordination, is a decisive factor here. Working effectively implies changing the prescriptions. Workers answer to prescribed coordination by engaging in effective cooperation. While coordination implies a system of domination that artificially imposes how people should relate through their tasks, cooperation implies a 'deontic activity': a collective activity of producing 'work rules' and agreements between workers that enables them to answer to the real of work and most of the time contrasts with the formal rules and prescriptions implied by coordination. By 'deontic activity', a term used by Dejours himself, 'is meant the activity of making rules for work, in order to make work work' (Dejours and Deranty, 2010: 175). In this process, the role of peers, the 'work collective', is essential; it is a place where agreements and compromises between workers concerning the way to operate are found, where priorities are established (because workers cannot answer to all the prescribed rules and have to choose what is the most important; they base their choice on a common sense of their professional identity and mission), where the trickery and the know-how of workers are confronted, discussed, elaborated, tested, and transmitted through 'work rules'. The 'work collective' is also a main source of support for workers; in order to work properly, people often have to choose between contradictory rules (unless they cannot work), do things that are not officially authorized, and sometimes even cheat a little-not necessarily because they enjoy transgressing rules but because they need to slightly change the prescriptions in order to work properly. The work collective is a place where these choices are made collectively, based on professional reasons, and where workers know they will find support when they engage themselves at work; without the support of work collectives, people struggle to deal with the real of work. A worker's choice, for instance, not to apply a prescribed rule because it is not compatible with another rule, can have dramatic consequences in case of problem or failure; it may be interpreted as individual irresponsibility or pure transgression. If, however, they can refer to the rules decided collectively, their choice can be justified and placed within a professional frame and identity. An example with train drivers can help to illustrate this point [START_REF] Clot | Le Travail à coeur[END_REF][START_REF] Fernandez | Nous, conducteurs de train[END_REF]. Rule number 1, the absolute priority in their work collective (for all drivers), as defined by their identity as drivers, is to maintain passenger safety. All drivers will refer to this rule when they refuse to use an insufficiently repaired material that may threaten safety. 
It will contradict the rule of other departments: the maintenance department that has to be efficient and spend the minimum amount of time on repairs, and the commercial department that prioritizes on-time trains and wishes to prevent any changes in materials leading to delays and unsatisfied passengers. In order to choose not to drive a train the drivers judge inappropriate for safety, they need to be assured of the support of the group of drivers: collective support provides acknowledgment that they have to fight for a good reason and their refusal and the tensions with the other departments will be justified by a collective conception of how to do a good job and by the feeling to defend this conception they are proud of. Otherwise, they will either choose to fight against other departments alone, or they will decide to use the faulty material, knowing that they are doing a bad job and failing in their mission of ensuring safety. In both cases, they are at risk of suffering pathologically: in the first, because they will find no support from other drivers and have to fight alone the intense pressures from the other departments; in the second, because they will be anxious about the risk of safety failure (and feel guilty for taking such a risk), and because they know that they have not done what is considered a 'good job' as defined by their profession. Dejours (2009aDejours ( , 2015a) ) refers to this pathological suffering as 'ethical suffering' and points out its increase in contemporary organizations; a consequence of a work process dominated by top-down, standardized rules that prioritize financial incentives and short-term profitability. The work collective is therefore very important in enabling workers to maintain the ethical dimension of their job by assessing what it is to do 'a good job'. It also entails a 'deontic' aspect, as it is a place where rules and norms are produced related to this ethical dimension. The work collective is also significant in mitigating the suffering experienced at work and turning it into a sublimating experience through 'peer recognition'. According to [START_REF] Dejours | Travail: usure mentale-Essai de psychopathologie du travail[END_REF][START_REF] Dejours | L'évaluation à l'épreuve du réel[END_REF]Dejours ( , 2009a[START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF], two kinds of recognition derived from work are important for workers: (a) recognition of the utility of what they do (economic, social, or technical utility), a contribution that can be judged by the society, the hierarchy, the clients, and so on; and (b) recognition of the 'beauty' of what is done: recognition that the actual work done respects the 'state of the art' and produces a qualitative result. The latter form of recognition is 'based on the quality of the relationship that the worker has maintained with the "real"' (Dejours and Deranty, 2010: 172). Because peers are familiar with the effort required to overcome the real of the work (they themselves face the same difficulties), they are in the best position to grant recognition based on 'doing'. Recognition at work compensates for the renunciation of the sex drive involved in ordinary sublimation (which implies a loss for the subject), and acknowledges the contribution of the subject in the human community. Recognition 'grants meaning to the suffering in work' (Dejours, 2012: 228). The work collective is thus of absolute significance in Dejours' theory. 
This is not to idealize the work collective but to point out its importance in (a) supporting the individual worker in overcoming the real of work and thus sublimating the drive, (b) taking care of workers, and (c) granting a valuable form of recognition. Implications for organization studies are the following. The importance of the work collective is often described in terms of the emotional support it provides (see, for example, [START_REF] Lewis | Suppression or Expression: An Exploration of Emotion Management in a Special Care Baby Unit[END_REF]. Dejours' perspective suggests that this is secondary. The primary focus should be on the capacity of the work collective to support workers to work properly via cooperation. As such, organization studies should explore the extent to which work collectives play this role-or not-in organizations. Returning to a classic case study Following other scholars in organization studies who have re-analyzed well-known cases from novel theoretical perspectives [START_REF] Contu | Studying Practice: Situating Talking about Machines[END_REF][START_REF] Lok | Identities and Identifications in Organizations: Dynamics of Antipathy, Deadlock, and Alliance[END_REF], we show in this section how Isabel Menzies' (1960) hospital case study can be reinterpreted from the point of view of Dejours. We have chosen this study, first, because it has made significant contributions to the psychoanalytic understanding of organizational life and, second, because it demonstrates how a dysfunctional work organization obstructs the work process and produces affective suffering. It is thus useful in illustrating aspects of Dejours' theory. Despite roots in different psychoanalytic traditions, there are many similarities between Menzies and Dejours; both discuss social defenses as a response to emotions (affects in Dejours' case) produced by the work task, and both are interested in practical solutions to organizational problems. Both perspectives are therefore normative; they are underpinned by an idea of what it means to 'work well' and concerned with installing organizational health. Reinterpreting Menzies using concepts from Dejours helps to explore some similarities, but also to research differences between the two, and highlight how Dejours' perspective can bring out certain issues that are implied, but not made the center of attention by Menzies. In short, Dejours would emphasize much more than Menzies does, the symptoms and problems in the hospital as characteristics of pathological suffering directly related to work: high drop-out rates from the training program, high sickness rates, strong feelings of discontent among staff, withdrawal of duty, and avoidance of responsibility. For Menzies, anxiety is produced by the primary work task and exacerbated by social defenses. As we suggest below, a Dejoursian (2009a, 2015a) approach would acknowledge this, but would insist more on pathological and ethical suffering caused by the inability to work well due to a dysfunctional work organization and the absence of work collectives. Reinterpreting Menzies via Dejours Menzies (1960) studied a teaching hospital in London that was experiencing problems related to the allocation of nurses and a 'high level of tension, distress, and anxiety' among the student nurses (p. 97): Nurses are in constant contact with people who are physically ill or injured, often seriously. The recovery of patients is not certain and will not always be complete. 
Nursing patients who have incurable diseases is one of the nurse's most distressing tasks. Nurses are confronted with the threat and the reality of suffering and death as few lay people are. Their work involves carrying out tasks which, by ordinary standards, are distasteful, disgusting, and frightening. (pp. 97-98) These 'objective features of her work situation' generates in the nurse 'many of the feelings appropriate to *infantile+ phantasies' (Menzies, 1960: 98-99), and in particular, 'intense and unmanageable anxiety' (p. 100). From Dejours' perspective, while primitive phantasy is not insignificant (important, though, to bear in mind that Dejours is Freudian and not Kleinian), the focus is on the objective features of the work situation and the real of work: exposure to disease and death, tasks that arouse disgust in the nurse, the uncertainty of recovery and so on, are factors that 'resist' the efforts of nurses in conducting tasks according to prescribed rules. For Menzies, the nature of the task alone does not, however, explain the level of anxiety. What exacerbates the anxiety is the social defense system, 'which appear as elements in the structure, culture, and mode of functioning of the organization' (Menzies, 1960: 101). The social defense system includes splitting of the nurse-patient relationship, depersonalization, detachment, ritual task performance, checks and counter checks, and avoidance of responsibility-mechanisms orientated toward helping the nurse to avoid 'anxiety, guilt, doubt, and uncertainty' (Menzies, 1960: 109), but in fact generate a great deal of 'secondary anxiety' (p. 110), and a dysfunctional and 'rigid work organization' (p. 111). While not referencing Menzies, Dejours (1980, 2015b) also discusses collective defenses invoked in some professions as a response to high exposure to accident and risk. Like Menzies, he believes defenses are generally inappropriate and prevent workers from coping. The difference between Menzies and Dejours in this regard, is first, that Dejours considers much more explicitly those 'real' factors of the immediate work situation that obstructs the work process; the importance of working well is a much more central theme. Second, Dejours does not associate defenses with primitive anxiety. Contrary to Menzies and some other perspectives influenced by the Tavistock Institute, which tend to ignore power and the structural inequalities inside and outside organizations [START_REF] Kersten | Organizing for Powerlessness: A Critical Perspective on Psychodynamics and Dysfunctionality[END_REF], Dejours' model makes links between subjective experience at work and broader cultural and political factors including changes in modes of production [START_REF] Deranty | Work and the Experience of Domination in Contemporary Neoliberalism[END_REF]. For example, he emphasizes the relationship between managerialism and changes in work organizations (such as 'lean' production, project based production, the use of sales targets) and increased levels of suffering at work [START_REF] Dejours | Souffrance en France[END_REF](Dejours, , 2015a)). Such changes make it difficult for workers to cope as they increase the discrepancy between the prescribed and real work. Defenses, from Dejours' perspective, are thus a response to fear and risk exacerbated by specific types of work organizations. 
What Menzies identifies as social defenses would, therefore, from Dejours' viewpoint, be considered as characteristics of the work organization that appear as another aspect of the real for the worker: variation of the work-staff ratio and the number and type of patients, excessive movements of student nurses, ritual task performance are factors that oppose the effort of the nurse; they pose a limit to action. For example, 'the minutely prescribed task performance makes it difficult to adjust work-loads when necessary by postponing or omitting less urgent or important tasks' (Menzies, 1960: 110). Excessive standardization deprives the possibility for nurses to 'accommodate' the prescriptions: 'the nursing service is cumbersome and inflexible. It cannot easily adapt to short-or long-term changes in conditions' (Menzies, 1960: 110). This 'minimizes the exercise of discretion and judgement in the student nurse's organization of her tasks' and leads to underemployment (Menzies, 1960: 112). Furthermore, the splitting of the nurse-patient relationship produces too many movements of student nurses and deprives nurses of 'ordinary job satisfaction' that comes from using one's nursing skills' (Menzies, 1960: 113). While they are told to care for the patient as a whole person, 'the functioning of the nursing service makes it impossible' (Menzies, 1960: 113). For instance, nurses are instructed to wake patients to give them sleeping pills. As a consequence, 'nurses find the limitations on their performance very frustrating' (Menzies, 1960: 112) and they seem to have a constant sense of impending crises. They are haunted by fear of failing to carry out their duties adequately as pressure of work increases. Conversely, they rarely experience the satisfaction and lessening of anxiety that come from knowing they have the ability to carry out their work realistically and efficiently. (p. 110) This excerpt can be analyzed with Dejoursian concepts: the nurses feel incapable to answer to the real of work, to transform the interruption of their action related to the occurrence of the real into good work, and therefore suffer. Importantly, for Dejours, it is not the task in itself, the numerous surprises that happen, or the social defenses that create painful affects, but the fact of not being confident in one's own resources to deal with the real of work. Menzies (1960: 116) did discuss nurses' feelings of helplessness. The 'satisfaction' or 'lessening of anxiety' is, however, not just a by-product of good working; it is associated with the sublimation of the drive that is, according to Dejours, a very powerful process at work. While Menzies (1960: 116) acknowledges the importance of 'sublimatory activities in which infantile anxieties are re-worked in symbolic form and modified' (see also [START_REF] Hirschhorn | The Workplace within: Psychodynamics of Organizational Life[END_REF], Dejours emphasizes the relationship between ordinary sublimation at work and human emancipation. It is worth repeating that Dejours' theory is underpinned by the idea that humans experience a sense of embodied and ethical pleasure when they view themselves in the product of their work. Menzies (1960: 112) noticed that nurses in the hospital suffered immensely when not given the opportunity to observe the recovery of patients 'in a way that she can easily connect with her own efforts' or expressed 'guilt' when they practiced 'what they consider to be bad nursing'. 
She mentioned the importance of 'applying the principles of good nursing' instead of following prescriptions, insisting on the fact that nurses want to do a good job and have a strong professional sense of their mission, and she describes the painful consequences of being unable to respond to this (p. 116). Thus, Dejours does not contradict Menzies here, but he speaks of the sublimation of the sexual drive, rather than infantile anxiety, because the former involves the desire to move and act. This may be why he underlines much more the suffering of workers when not given the opportunity to work well; when what they produce does not conform to their ethical conception of what should be 'good work', because this obstructs the sublimation of the drive. In sum, the nurses in Menzies case study were prevented from ordinary sublimation because they could not cope with the real of work, and could not 'work well', experiencing ethical suffering as a result. This incapacity to cope was related to the absence of a work collective. When workers are separated from each other and assigned to a fragmented part of the work process on which they are evaluated, work collectives cannot define work rules and use 'work trickery'; they struggle to answer to the real of work. In Menzies' case study, there is evidence of the absence of work collective. For example, workers are isolated and constantly moved between wards. This has profound impacts on the quality of work and on the health of the nurses: Working-groups are characterized by great isolation of their members. Nurses frequently do not know what other members of their team are doing or even what their formal duties are; indeed, they often do not know whether other members of their team are on duty or not. They pursue their own tasks with minimal regard to colleagues. This leads to frequent difficulties between nurses. For example, one nurse, in carrying out her own tasks correctly by the prescription, may undo work done by another nurse also carrying out her tasks correctly by the prescription, because they do not plan their work together and co-ordinate it. (Menzies, 1960: 114) Menzies (1960) also explains that the idealization of the potential nursing recruit, and the belief that 'nurses are born not made', means that there is no supervision of student nurses and no small group teaching (p. 107). This prevents the development of a work collective that would transmit tacit knowledge and collective skills. Furthermore, and significantly, while Menzies underlines that 'gratitude' from the patients is very important for work satisfaction, Dejours would insist on recognition by colleagues. Gratitude from patients is recognition of the utility of the nurses' work, and although essential, the recognition from peers that a given job conforms to what is considered 'a job well done' according to the nursing profession is absolutely crucial in the sublimation of the drive and the affective suffering caused by the nurses' primary tasks. While [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] acknowledges the importance of 'work-teams', she tends to emphasize the significance of cohesive teams, rather than the centrality of the collective in the organization of the task (p. 114). 
[START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] noted that teams in the hospital are 'notably impermanent', making it 'difficult to weld together a strong, cohesive work-team' (p. 114). However, for her, an efficient team would function 'on the basis of real knowledge of the strengths and weaknesses of each member, her needs as well as her contribution, and [adapt] to the way of work and type of relationship each person prefers' (Menzies, 1960: 114). Dejours' approach would be less concerned with such psychologizing of work relationships, which could even be dangerous and lead to the marginalization of vulnerable workers. The function of the work collective is centered on work, on the way tasks should be done; it implies discussion of, and confrontation with, work processes, the possibility to reflect collectively on one's professional ethic, and the opportunity to engage in deontic activity. Dejours' perspective downplays interpersonal regulation and adaptation, and focuses on how to construct collective answers to the real of work. Of course, this process can generally lead to good interpersonal knowledge and to 'friendly relations with colleagues' (Menzies, 1960: 114), but this would be a consequence of a more fundamental process of reflecting on work; for [START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF] (Dejours, 2015a), it is work that creates the first and foremost link between colleagues and enables mutual help and support, not vice versa. This conception of work collectives has strong operational implications. First, Menzies (1960: 107) observed the lack of role definition, boundaries, and containment. While clearly defined roles are not unimportant, from Dejours' viewpoint they risk leading to a more prescribed organization, thus exacerbating the problems observed. The emphasis should rather be on the role of work collectives in defining work rules, supporting workers in cooperating efficiently, and enabling the sublimation of the drive. Second, [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] criticizes the impact of the unduly frequent moves that affected nurses who 'grieve and mourn over broken relationships with patients and other nurses' (p. 111). She suggests that people need emotional stability and advises working '[in advance] on the anticipated trauma of separation' (Menzies, 1960: 111) to alleviate its effects, thus again increasing the risk of rigidity. From the perspective of Dejours, what people need first is to belong to a work collective, which would help them to cope with frequent moves of colleagues and patients. Even if the situation required such moves, people would not automatically suffer pathologically if they had a sense of belonging to a work collective that ensured the job is done properly. Third, [START_REF] Menzies | A Case Study in the Functioning of Social Systems as a Defence against Anxiety: A Report on a Study of the Nursing Service in a General Hospital[END_REF] underlines that the 'diffusion of responsibility prevents adequate and specific concentration of authority for making and implementing decisions' (p. 110).
Dejours would not advise concentrating authority but rather elaborating work rules that would then allow nurses to make decisions autonomously and safely while working. Dejours is at pains to emphasize the importance of focusing on work tasks and the role of the work collective because of their impact on the affective life of the worker. Menzies' study was conducted in the 1950s. It is remarkable that levels of stress and anxiety similar to those observed in her hospital can still be found in organizations today. It is not surprising that, rather than pleasure and hope, Dejours observes fear and pathological suffering as the main affects in the contemporary workplace [START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF] (Dejours, 2015a). While Menzies and the Tavistock perspective provide useful insights into the causes and treatment of such suffering, Dejours' theory should also be considered a highly relevant response to the suffering that large numbers of workers experience. Menzies' (1960) recommendation for change in the hospital was a radical restructuring of the social defense system, such as removing the task-list system and replacing it with 'some form of patient assignment' (p. 119). From a Dejoursian perspective, such interventions would be fitting if they reduced the limits to nurses' actions, enabling them to deal with the real of work and thus to sublimate the drive through the development of strong work collectives. Discussion and conclusion This article has contributed to psychoanalytic approaches to affect and organizations in three ways. First, it has highlighted the affects associated with the work activity. While the Klein/Bion approach has foregrounded the operation of anxiety and its defenses in organizations and in the conduct of the task, by concentrating mainly on anxiety surrounding organizational life in general it offers only a limited theory of the affects related to the experience of work activity. The originality of Dejours' perspective is that it is, to our knowledge, the only one that extensively illustrates the affective and embodied experience of working, by demonstrating how the subject needs to answer to the real at work. Specifically, depending on the work organization, the subject at work may experience pleasure or pathological suffering. The implications of this are vast. It suggests that in order to improve organizational functioning and health (which, as with the Klein/Bion perspectives, is the aim of Dejours' theory), containing anxiety, removing defenses against anxiety, or altering roles and organizational cultures (see [START_REF] Menzies Lyth | Changing Organizations and Individuals: Psychoanalytic Insights for Improving Organizational Health[END_REF]), while significant to some extent in alleviating anxiety, are not sufficient. Rather, the organization of work needs to change to enable workers to work well, via a properly functioning work collective. Our re-reading of Menzies' (1960) study brings out the ethical suffering of nurses produced by the inability to work well (and to sublimate the drive) due to the lack of a work collective. Dejours' theory also provides a different view of, and in some ways extends, Lacanian approaches in organization studies which imply that affective investment in work may reproduce the dominant oppressive ideologies about work.
Dejours' perspective suggests that people may invest in their jobs because work is fundamental to human life, and, if the context allows, they may derive a sense of pleasure from work. This is not to deny the significance of the ideological context of work; on the contrary, it is to highlight the importance of exploring how broader political factors, such as changes in modes of production, affect the subjective and embodied experience of working, and not just what work 'means' to people and the extent to which they invest in it. Discourses of limitless potential, career development, and self-fulfillment are ideologies that primarily function to entice workers and intensify work, and as such, they may be effective precisely because work is central to subjectivity. Nevertheless, Dejours focuses on the 'actual conduct' of work, which refers to the subject's confrontation with the real while working and the affects involved in this process (either pathological suffering or pleasure). Ideological discourses of work may thus also be understood as factors that obstruct working well and hence generate suffering because they increase the burden of work. This does not mean that work is not significant for subjective and communal health. Therefore, our second contribution is to extend critical approaches in organization studies by demonstrating the centrality of work in the affective life of the subject. Our argument is that one does not need to be against work to be critical of work organizations. Indeed, Dejours' theory, despite acknowledging the significance of work, is extremely critical of neoliberal forms of work organization, which, he claims, lead to pathological suffering, mainly due to an ever-increasing gap between the prescribed organization and the real of work, and to the absence of well-functioning work collectives. We should bear in mind that Dejours is an occupational health physician: hence his political interest in improving health in the workplace. By focusing on the fantasies surrounding work, many Lacanian organizational researchers often present a negative, or an ambiguous, view of work. From our perspective, any critique of work organizations should be founded on the theory of the centrality of work in the affective life of the subject. Dejours' theoretical conception thus has strong political implications; 'the organization of work constitutes a political issue in itself' (Dejours, 2015a: 17). A Marxist influence on the way he approaches work, health, and subjectivity is clear in his discussion of emancipation and alienation, the latter resulting from the incapacity of workers to use their intelligence, knowledge, body, and capabilities to sublimate the suffering created by working [START_REF] Dejours | Travail: usure mentale-Essai de psychopathologie du travail[END_REF][START_REF] Dejours | Souffrance en France[END_REF][START_REF] Dejours | Aliénation et Clinique du travail[END_REF][START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF] (Dejours, 2015a). From this perspective, strategies of resistance that do not take into account the centrality of work in workers' emancipation process are inappropriate and lead to a deadlock. Until now, this aspect has not been considered by scholars in organization studies, and more specifically in critical management studies.
From the viewpoint of the latter, some resistance strategies at work have been criticized as 'decaffeinated resistance' [START_REF] Contu | Decaf Resistance: On Misbehavior, Cynicism, and Desire in Liberal Workplaces[END_REF], that is, as having the appearance of resistance while being totally devoid of any real subversive power. This critical stance reinterprets certain seemingly subversive behaviors, such as cynicism, parody, or humor [START_REF] Fleming | Working at a Cynical Distance: Implications for Power, Subjectivity and Resistance[END_REF], by showing how they can in fact help to stabilize practices of oppression and prevent any effective change [START_REF] Contu | Studying Practice: Situating Talking about Machines[END_REF]. This approach views other resistance strategies, such as work-to-rule or 'flannelling' (whereby workers excessively identify with orders and prescriptions), as much more effective because they have a devastating impact on the functioning of the work process [START_REF] Contu | Studying Practice: Situating Talking about Machines[END_REF][START_REF] Fleming | You Are Where You Are Not: Lacan and Ideology in Contemporary Workplaces[END_REF][START_REF] Fleming | Looking for the Good Soldier, Svejk: Alternative Modalities of Resistance in the Contemporary Workplace[END_REF]. However, from a Dejoursian standpoint, cynicism, parody, and humor, as well as work-to-rule (whether or not they are able to affect what is ultimately produced by the system), all suffer from the same weakness: they assume that workers could renounce the possibility of feeling alive at work through ordinary sublimation without long-term effects on their health. The consequence of Dejours' conception of work is that resistance must concentrate on combating work organizations and social conditions that prevent ordinary sublimation through work. Organized, collective forms of resistance are most appropriate. This brings us to the third contribution of our article, which is to highlight the role of the collective in organizations in supporting individuals in working properly (and hence in sublimating the drive), based on a professional notion of a 'job well done'. The implication of this for organization studies is that it invites researchers to explore the extent to which work collectives play this specific role in organizations. Dejours puts so much emphasis on work because of its ethical, creative, and emancipatory role. Emancipation necessarily implies that workers and those who represent them (unions, work councils, etc.) fight to conceive and define the organization of work. Dejours ([START_REF] Dejours | Travail vivant-Tome 2: Travail et émancipation[END_REF], 2015a) locates the battlefield at this very precise level and laments how it has paradoxically been neglected in political and trade-union struggles. Therefore, our conclusion is that because work is the path to emancipation and ethical living, the work collective has no choice but to reclaim it if it is to fight for workers' freedom. Notes 1. [START_REF] Dashtipour | Freedom through Work: The Psychosocial, Affect and Work[END_REF] and [START_REF] Guénin-Paracini | Fear and Risk in the Audit Process[END_REF] are, to our knowledge, the only publications that draw on Dejours in English-language organization and management studies. 2.
For a definition of, and further discussion on, this perspective on ideology, see, for example, [START_REF] Bloom | The Sky's the Limit": Fantasy in the Age of Market Rationality[END_REF], [START_REF] Ekman | Fantasies about Work as Limitless Potential-How Managers and Employees Seduce Each Other through Dynamics of Mutual Recognition[END_REF], and [START_REF] Glynos | Lacan and Political Subjectivity: Fantasy and Enjoyment in Psychoanalysis and Political Theory[END_REF].
68,944
[ "2944" ]
[ "114147", "57129" ]
00147449
en
[ "phys" ]
2024/03/04 23:41:46
2007
https://ens-lyon.hal.science/ensl-00147449/file/GaugingAffine7.pdf
Henning Samtleben email: [email protected] Martin Weidner email: [email protected] GAUGING HIDDEN SYMMETRIES IN TWO DIMENSIONS We initiate the systematic construction of gauged matter-coupled supergravity theories in two dimensions. Subgroups of the affine global symmetry groups of toroidally compactified supergravity can be gauged by coupling vector fields with minimal couplings and a particular topological term. The gauge groups typically include hidden symmetries that are not among the target-space isometries of the ungauged theory. The possible gaugings are described group-theoretically in terms of a constant embedding tensor subject to a number of constraints which parametrizes the different theories and entirely encodes the gauged Lagrangian. The prime example is the bosonic sector of the maximally supersymmetric theory whose ungauged form admits an affine e 9 global symmetry algebra. The various parameters (related to higher-dimensional p-form fluxes, geometric and non-geometric fluxes, etc.) which characterize the possible gaugings, combine into an embedding tensor transforming in the basic representation of e 9 . This yields an infinite-dimensional class of maximally supersymmetric theories in two dimensions. We work out and discuss several examples of higher-dimensional origin which can be systematically analyzed using the different gradings of e 9 . Introduction One of the most intriguing features of extended supergravity theories is the exceptional global symmetry structure they exhibit upon dimensional reduction [START_REF] Cremmer | The SO(8) supergravity[END_REF]. Elevendimensional supergravity when compactified on a d-torus T d gives rise to an (11-d)dimensional maximal supergravity with the exceptional global symmetry group E d(d) and Abelian gauge group U (1) q , where q is the dimension of some (typically irreducible) representation of E d(d) in which the vector fields transform. The only known supersymmetric deformations of these theories are the so-called gaugings in which a (typically non-Abelian) subgroup of E d(d) is promoted to a local gauge group by coupling its generators to a subset of the q vector fields. The resulting theories exhibit interesting properties such as mass-terms for the fermion fields and a scalar potential that provides masses for the scalar fields and may support de Sitter and Anti-de Sitter ground states of the theory [START_REF] De Wit | N = 8 supergravity[END_REF]. Recently, gauged supergravities have attracted particular interest in the context of non-geometric and flux compactifications [START_REF] Grana | Flux compactifications in string theory: A comprehensive review[END_REF] where they describe the resulting low-energy effective theories and in particular allow to compute the effective scalar potentials induced by particular flux configurations. A systematic approach to the construction of gauged supergravity theories has been set up with the group-theoretical framework of [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF]. Gaugings are defined by a constant embedding tensor that transforms in a particular representation of the global symmetry group E d(d) . It is subject to a number of constraints and entirely parametrizes the gauged Lagrangian. E.g. 
in the context of flux compactifications, all possible higher-dimensional (p-form, geometrical, and non-geometrical) flux components whose presence in the compactification induces a deformation of the low-dimensional theory can be identified among the components of the embedding tensor. Once the universal form of the gauged Lagrangian is known for generic embedding tensor, this reduces the construction of any particular example to a simple group-theoretical exercise. 1Moreover, since the embedding tensor combines the flux components of various higherdimensional origin into a single multiplet of the U-duality group E d(d) , this formulation allows to directly identify the transformation behavior of particular flux components under the action of the duality groups. In particular, this allows to straightforwardly extend the analysis of the effective theories beyond the region in which the parameters have a simple perturbative or geometric interpretation. Gaugings of two-dimensional supergravity (d = 9) have not been studied systematically so far. Yet, this case is particularly interesting, as the global symmetry algebra of the ungauged maximal theory is the infinite-dimensional ĝ = e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] , the affine extension of the exceptional algebra g = e 8 [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] , and the resulting structures are extremely rich. The realization of the affine symmetry on the physical fields requires the introduction of an infinite tower of dual scalar fields, defined on-shell by a set of first order differential equations. Consequently, these symmetries act nonlinearly, nonlocally and are symmetries of the equations of motion only. As a generic feature of two-dimensional gravity theories, the infinite-dimensional global symmetry algebra is a manifestation of the underlying integrable structure of the theory [START_REF] Geroch | A method for generating solutions of Einstein's equations[END_REF][START_REF] Belinsky | Integration of the Einstein equations by the inverse scattering problem technique and the calculation of the exact soliton solutions[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF][START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF][START_REF] Nicolai | Integrability and canonical structure of d=2, N=16 supergravity[END_REF]. In view of the above discussion one may expect that the various parameters characterizing the different higher-dimensional compactifications join into a single infinite-dimensional multiplet of the affine algebra which accordingly parametrizes the generic gauged Lagrangian in two dimensions. We confirm this picture in the present paper. The corresponding multiplet is the basic representation of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . Apart from its intriguing mathematical structure, there are two features of twodimensional supergravity which render the construction of gaugings somewhat more subtle than in higher dimensions. First, the overwhelming part of the affine symmetries present in the two-dimensional ungauged theory, is of the hidden type and in particular on-shell. Only the zero-modes g of the affine algebra ĝ are realized as target-space isometries of the two-dimensional scalar sigma-model and thus as off-shell symmetries of the Lagrangian. 
In contrast, the action of all higher modes of the algebra is nonlinear, nonlocal and on-shell as described above. Gauging such symmetries is a nontrivial task. Second, in two dimensions there are no propagating vector fields that could be naturally used to gauge these symmetries. It turns out that both these problems have a very natural common solution: introducing a set of vector fields that couple with a particular topological term in the Lagrangian allows to gauge arbitrary subgroups of the affine symmetry group. The resulting gauge groups generically include former on-shell symmetries and thus extend beyond the target-space isometries of the ungauged Lagrangian. The construction in fact is reminiscent of the four-dimensional case where global symmetries that are only on-shell realized can be gauged upon simultaneous introduction of magnetic vector and two-form tensor fields which couple with topological terms [START_REF] De Wit | Magnetic charges in local field theory[END_REF][START_REF] De Wit | The maximal D = 4 supergravities[END_REF]. The structure emerging in two dimensions is the following. In addition to the original physical fields, the Lagrangian of the gauged theory carries vector fields A M µ in a highest weight representation of ĝ. In addition, a finite subset of the tower of dual scalar fields enters the Lagrangian, with their defining first-order equations arising as genuine equations of motion. The gauging is completely characterized by a constant embedding tensor Θ M in the conjugate vector representation and subject to a quadratic consistency constraint. The local gauge algebra is a generically infinite-dimensional subalgebra of ĝ. The result is a Lagrangian that features scalars and vector fields in infinite-dimensional representations of the affine ĝ. However, for every particular choice of the embedding tensor only a finite subset of these fields enters the Lagrangian and only a finite-dimensional part of the gauge algebra is realized at the level of the Lagrangian (with its infinite-dimensional part exclusively acting on dual scalar fields that do not show up in the Lagrangian). We illustrate these structures with several examples for the maximal (N = 16) theory for which the symmetry algebra is e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] and vector fields and embedding tensor transform in the basic representation and its conjugate, respectively. In addition to the standard minimal couplings within covariant derivatives and the new topological term, the gauging induces a scalar potential whose explicit form is usually determined by supersymmetry. It is specific to two dimensions that in absence of such a potential, the gauging merely induces a reformulation of the original theory. I.e. the field equations imply vanishing field strengths, such that the only nontrivial effect of the newly introduced vector fields is due to global obstructions. In absence of such, the theory reduces to the original one. On the other hand, integrating out the vector fields in this case leads to an equivalent (T-dual) formulation of the original theory in terms of a different set of scalar fields. 
This procedure is well-known from the study of non-Abelian T-duality [START_REF] Buscher | A symmetry of the string background field equations[END_REF][START_REF] Hull | The gauged nonlinear sigma model with Wess-Zumino term[END_REF][START_REF] De La Ossa | Duality symmetries from nonabelian isometries in string theory[END_REF], however the results here go beyond the standard expressions, as the gaugings generically include non-target-space isometries. In contrast, in presence of a scalar potential, as is standard in supersymmetric theories, the gaugings constitute genuine deformations of the original theory. It is worth to stress that although the construction we present in this paper is worked out for a very particular class of two-dimensional models -the coset space sigma-models coupled to dilaton gravity as the typical class of models obtained by dimensional reduction of supergravity theories -it is by far not limited to this class. The entire construction extends straightforwardly to the gauging of hidden symmetries in arbitrary two-dimensional integrable field theories. The paper is organized as follows. In section 2, we give a brief review of the ungauged two-dimensional supergravity theories and their global symmetry structure. In particular, we give closed formulas for the action of the affine symmetry ĝ on the physical fields. In section 3, we proceed to gauge subalgebras of the affine global symmetry by introducing vector fields in a highest weight representation of ĝ and coupling them with a particular topological term. We present the full bosonic Lagrangian which is entirely parametrized in terms of an embedding tensor transforming under ĝ in the conjugate vector field representation and subject to a single quadratic constraint. In section 4, we discuss various ways of gauge fixing part of the local symmetries by eliminating some of the redundant fields from the Lagrangian. In particular, we show that in absence of a scalar potential the presented construction leads to an equivalent (T-dual) version of the ungauged theory whereas a scalar potential leads to genuinely inequivalent deformations of the original theory. Finally, in section 5 we study various examples of gaugings of the maximal (N = 16) two-dimensional supergravity. Among the infinitely many components of the embedding tensor, we identify several solutions to the quadratic constraint and discuss their higher-dimensional origin. The various gradings of e 9(9) provide a systematic scheme for this analysis. Ungauged theory and affine symmetry algebra The class of theories we are going to study in this paper are two-dimensional G/K coset space sigma models coupled to dilaton gravity. These models arise from dimensional reduction of higher-dimensional gravities: pure Einstein gravity in four space-time dimensions gives rise to the coset space SL(2)/SO(2) while e.g. the bosonic sector of eleven-dimensional supergravity leads to the particular coset space E 8(8) /SO [START_REF] Breitenlohner | On the Geroch group[END_REF]. In this chapter we briefly review the Lagrangian for these theories, their integrability structure, and as a consequence of the latter the realization of the infinite-dimensional on-shell symmetry ĝ, cf. [START_REF] Breitenlohner | On the Geroch group[END_REF][START_REF] Nicolai | Two-dimensional gravities and supergravities as integrable system[END_REF][START_REF] Nicolai | Integrable classical and quantum gravity[END_REF] for detailed accounts. 
Lagrangian To define the Lagrangian of the theory we employ the decomposition g = k ⊕ p of the Lie algebra g = Lie G into its compact part k and the orthogonal non-compact complement p. For the theories under consideration this is a symmetric space decomposition, i.e. the commutators are of the form [k, k] = k , [k, p] = p , [p, p] = k . (2.1) We denote by t α the generators of g and indicate by subscripts the projection onto the subspaces k and p, i.e. for Λ ∈ g it is Λ = Λ α t α = Λ k + Λ p , Λ k ∈ k , Λ p ∈ p . (2.2) In addition, it is useful to introduce the following involution on algebra elements Λ # = Λ k -Λ p . (2. 3) The (dim Gdim K) bosonic degrees of freedom of the theory are described by a group element V of G which transforms under global G transformations from the left and local K transformations from the right, i.e. the theory is invariant under V → g V k(x) -1 , g ∈ G , k(x) ∈ K . (2.4) It is sometimes convenient to fix the local K freedom by restricting to a particular set of representatives V of the coset G/K, on which the global G then acts as V → g V k g (x) -1 , (2.5) where k g (x) ∈ K depends on g in order to preserve the class of representatives. This defines the nonlinear realization of G on the coset space G/K. The G-invariant scalar currents are defined by V -1 ∂ µ V = Q µ + P µ , Q µ ∈ k , P µ ∈ p . (2.6) The current Q µ is a composite connection for the local K gauge invariance, i.e. it appears in covariant derivatives of all quantities that transform under K, in particular D µ P ν = ∂ µ P ν + [Q µ , P ν ] . (2.7) The integrability conditions for (2.6) are then given by D [µ P ν] = 0 , Q µν ≡ 2∂ [µ Q ν] + [Q µ , Q ν ] = -[P µ , P ν ] . (2.8) The two-dimensional Lagrangian takes the form L = ∂ µ σ ∂ µ ρ -1 2 ρ tr(P µ P µ ) . (2.9) In addition to the scalar current P µ it contains the dilaton field ρ and the conformal factor σ. The latter originates from the two-dimensional metric which has been brought into conformal gauge g µν = e 2σ η µν , such that space-time indices µ in (2.9) are contracted with the flat Minkowski metric η µν . The only remnant of two-dimensional gravity is the first term descending from the two-dimensional (dilaton coupled) Einstein-Hilbert term ρR in conformal gauge. The Lagrangian (2.9) is manifestly invariant under the symmetry (2.4). It is straightforward to derive the equations of motion which take the form2 ∂ + ∂ -ρ = 0 , ∂ + ∂ -σ + 1 2 tr(P + P -) = 0 , D + (ρP -) + D -(ρP + ) = 0 , (2.10) where we have introduced light-cone coordinates x ± = (x 0 ± x 1 )/ √ 2. In addition, the theory comes with two first order (Virasoro) constraints ∂ ± ρ ∂ ± σ -1 2 ρ tr(P ± P ± ) = 0 , (2.11) which might equally be obtained from the Lagrangian before the fixing of conformal gauge. It is straightforward to check that these first order constraints are compatible as a consequence of the equations of motion for ρ and P ± and moreover imply the second order equation for the conformal factor σ. Global symmetry and dual potentials It is well known -starting from the work of Geroch on dimensionally reduced Einstein gravity [START_REF] Geroch | A method for generating solutions of Einstein's equations[END_REF] -that the global symmetry algebra of the coset space sigma model (2.9) is not only the algebra of target-space isometries g, but its affine extension ĝ. We denote the generators of g by t α and those of ĝ by T α,m , m ∈ Z. 
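Since the equations of this subsection are flattened into the running text above, the following is a tentative LaTeX transcription of its key formulas (the symmetric-space split, the scalar currents, the Lagrangian, the equations of motion, and the Virasoro constraints). It restates what the extracted text already contains; lost sub- and superscripts are restored on the standard reading.

\begin{align*}
&[\mathfrak{k},\mathfrak{k}]\subset\mathfrak{k},\quad [\mathfrak{k},\mathfrak{p}]\subset\mathfrak{p},\quad [\mathfrak{p},\mathfrak{p}]\subset\mathfrak{k},\qquad
\mathcal{V}\rightarrow g\,\mathcal{V}\,k(x)^{-1},\qquad
\mathcal{V}^{-1}\partial_\mu\mathcal{V}=Q_\mu+P_\mu,\\
&\mathcal{L}=\partial^\mu\sigma\,\partial_\mu\rho-\tfrac12\,\rho\,\mathrm{tr}\!\left(P^\mu P_\mu\right),\qquad
D_\mu P_\nu=\partial_\mu P_\nu+[Q_\mu,P_\nu],\\
&\partial_+\partial_-\rho=0,\qquad
\partial_+\partial_-\sigma+\tfrac12\,\mathrm{tr}(P_+P_-)=0,\qquad
D_+(\rho P_-)+D_-(\rho P_+)=0,\\
&\partial_\pm\rho\,\partial_\pm\sigma-\tfrac12\,\rho\,\mathrm{tr}(P_\pm P_\pm)=0.
\end{align*}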
The latter close into the algebra T α,m , T β,n = f αβ γ T γ,m+n + m δ m+n η αβ K , (2.12) where f αβ γ and η αβ = tr(t α t β ) are the structure constants and the Cartan-Killing form of g, respectively, and K denotes the central extension of the affine algebra. In addition to T α,m and K we will find the Witt-Virasoro generator L 1 to be crucial for the construction of this paper. It obeys [ L 1 , T α,m ] = -m T α,m+1 . (2.13) The central extension K commutes with both T α,m and L 1 . We denote by G ⊃ ĝ the algebra spanned by {T α,m , K, L 1 }. To define the action of G on the fields V, ρ and σ that enter the Lagrangian (2.9) we need to introduce an infinite hierarchy of dual potentials. These are additional scalar fields that are defined as nonlocal functions of V (and ρ), but whose definition is only consistent if one invokes the equations of motion. Therefore G is only realized as an on-shell symmetry on (2.10). To start with, the dilaton ρ is a free field, such that it gives rise to the definition ∂ µ ρ = -µν ∂ ν ρ , ⇐⇒ ∂ ± ρ = ± ∂ ± ρ , (2.14) of its dual ρ. Obviously, the dual of ρ gives back ρ. More interesting are the nonlinear equations of motion for V that can be rewritten as a conservation law ∂ µ I µ = 0 for the current I µ = ρ VP µ V -1 . This allows the definition of the first dual potential Y 1 ∂ ± Y 1 = ∓I ± = ∓ ρVP ± V -1 , (2.15) which is g valued and according to (2.4) transforms in the adjoint representation of the global G. Integrability of these equations is ensured by ∂ µ I µ = 0. From the point of view of higher-dimensional supergravity theories, equations (2.15) constitute nothing but a particular case of the general on-shell duality between p forms and Dp -2 forms (D = 2, p = 0). In two dimensions however, these equations are just the starting point for an infinite hierarchy of dual potentials [START_REF] Brezin | Remarks about the existence of nonlocal charges in two-dimensional models[END_REF] of which the next members Y 2 , Y 3 are defined by ∂ ± Y 2 = ±ρρ + 1 2 ρ 2 VP ± V -1 + 1 2 [Y 1 , ∂ ± Y 1 ] , ∂ ± Y 3 = ∓ 1 2 ρ 3 ∓ ρρ 2 -ρ 2 ρ VP ± V -1 + [Y 1 , ∂ ± Y 2 ] -1 6 [Y 1 , [Y 1 , ∂ ± Y 1 ]]] . (2.16) Again, integrability of these equations is guaranteed by the field equations ∂ µ I µ = 0 and the defining equation (2.15) of the lower dual potentials. A convenient way to encode the definition of all dual potentials (and the action of the affine symmetry) is the linear system [START_REF] Belinsky | Integration of the Einstein equations by the inverse scattering problem technique and the calculation of the exact soliton solutions[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] which we will describe in the next subsection. In order make the symmetry structure more transparent we will restrict the discussion in the present subsection to the lowest few dual potentials and to the action of the lowest few affine symmetry generators T α,m . We identify the zero-modes T α,0 with the generators t α of the off-shell symmetry g. These zero-mode symmetries do not mix the original scalars and the dual potentials of different levels, i.e. V transforms according to (2.4) and all the Y m (m > 0) transform in the adjoint representation. The fields ρ, ρ, and σ are left invariant by T α,0 . The dual potentials ρ, Y m are defined by (2.14)-(2.16) only up to constant shifts ρ → ρ+λ, Y m → Y m +Λ m . The generators in G corresponding to these shift symmetries are L 1 and T α,m (m > 0), i.e. 
δ (1) ρ = 1 , δ α,m Y β n = δ β α m = n 0 m > n , (2.17) where δ (1) and δ α,m denote the action of L 1 and T α,m , respectively, and Y m = Y α m t α . Since the definition of the dual potentials also involves ρ and lower dual potentials, it follows that L 1 and T α,m also act nontrivially on the higher dual potentials Y n (m < n), e.g. δ (1) Y 2 = -Y 1 , δ (1) Y 3 = -2Y 2 , Λ α δ α,1 Y 2 = 1 2 [Λ, Y 1 ] , etc. (2.18) None of the shift symmetries L 1 and T α,m (m > 0) acts on the physical fields V, ρ or σ. So far we have thus not introduced any new physical symmetry. The crucial point about the symmetry structure of the model is the existence of another infinite family of symmetry generators T α,m (m < 0). Their action on the physical fields is expressed in terms of the dual potentials and thus nonlinear and nonlocal in terms of the original fields. For the lowest generators, this action is given by Λ α δ α,-1 V = [Λ, Y 1 ]V -ρ V[V -1 ΛV] p , Λ α δ α,-2 V = [Λ, Y 2 ] + 1 2 [[Λ, Y 1 ], Y 1 ] -ρ[Λ, Y 1 ] V + 1 2 ρ 2 + ρ2 V[V -1 ΛV] p . (2.19) The field ρ is left invariant while the action on the dual potentials Y m and on the conformal factor σ follows from (2.11), (2.15). We find for example Λ α δ α,-1 σ = tr(ΛY 1 ) , Λ α δ α,-1 Y 1 = [Λ, Y 2 ] + 1 2 [[Λ, Y 1 ], Y 1 ] + 1 2 ρ 2 V[V -1 ΛV] p V -1 , etc. (2.20) One can easily check that the symmetries defined in (2.17) and (2.19) indeed close according to the algebra (2.12). In particular, it follows that the central extension K acts exclusively on the conformal factor [START_REF] Julia | Infinite Lie algebras in physics[END_REF]: δ (0) σ = -1 . (2.21) In order to define all dual potentials Y m (m > 0) and describe the action of all symmetry generators T α,m in closed form we will in the following introduce the linear system [START_REF] Belinsky | Integration of the Einstein equations by the inverse scattering problem technique and the calculation of the exact soliton solutions[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] showing the classical integrability of the theory. The linear system A compact way to encode the infinite family of dual potentials and the action of the full symmetry algebra ĝ is the definition of a one-parameter family of group-valued matrices V(γ) according to the linear system [START_REF] Belinsky | Integration of the Einstein equations by the inverse scattering problem technique and the calculation of the exact soliton solutions[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF][START_REF] Breitenlohner | On the Geroch group[END_REF] V-1 ∂ µ V = Ĵµ , with Ĵµ = Q µ + 1 + γ 2 1 -γ 2 P µ + 2γ 1 -γ 2 µν P ν , (2.22) where γ is a scalar function γ = 1 ρ w + ρ -(w + ρ) 2 -ρ 2 , (2.23) of the constant spectral parameter w which labels the family. As γ is a double-valued function of w we will in the following restrict to the branch |γ| < 1, i.e. in particular γ = 1 2 ρ w -1 -1 2 ρρ w -2 + 1 8 ρ 3 + 4ρρ 2 w -3 + . . . , (2.24) around w = ∞. It is straightforward to verify that the compatibility of (2.22) is equivalent to (2.8) and the equations of motion (2.10): 2∂ [µ Ĵν] + [ Ĵµ , Ĵν ] = Q µν + [P µ , P ν ] + 1 + γ 2 1 -γ 2 2D [µ P ν] -µν 2γ 1 -γ 2 ρ -1 D σ (ρP σ ) . (2.25) Expanding V around w = ∞ V = . . . e w -4 Y 4 e w -3 Y 3 e w -2 Y 2 e w -1 Y 1 V , (2.26) defines the infinite series of dual potentials Y n . 
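For readability, here is a LaTeX restatement of the affine algebra, the first duality relations, and the linear system quoted above (eqs. (2.12)-(2.15) and (2.22)-(2.23) in the flattened text). The tilde on the dual dilaton appears to have been lost in extraction and is restored here, so \(\tilde\rho\) denotes the dual of \(\rho\); this is a best reading rather than a verbatim quote.

\begin{align*}
&[T_{\alpha,m},T_{\beta,n}]=f_{\alpha\beta}{}^{\gamma}\,T_{\gamma,m+n}+m\,\delta_{m+n,0}\,\eta_{\alpha\beta}\,K,\qquad
[L_1,T_{\alpha,m}]=-m\,T_{\alpha,m+1},\\
&\partial_\pm\tilde\rho=\pm\,\partial_\pm\rho,\qquad
\partial_\pm Y_1=\mp I_\pm=\mp\,\rho\,\mathcal{V}P_\pm\mathcal{V}^{-1},\qquad
\partial_\mu I^\mu=0,\\
&\hat{\mathcal{V}}^{-1}\partial_\mu\hat{\mathcal{V}}=\hat J_\mu
=Q_\mu+\frac{1+\gamma^2}{1-\gamma^2}\,P_\mu+\frac{2\gamma}{1-\gamma^2}\,\epsilon_{\mu\nu}P^\nu,\qquad
\gamma=\frac1\rho\Big(w+\tilde\rho-\sqrt{(w+\tilde\rho)^2-\rho^2}\Big),
\end{align*}

with the branch \(|\gamma|<1\) around \(w=\infty\), where \(\hat{\mathcal{V}}=\dots e^{w^{-2}Y_2}e^{w^{-1}Y_1}\mathcal{V}\) generates the tower of dual potentials \(Y_n\).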
In particular, the expansion of (2.22) around w = ∞ reproduces (2.15), (2.16). For later use we also give the linear system in light-cone coordinates V-1 D ± V = 1 ∓ γ 1 ± γ P ± . (2.27) Using the matrix V, the action of the symmetry algebra G can be expressed in closed form. To this end, we parametrize the loop algebra of g by a spectral parameter w and identify the generators T α,m with w -m t α . Elements Λ = Λ α,m T α,m of ĝ are represented by g-valued functions Λ(w) = Λ α,m w -m t α , meromorphic in the spectral parameter plane. In terms of Λ(w), the action on the physical fields V, σ can be given in closed form as V -1 δ Λ V = 2γ(w) ρ (1 -γ 2 (w)) Λp (w) w , δ Λ σ = -tr Λ(w) ∂ w V(w) V-1 (w) w . (2.28) Here we have defined the dressed parameter3 Λ(w) = V-1 (w)Λ(w) V(w) = Λk (w) + Λp (w) , (2.29) with the split according to (2.2). In addition, we have introduced the notation f (w) w ≡ dw 2πi f (w) = -Res w=∞ f (w) , (2.30) for an arbitrary function f (w) of the spectral parameter w. The path is chosen such that only the residual at w = ∞ is picked up. For definiteness we will treat the functions f (w) = ∞ m=-∞ f m w m in these expressions as formal power series with almost all {f m |m > 0} equal to zero. Some useful relations for calculating with these objects are collected in appendix A. It is straightforward to check that the transformations (2.28) leave the equations of motion invariant. Since the solution V(w) of the linear system (2.22) explicitly enters the transformation, this is in general not a symmetry of the Lagrangian but only an on-shell symmetry of the equations of motion (2.10). This will be of importance later on. Moreover, it is straightforward to check, that the algebra of transformations (2.28) closes according to (2.12). Relation (A.6) is crucial to verify the action (2.21) of the central extension. The group-theoretical structure of the symmetry (2.28) becomes more transparent if we consider its extension to V(w) and thereby to the full tower of dual potentials [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF]: V-1 δ Λ V(w) = Λ(w) - 1 v -w Λk (v) + γ(v) (1 -γ 2 (w)) γ(w) (1 -γ 2 (v)) Λp (v) v , (2.31) in the above notation. This action may be rewritten as δ Λ V(w) = Λ(w) V(w) -V(w) Υ(γ(w)) , (2.32) with Υ(γ(w)) ≡ 1 v-w Λk (v) + γ(v) (1-γ 2 (w)) γ(w) (1-γ 2 (v)) Λp (v) v , and thus takes the form of an infinite-dimensional analogue of the nonlinear realization (2.5), in which the left action of Λ(w) parametrizing ĝ is accompanied by a right action of Υ(γ) ∈ k(ĝ) in order to preserve a particular class of coset representatives. The algebra k(ĝ) is the infinite-dimensional analogue of k in (2.5), i.e. the maximal compact subalgebra of ĝ, and is defined as the algebra of g-valued functions k(γ), satisfying 4 k # (γ) = k(1/γ) . (2.33) We shall see in the following that the particular set of coset representatives starring in (2.32) are the functions V(γ(w)) regular around w = ∞ in accordance with the expansion (2.26). For illustration, let us evaluate equation (2.32) for the particular transformation Λ(w) = w -m Λ, Λ ∈ g, m ∈ Z . Expanding both sides around w = ∞, it follows directly from (A.5) that for positive values of m, Υ(γ) vanishes, such that the transformation merely amounts to a shift of the dual potentials Y n in the expansion (2.26); for m = 1, 2 this reproduces (2.17), (2.18). These transformations do not act on the physical fields present in the Lagrangian (2.9). 
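The closed-form action of the affine symmetry, eqs. (2.28)-(2.30) in the flattened text above, can be retypeset as follows; this is only a cleaner rendering of what is already quoted, with \(\langle\cdot\rangle_w\) the contour integral defined there.

\begin{align*}
&\mathcal{V}^{-1}\delta_\Lambda\mathcal{V}
=\Big\langle \frac{2\gamma(w)}{\rho\,\big(1-\gamma^2(w)\big)}\,\hat\Lambda_{\mathfrak{p}}(w)\Big\rangle_w,\qquad
\delta_\Lambda\sigma=-\Big\langle \mathrm{tr}\big(\Lambda(w)\,\partial_w\hat{\mathcal{V}}(w)\,\hat{\mathcal{V}}^{-1}(w)\big)\Big\rangle_w,\\
&\hat\Lambda(w)=\hat{\mathcal{V}}^{-1}(w)\,\Lambda(w)\,\hat{\mathcal{V}}(w)=\hat\Lambda_{\mathfrak{k}}(w)+\hat\Lambda_{\mathfrak{p}}(w),\qquad
\big\langle f(w)\big\rangle_w\equiv\oint\frac{dw}{2\pi i}\,f(w)=-\operatorname*{Res}_{w=\infty}f(w).
\end{align*}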
For a transformation with negative m on the other hand the second term in (2.32) no longer vanishes but precisely restores the regularity of V at w = ∞ that has been destroyed by the first term [START_REF] Nicolai | On K(E 9 )[END_REF]. These transformations describe the nonlinear and nonlocal on-shell symmetries on the physical fields and the dual potentials which leave the equations of motion and the linear system (2.27) invariant. They are commonly referred to as hidden symmetries, for m = -1 one recovers (2.19). Finally, for m = 0 one recovers the action (2.4) of the finite algebra g acting as an off-shell symmetry on all the fields. Here, the local K freedom in (2.4) has been fixed such that [V -1 δV] k = 0. To summarize, the negative modes T α,m , m < 0 act as nonlocal on-shell symmetries whereas the positive modes T α,m , m > 0 act as shift symmetries on the dual potentials. Only the zero-modes T α,0 are realized as off-shell symmetries on the physical fields of the Lagrangian (2.9). In addition to the affine symmetry algebra ĝ described above, a Witt-Virasoro algebra can be realized on the fields [START_REF] Julia | Conformal internal symmetry of 2d sigma-models coupled to gravity and a dilaton[END_REF] which essentially acts as conformal transformation on the inverse spectral parameter y = 1/w. From these generators we will in the following only need L 1 = -y 2 ∂ y = ∂ w , (2.34) which acts only on the dual dilaton ρ and the dual potentials Y n according to equations (2.17), (2.18) δ (1) ρ = 1 =⇒ δ (1) V = ∂ w V . (2.35) The pair K and L 1 which extends the loop algebra of g to G turns out to be crucial for our construction of the gauged theory in section 3. The distinguished role of L 1 in this construction -as opposed to all the other Virasoro generators that can be realized following [START_REF] Julia | Conformal internal symmetry of 2d sigma-models coupled to gravity and a dilaton[END_REF] -stems from its action on the dual dilaton (2.17). The gaugings we are mainly interested in will carry a scalar potential whose presence in particular deforms the free field equation (2.10) of ρ by some source terms ρ = Q. The only way to maintain a meaningful version of the dual dilaton equation (2.14) in this case is by gauging its shift symmetry ∂ µ ρ = -µν (∂ ν -B ν δ (1) ) ρ while imposing ∂ [µ B ν] = -µν Q. We shall see that this indeed appears very natural in the subsequent construction. In the following we will parametrize a general algebra element of G ≡ T α,m , K, L 1 with a collective label A ∈ {(α, m), (1), (0)} for the generators of G as Λ = Λ A T A = Λ α,m T α,m + Λ (1) L 1 + Λ (0) K ≡ Λ(w) + Λ (1) L 1 + Λ (0) K , (2.36) with Λ(w) ≡ Λ α,m w -m t α . The commutator between two such algebra elements takes the form |[ Λ, Σ ]| = [Λ(w), Σ(w)] + Λ (1) ∂Σ(w) -Σ (1) ∂Λ(w) + K Λ(w) ∂Σ(w) w ,(2.37) where we use the notation |[ , ]| in order to distinguish the general algebra commutator from the simple matrix commutators [ , ]. Let us finally mention, that the symmetry algebra G is equipped with an invariant inner product (T A , T B ) = η AB , given by (T α,m , T β,n ) = η αβ δ m+n-1 , (L 1 , K) = -1 . (2.38) Structure of the duality equations For the following it turns out the be important to analyze in more detail the structure of the duality equations (2.14) and (2.22) which have been used to define the dual fields ρ and V. 
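The data of the extended algebra \(\mathcal{G}=\{T_{\alpha,m},K,L_1\}\) quoted above, eqs. (2.36)-(2.38), read as follows in LaTeX. The trace in the central term of the commutator seems to have been dropped by the extraction and is restored here on the standard reading, which reproduces the central term of (2.12) for \(\Lambda=w^{-m}t_\alpha\), \(\Sigma=w^{-n}t_\beta\).

\begin{align*}
&\hat\Lambda=\Lambda^A T_A=\Lambda(w)+\Lambda^{(1)}L_1+\Lambda^{(0)}K,\qquad \Lambda(w)\equiv\Lambda^{\alpha,m}\,w^{-m}\,t_\alpha,\\
&\big|[\,\hat\Lambda,\hat\Sigma\,]\big|
=[\Lambda(w),\Sigma(w)]+\Lambda^{(1)}\partial\Sigma(w)-\Sigma^{(1)}\partial\Lambda(w)
+K\,\big\langle \mathrm{tr}\big(\Lambda(w)\,\partial\Sigma(w)\big)\big\rangle_w,\\
&(T_{\alpha,m},T_{\beta,n})=\eta_{\alpha\beta}\,\delta_{m+n-1,0},\qquad (L_1,K)=-1.
\end{align*}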
Let us for the moment consider these dual fields as a priori independent fields and the duality equations as their first order equations of motion relating them to the physical fields ρ and V. In particular, we may define the G-valued current Z µ as Z µ = Z A µ T A = Z µ (w) + Z (1) µ L 1 , (2.39) Z (1) µ ≡ -∂ µ ρ -µν ∂ ν ρ , Z µ (w) ≡ V -V-1 ∂ µ V + Q µ + 1 + γ 2 1 -γ 2 P µ + 2γ 1 -γ 2 µν P ν V-1 -∂ w V V-1 Z (1) µ , which is a particular combination of the duality equations, i.e. on-shell we have Z µ = 0. Under a generic symmetry transformation Λ ∈ G the constituents of Z µ transform according to (2.28), (2.31), and (2.35) and some lengthy computation shows that altogether Z µ transforms as δ Λ Z ± = |[ Λ, Z ± ]| -V 1 v -w V-1 |[ Λ, Z ± ]| V k,v V-1 - 1 ∓ γ 1 ± γ V 1 v -w 1 ± γ 1 ∓ γ V-1 |[ Λ, Z ± ]| V p,v V-1 , (2.40) in light-cone coordinates. In order not to overburden the notation here, all spectral parameter dependent functions within the brackets • v depend on the parameter v which is integrated over, whereas all functions outside depend on the spectral parameter w. In slight abuse of notation, the commutators |[ , ]| represent the full G commutator (2.37) however without the central term K. 5 In particular, (2.40) shows that Z µ transforms homogeneously under Λ -consistent with the fact that Z µ vanishes on-shell. This current will play an important role in the following. Gauging subgroups of the affine symmetry In the previous section we have reviewed how the equations of motion of the ungauged two-dimensional theory are invariant under an infinite algebra G of symmetry transformations. The symmetry action on the physical fields (2.28) is defined in terms of the matrix V which in turn is defined as a solution of the linear system (2.22). As a result, the global symmetry is nonlinearly and nonlocally realized on the physical fields. We will now attempt to gauge part of the global symmetry (2.28), i.e. turn a subalgebra of G into a local symmetry of the theory. This is rather straightforward for subalgebras of g = T α,0 ⊂ G, as g is the off-shell symmetry algebra of the Lagrangian. In fact, since g is already the off-shell symmetry of the three-dimensional ancestor of the theory, the corresponding gaugings are simply obtained by dimensional reduction of the three-dimensional gauged supergravities [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | Gauged locally supersymmetric D = 3 nonlinear sigma models[END_REF]. The gauging of generic subalgebras of G is much more intricate, as their action explicitly contains the matrix V which is defined only on-shell as a nonlocal functional of the physical fields. This is the main subject of this paper. The problem is analogous to the one faced in four dimensions when trying to gauge arbitrary subgroups of the scalar isometry group -not restricting to triangular symplectic embeddings -which has been solved only recently [START_REF] De Wit | Magnetic charges in local field theory[END_REF][START_REF] De Wit | The maximal D = 4 supergravities[END_REF]. We will follow a similar approach here. As a key point in the construction we will introduce the dual scalars ρ and V as independent fields on the Lagrangian level. The duality equations (2.39) relating them to the original fields will naturally emerge as first order equations of motion. 
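The composite current \(Z_\mu\) of eq. (2.39), which packages the duality equations and reappears in the gauged theory below, may be easier to parse in the following LaTeX form; again this only retypesets the flattened expression, with \(\tilde\rho\) for the dual dilaton.

\begin{align*}
Z_\mu&=Z_\mu^A\,T_A=Z_\mu(w)+Z^{(1)}_\mu L_1,\qquad
Z^{(1)}_\mu=-\partial_\mu\tilde\rho-\epsilon_{\mu\nu}\partial^\nu\rho,\\
Z_\mu(w)&=\hat{\mathcal{V}}\Big(-\hat{\mathcal{V}}^{-1}\partial_\mu\hat{\mathcal{V}}
+Q_\mu+\frac{1+\gamma^2}{1-\gamma^2}P_\mu+\frac{2\gamma}{1-\gamma^2}\epsilon_{\mu\nu}P^\nu\Big)\hat{\mathcal{V}}^{-1}
-\partial_w\hat{\mathcal{V}}\,\hat{\mathcal{V}}^{-1}\,Z^{(1)}_\mu,
\end{align*}

so that on-shell \(Z_\mu=0\) reproduces the duality equations (2.14) and (2.22).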
Specifically, the field equations obtained by varying the Lagrangian with respect to the newly introduced gauge fields of the theory turn out to be proportional to the current Z µ introduced in section 2.4 which combines the duality equations. Gauge fields and embedding tensor In order to construct the gauged theory, we make use of the formalism of the embedding tensor, introduced to describe the gaugings of supergravity in higher dimensions [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF]. Its main feature is the description of all possible gaugings in a formulation manifestly covariant under the global symmetry G of the ungauged theory. As a first step we need to introduce vector fields in order to realize the covariant derivatives corresponding to the local symmetry. In contrast to higher dimensions where the vector fields come in some well-defined representation of the global symmetry group of the ungauged theory, in two dimensions these fields do not represent propagating degrees of freedom and are absent in the ungauged theory. 6 We will hence start by introducing a set of vector fields A M µ transforming in some a priori undetermined representation (labeled by indices M) of the algebra G. An arbitrary gauging then is described by an embedding tensor Θ M A that defines the generators X M ≡ Θ M A T A , (3.1) of the subalgebra of G which is promoted to a local symmetry by introducing covariant derivatives D µ = ∂ µ -g A M µ Θ M A T A , (3.2) with a gauge coupling constant g. 7 The way Θ M A appears within these derivatives shows that under G it naturally transforms in the tensor product of two infinitedimensional representations. Gauge invariance immediately imposes the quadratic constraint f BC A Θ M B Θ N C + T B,N P Θ M B Θ P A = 0 , (3.3) on Θ M A , where f BC A denote the structure constants of the algebra (2.12), (2.13), and T B,N P are the generators of G in the representation of the vector fields. Equivalently, this constraint takes the form [X M , X N ] = -X MN K X K , (3.4) with "structure constants" X MN K = Θ M A T A,N K . We will impose further constraints on Θ M A in the sequel. It will sometimes be convenient to expand the covariant derivatives (3.2) according to (2.36) as D µ = ∂ µ -g A α µ (w) t α -g A (1) µ L 1 -g A (0) µ K , (3.5) with the projected vector fields A (1) µ = Θ M (1) A M µ , A (0) µ = Θ M (0) A M µ , A α µ (w) = m=∞ m=-∞ w -m Θ M α,m A M µ .(3.6) While the appearance of the infinite sums (over m and over M) in the definition of A α µ (w) (and thus the appearance of an infinite number of vector fields) looks potentially worrisome, we will eventually impose constraints on Θ M α,m such that only a finite subset of vector fields A M µ enters the Lagrangian. Explicitly, the action of the covariant derivative on the various scalars reads8 D µ ρ = ∂ µ ρ -g A (1) µ , D µ σ = ∂ µ σ + g A (0) µ + g tr A µ (w) ∂ w V(w) V-1 (w) w , V -1 D µ V = V -1 ∂ µ V -g 2γ(w) ρ (1 -γ 2 (w)) µ (w) p w = P µ + Q µ , V-1 D µ V(w) = V-1 ∂ µ V(w) -g A (1) µ V-1 ∂ w V(w) -g µ (w) + g 1 v -w [ µ (v)] k + γ(v) (1 -γ 2 (w)) γ(w) (1 -γ 2 (v)) [ µ (v)] p v , (3.7) with µ (w) = V-1 (w)A µ (w) V(w). The Lagrangian As a first step towards introducing the local symmetry on the level of the Lagrangian, we consider the covariantized version of (2.9) L kin = ∂ µ ρ D µ σ -1 2 ρ tr(P µ P µ ) , (3.8) with covariant derivatives according to (3.7). 
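The defining data of a gauging, eqs. (3.1)-(3.4), and the covariantized kinetic term (3.8) quoted above can be summarized in LaTeX as follows; nothing here goes beyond the flattened text.

\begin{align*}
&X_M\equiv\Theta_M{}^A\,T_A,\qquad
D_\mu=\partial_\mu-g\,A_\mu^M\,\Theta_M{}^A\,T_A,\\
&f_{BC}{}^A\,\Theta_M{}^B\,\Theta_N{}^C+T_{B,N}{}^P\,\Theta_M{}^B\,\Theta_P{}^A=0
\quad\Longleftrightarrow\quad
[X_M,X_N]=-X_{MN}{}^K\,X_K,\qquad X_{MN}{}^K=\Theta_M{}^A\,T_{A,N}{}^K,\\
&\mathcal{L}_{\rm kin}=\partial^\mu\rho\,D_\mu\sigma-\tfrac12\,\rho\,\mathrm{tr}\!\left(P^\mu P_\mu\right).
\end{align*}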
Obviously, (3.8) cannot be the full answer since the equations of motion for the newly introduced vector fields will pose unwanted (and in general inconsistent) first order relations among the scalar fields. Likewise, according to (3.7) the P µ now carry the dual potentials ρ and V which are to be considered as independent fields. Variation with respect to these fields then gives rise to even stranger constraints. Remarkably, all these problems can be cured by adding to the Lagrangian what we will refer to as a topological term L top = -g µν tr µ V-1 (∂ ν V -∂ w V ∂ ν ρ) -Q ν - 1 + γ 2 1 -γ 2 P ν w -A (0) µ ∂ ν ρ -1 2 g 2 µν A (0) µ A (1) ν -1 2 g 2 µν tr 1 v -w [ µ (w)] k [ Âν (v)] k v w (3.9) -1 2 g 2 µν tr (γ(v) -γ(w)) 2 + (1 -γ(v)γ(w)) 2 (v -w)(1 -γ 2 (v))(1 -γ 2 (w)) [ µ (w)] p [ Âν (v)] p v w , which is made such that the vector field equations of motion precisely yield (a projection of) the covariantized version of the duality equations (2.14), (2.22). Explicitly, the variation of the Lagrangian L 0 = L kin + L top with respect to the vector fields reads δL 0 = -g η AB Θ M A µν Z B µ δA M ν , (3.10) where Z µ is the properly covariantized version of the G-valued current defined in (2.39) above. It contains the covariantized versions of the duality equations (2.14) and (2.22) that render ρ dual to ρ and V dual to V, respectively. As vector field equations in the gauged theory we thus find a Θ-projection of Z µ = 0 : g Θ M A η AB Z B µ = 0 . (3.11) In the limit g → 0 back to the ungauged theory these equations consistently decouple. The fact that the higher order g terms of (3.11) can be consistently integrated to the variation (3.10) is nontrivial and puts quite severe constraints on the construction. Namely, it requires the following constraint tr A µ (w) δA ν (w) w -A (1) µ δA (0) ν -A (0) µ δA (1) ν = 0 , (3.12) on the variation with respect to the projected vector fields. Fortunately, this condition translates directly into the G covariant constraint Θ M A Θ N B η AB = 0 , (3.13) for the embedding tensor Θ M A . For consistency, this constraint must thus be imposed together with the quadratic constraint (3.3) ensuring gauge invariance. As in higherdimensional gaugings [START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF], we expect that the latter constraint (3.13) should eventually be a consequence of (3.3). This is one motivation for the ansatz Θ M A = T B,M N η AB Θ N , (3.14) for the embedding tensor parametrized by a single conjugate vector Θ M . In terms of G representations this means that Θ M A does not take arbitrary values in the tensor product of the coadjoint and the conjugate vector field representation, but only in the conjugate vector field representation contained in this tensor product. This is the analogue of the linear representation constraint that is typically imposed on the embedding tensor in higher dimensions [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF]. Indeed, it is straightforward to verify that the ansatz (3.14) reduces the quadratic constraints (3.3) and (3.13) to the same constraint for Θ M : η AB T A,M P T B,N Q Θ P Θ Q = 0 . (3.15) Further support for the ansatz (3.14) comes from the fact that all the examples of gauged theories in two dimensions (presently known to us) turn out to be described by an embedding tensor of this particular form. 
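The linear ansatz for the embedding tensor and the quadratic constraints it must satisfy, eqs. (3.13)-(3.15) above, read as follows; this is a plain retypesetting of the flattened formulas.

\begin{align*}
\Theta_M{}^A=\eta^{AB}\,T_{B,M}{}^N\,\Theta_N,\qquad
\Theta_M{}^A\,\Theta_N{}^B\,\eta_{AB}=0,\qquad
\eta^{AB}\,T_{A,M}{}^P\,T_{B,N}{}^Q\,\Theta_P\,\Theta_Q=0,
\end{align*}

so that with this ansatz both quadratic constraints (3.3) and (3.13) collapse to the single condition on \(\Theta_M\).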
In particular, in all examples originating by dimensional reduction from a higher-dimensional gauged theory, the constraint (3.14) is a consequence of the corresponding linear constraint in higher dimensions. We will come back to this in section 5. This shows that (3.14) describes an important class of if not all the two-dimensional gaugings. Before proceeding with the proof of gauge invariance of the Lagrangian, we will in this subsection closer analyze this quadratic constraint imposed on the embedding tensor. It can be skipped on first reading. We have shown above that the linear ansatz (3.14) for Θ M A reduces the quadratic constraints (3.3) and (3.13) to the same constraint η AB T A,M P T B,N Q Θ P Θ Q = 0 , (3.18) for the tensor Θ M . This exhibits an interesting representation structure underlying the quadratic constraint. Formally, the constraint (3.18) lives in the twofold symmetric tensor product of the conjugate vector field representation. In particular, if Θ M transforms in a level k highest weight representation, the constraint transforms in an (infinite) sum of level 2k highest weight representations. As we are dealing with infinite-dimensional representations, these are most conveniently described in terms of the associated characters. Let us denote by χ Θ the character of the conjugate vector field representation, and by χ i the characters associated with the different level 2k representations R i of g. They are extended to representations of the Virasoro algebra by means of the standard Sugawara construction. In terms of these characters, the decomposition of the product Θ M Θ N takes the form χ Θ ⊗ sym χ Θ = i χ vir i • χ i , (3.19) where the sum is running over the level 2k representations of g and the coefficients χ vir i encoding the multiplicities of these representations carry representations of the Virasoro algebra associated with the coset model [START_REF] Goddard | Unitary representations of the Virasoro and Supervirasoro algebras[END_REF] g k ⊕ g k g 2k . (3.20) For simplicity, we restrict to simply-laced Lie algebras g in the following. With the central charge of the Virasoro algebra on g k given by c k = k dim(g)/(k + g ∨ ) in terms of the dual Coxeter number g ∨ of g, the coset CFT has central charge 2k 2 dim(g) (k + g ∨ )(2k + g ∨ ) . ( 3 .21) The coset Virasoro generators acting on (3.19) are given by L coset m = L b g k ⊕b g k m -L b g 2k m , (3.22) in terms of the Virasoro generators induced by g k ⊕ g k and g 2k , respectively. A brief calculation reveals that they take the explicit form (L coset m ) MN PQ = 2 k + g ∨ (L m ) (M (P δ Q) N ) - ∞ n=0 η αβ (T α,m+n ) (M (P (T β,-n ) N ) Q) . In particular, we thus obtain (L coset 1 ) MN PQ = - 1 k + g ∨ η AB T A,M (P T B,N Q) , (3.23) which shows that the quadratic constraint (3.18) can be rewritten in strikingly compact form as L coset 1 (Θ ⊗ Θ) = 0 . (3.24) The quadratic constraint thus takes the form of a projector on the product decomposition (3.19) which acts on the multiplicities χ vir i . Only those components within Θ whose products induce a quasi-primary state in the coset CFT (3.20) give rise to a consistent gauging. While this CFT formulation of the quadratic constraint is certainly very appealing we do at present have no good interpretation for the appearance of this structure. We will show explicitly in the next subsection that (3.24), alias (3.18), is a sufficient constraint for gauge invariance of the Lagrangian. 
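The compact coset-Virasoro form of the quadratic constraint, eqs. (3.21)-(3.24) above, is restated here in LaTeX for clarity; the index symmetrization follows the flattened text as far as it can be read, and acts on the symmetric tensor \(\Theta\otimes\Theta\) in any case.

\begin{align*}
&c_{\rm coset}=\frac{2k^2\,\dim(\mathfrak{g})}{(k+g^\vee)(2k+g^\vee)},\qquad
L^{\rm coset}_m=L^{\hat{\mathfrak g}_k\oplus\hat{\mathfrak g}_k}_m-L^{\hat{\mathfrak g}_{2k}}_m,\\
&\big(L^{\rm coset}_1\big)_{MN}{}^{PQ}
=-\frac{1}{k+g^\vee}\,\eta^{AB}\,T_{A,M}{}^{(P}\,T_{B,N}{}^{Q)},\qquad
L^{\rm coset}_1\,(\Theta\otimes\Theta)=0.
\end{align*}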
Gauge invariance of the Lagrangian The Lagrangian (3.8), (3.9) was determined above by requiring that variation with respect to the vector fields yields a properly covariantized version of the scalar duality equations. In particular, this uniquely fixes all higher order g couplings. In the rest of this section we will show that this Lagrangian is indeed invariant under the local action of the generators (3.1) δ Λ ρ = gΛ (1) , δ Λ σ = -g tr Λ(w) ∂ w V(w) V-1 (w) w -gΛ (0) , V -1 δ Λ V = g 2γ(w) ρ (1 -γ(w) 2 ) Λp (w) w , V-1 δ Λ V(w) = g Λ(w) + g Λ (1) V-1 ∂ w V -g 1 v -w Λk (v) + γ(v) (1 -γ 2 (w)) γ(w) (1 -γ 2 (v)) Λp (v) v , (3.25) where Λ = Λ M (x) Θ M A T A = Λ(w; x) + Λ (1) (x) L 1 + Λ (0) (x) K , (3.26) now is a space-time dependent element of G induced by the gauge parameter Λ M (x). In addition, the action of the generators on the vector fields needs to be properly implemented. To this end, we first compute the variation of L 0 = L kin + L top under generic variation of vector and scalar fields. A somewhat tedious but beautiful computation and ∆ Λ A M ± = D ± Λ M + (gΛ(w) -gkΛ M Θ M L 1 ) A T A,N M A N ± (3.32) -g V 1 v -w Λ k,v V-1 + 1 ± γ 1 ∓ γ V 1 v -w 1 ∓ γ 1 ± γ Λ p,v V-1 A T A,N M A N ± . Again, we use the short-hand notation according to which all spectral parameter dependent functions within the brackets • v depend on the parameter v which is integrated over, whereas all functions outside depend on the spectral parameter w. Plugging all the variations into the Lagrangian, one obtains after some lengthy computation and up to total derivatives δ Λ L 0 = -1 2 g Θ M A η AB Λ M µν X B µν , (3.33) with X µν ≡ 2D [µ Z ν] + |[ Z µ , Z ν ]| + 2 D [µ J ν] -|[ J µ , J ν ]| -g F µν M Θ M A (T A • V) V-1 , J µ ≡ V Q µ + 1 + γ 2 1 -γ 2 P µ + 2γ 1 -γ 2 µν P ν V-1 . (3.34) The calculation makes use of the covariantized version of (2.25) for Ĵµ = V-1 J µ V. The subtle part in calculating (3.33) is the check that the various terms arising from the different variations arrange into the correct covariant derivatives, as the Lagrangian and the variations have no manifest covariance. E.g. the extra A M µ contributions from (3.32) are precisely the ones needed in order to complete the correct covariant derivatives D µ on Z ν in X µν . For this it is important to note that due to the extra contributions of order g 0 in (3.31) the variation of Z µ changes with respect to the ungauged theory (2.40) to δ Λ Z ± = F (Λ, Z) -V 1 v -w V-1 F (Λ, Z) V k,v V-1 - 1 ∓ γ 1 ± γ V 1 v -w 1 ± γ 1 ∓ γ V-1 F (Λ, Z) V p,v V-1 , with F (Λ, Z) A ≡ -g Λ M (Z B µ Θ N B ) T A M N , (3.35) where indices A, B are lowered and raised with η AB and its inverse. Indeed, this is precisely consistent with the fact that in the gauged theory only the projection Z B µ Θ N B vanishes on-shell as a set of first order equations of motion for the dual potentials (3.11) -accordingly, it must transform homogeneously under gauge transformations. It remains to show that X µν vanishes. In order to do so, we first note that with the definition (3.30) of generalized covariant derivatives D µ , we find for the dual fields ρ, V D µ ρ = -µν ∂ ν ρ , D µ V V-1 = J µ , (3.36) with J µ from (3.34), changing drastically the previous expressions (3.7). 9 Now, the fact that X µν = 0 is a direct consequence of (3.36) and [ D µ , D ν ] V = H A µν T A • V , (3.37) where H µν is the field strength associated with the full connection (3.30). 
Summarizing, we have shown that under gauge transformations (3.25), (3.31) the Lagrangian L 0 = L kin + L top remains invariant up to total derivatives. The local gauge algebra is spanned by generators X M (3.1) and is a subalgebra of the global symmetry algebra G of the ungauged theory. In particular, the gauge algebra may include hidden symmetries which in the ungauged theory are realized only on-shell. Gauge fixing In the previous section we constructed the deformation of the ungauged Lagrangian (2.9) that is invariant under the local version of a subalgebra of the affine symmetry algebra G of (2.9). The gauged Lagrangian has been obtained by coupling vector fields with minimal couplings in covariant derivatives (3.8) and adding a topological term (3.9). The gauging is entirely parametrized in terms of the embedding tensor Θ M which in particular encodes the local gauge algebra with generators (3.1). With the new gauge fields and a number of dual scalar fields the gauged Lagrangian contains more fields than the original one, however as the new fields couple topologically only they do not introduce new degrees of freedom. More specifically, these fields arise with the first order field equations (4.3) below, such that the additional local symmetries precisely eliminate the additional degrees of freedom. In this section, we illustrate the various ways of gauge fixing the action and discuss the resulting different equivalent formulations of the theory. Before that, we describe the generic properties of the scalar potential which completes the construction of the bosonic sector of gauged supergravity. Scalar potential and equations of motion An important additional feature of gauged supergravity theories is the presence of a scalar potential V which is enforced in order to maintain supersymmetry of the deformed Lagrangian. Its explicit form depends on the particular ungauged theory, in particular on the number of supercharges. It must thus be computed case by case in the various supersymmetric theories and we leave this for future work. Here we will just summarize the generic properties of this potential and discuss their consequences for the gauged theory. As a general property, the potential arises quadratic in the coupling constant g, i.e. the deformed Lagrangian is supplemented by a term L pot = -g 2 V where V is bilinear in Θ M , and generically depends on all scalar fields ρ, ρ, V, V, and σ. This dependence is constrained in order that its variation takes the specific form δV = δV δρ δρ + δV δσ δσ + tr δV δΣ [V -1 δV] p + δV δ ΣA δ ΣA , (4.1) with δV δΣ ∈ p, δ ΣA ∈ G from (3.28). Furthermore the various variations of V are constrained such that (4.1) vanishes for gauge transformations (3.25), i.e. the scalar potential is separately gauge invariant. In particular, no further constraints on the embedding tensor will arise from its presence. The total Lagrangian of the gauged theory then reads L = L kin + L pot + L top (4.2) = ∂ µ ρ D µ σ -1 2 ρ tr(P µ P µ ) -g 2 V -g µν tr µ V-1 (∂ ν V -∂ w V ∂ ν ρ) -Q ν - 1 + γ 2 1 -γ 2 P ν w -A (0) µ ∂ ν ρ -1 2 g 2 µν A (0) µ A (1) ν -1 2 g 2 µν tr 1 v -w [ µ (w)] k [ Âν (v)] k v w -1 2 g 2 µν tr (γ(v) -γ(w)) 2 + (1 -γ(v)γ(w)) 2 (v -w)(1 -γ 2 (v))(1 -γ 2 (w)) [ µ (w)] p [ Âν (v)] p v w . It gives rise to the following equations of motion: ∂ µ ∂ µ ρ = -g 2 δV δσ , D µ D µ σ = -1 2 trP µ P µ -g 2 δV δρ , D µ (ρP µ ) = g 2 δV δΣ , T A,M N Θ N Z A µ = 0 , T A,M N F M µν Θ N = -2g δV δ ΣA . (4. 
3) The duality equation T A,M N Θ N Z A µ = 0 is not affected by the presence of the scalar potential while all other equations change. In particular, a vanishing field strength is in general no longer compatible with the field equations, i.e. the gauge fields have a nontrivial effect despite the fact that they are non-propagating in two dimensions. Note further, that the full covariant derivatives D µ defined in (3.30) contain nontrivial Z A µ contributions even on-shell, as only the Θ-projection of Z A µ vanishes by the equations of motion. Gauge fixing As anticipated above, the new fields V, A M µ entering the gauged Lagrangian induce first order equations of motion (4.3). Together with the additional local symmetry this implies that no new degrees of freedom are present in the gauged Lagrangian. In order to make this manifest, it may be useful to gauge-fix the local symmetry. Also in order to make contact with the theories arising from particular compactification scenarios, it will often be required to fix part of the extra local gauge symmetry, thereby effectively reducing the number of fields. In this subsection we will discuss various ways of gauge fixing the action (4.2). Let us first illustrate the relevant structures with an extremely simple toy example, we consider the Lagrangian L = -1 2 ∂ µ ϕ ∂ µ ϕ , (4.4) of a free scalar field. The global shift symmetry ϕ → ϕ + c can be gauged by introducing covariant derivatives D µ ϕ ≡ ∂ µ ϕ -gA µ . The analogue of the full gauged Lagrangian (4.2) then carries a gauge field A µ as well as a dual scalar field χ and is of the form L = -1 2 D µ ϕ D µ ϕ -g 2 V (χ) -g µν A µ ∂ ν χ , (4.5) with the three terms representing the kinetic, the potential, and the topological term, respectively. This action is obviously invariant under δϕ = gλ(x) , δA µ = ∂ µ λ(x), in particular, this restricts the potential V to depend on the dual scalar field χ only. The equation of motion derived from (4.5) are ∂ µ D µ ϕ = 0 , D µ ϕ = µν ∂ ν χ , F µν = g µν V (χ) , (4.6) where the first equation consistently coincides with the integrability condition of the second equation. There are (at least) three different ways of fixing the gauge freedom in (4.5). i) In the case of a vanishing potential V = 0, and on a topologically trivial background, the vector field is pure gauge and may be put to zero, yielding the original Lagrangian (4.4). In this case, the deformation (4.5) thus is just a reformulation of the original model. ii) For arbitrary potential V , the duality equation can be used to express A µ in terms of scalar currents. On the Lagrangian level this leads to a theory expressed exclusively in terms of the dual scalar field χ L (1) = -1 2 ∂ µ χ ∂ µ χ -g 2 V (χ) . (4.7) According to the reasoning of i), in the absence of a scalar potential this provides a dual formulation of the original model (4.4). This is (trivial) T-duality for the free scalar field. For more complicated systems the very same procedure yields the known T-duality rules in the Abelian and the non-Abelian case [START_REF] Buscher | A symmetry of the string background field equations[END_REF]. For nonvanishing potential, we obtain an equivalent formulation of the 'gauged' theory (4.5) in which the kinetic term is replaced by a T-dual version in terms of dual scalar fields, in which no gauge fields are present. The theory is in general no longer equivalent to the the original Lagrangian (4.4) due to the presence of the scalar potential in order g 2 . 
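For case ii) the text only quotes the result (4.7); with the space-time conventions stated in the footnotes (η_µν = diag(+,-), ε_01 = -ε^01 = 1, so that ε_µν ε^µρ = -δ_ν^ρ), the elimination of A_µ can be made explicit as follows. The A_µ field equation of (4.5) gives D^µϕ = ε^µν ∂_ν χ; fixing the gauge ϕ = 0 then yields g A_µ = -ε_µν ∂^ν χ, and substituting back into (4.5) one finds
-1/2 D_µϕ D^µϕ = -1/2 ε_µν ∂^ν χ ε^µρ ∂_ρ χ = +1/2 ∂_µχ ∂^µχ ,
-g ε^µν A_µ ∂_ν χ = ε^µν ε_µρ ∂^ρ χ ∂_ν χ = -∂_µχ ∂^µχ ,
whose sum reproduces the kinetic term -1/2 ∂_µχ ∂^µχ of (4.7).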
iii) For a quadratic potential V (χ) = V 0 + 1 2 m 2 χ 2 , i.e. considering the lowest order expansion around a stationary point, the equations of motion may be used to replace mgχ = F µν . Simultaneously fixing the gauge freedom by setting ϕ = 0, one arrives at a Lagrangian m 2 L (2) = -1 4 F µν F µν -1 2 g 2 m 2 A µ A µ -g 2 m 2 V 0 , (4.8) of a massive vector field which now carries the degree of freedom of the system. This is the standard Higgs mechanism in two dimensions. Gauge fixing of the general Lagrangian (4.2) is considerably more complicated due to the high nonlinearity of the system, but schematically follows precisely the same pattern. In applications to describe the effective actions of concrete compactifications with non-vanishing cosmological constant, the last procedure iii) will be often the most appropriate one in order to identify the correct distribution of the degrees of freedom among different supermultiplets. From a systematic point of view, the gauge fixing according to ii) is the most interesting. In the context of the full model (4.2) it extends to the following: the duality equations T A,M N Θ N Z A µ = 0 can be solved as algebraic equations for the vector fields Θ M A A M µ . The explicit formulas may be arbitrarily complicated of course. Plugging this back into the Lagrangian leads to an equivalent formulation of the model in which the vector fields have been completely removed from the action. As in ii) this exchanges the kinetic term by a T-dual version in terms of dual scalar fields. In this formulation the only effect of the gauging is the scalar potential which remains unaffected by the gauge fixing. We conclude that for every gauging in two dimensions there is a formulation in a T-dual frame, i.e. a formulation in terms of a combination of original and dual scalars, in which no gauge fields enter the Lagrangian and the only effect of the gauging is the scalar potential. (In general, this will not be the most convenient frame to identify a particular higher-dimensional origin.) Let us consider as an example a gauging in which a subalgebra of the zero-modes of ĝ, i.e. of the algebra of target-space isometries g is gauged. According to (3.9) this will induce a topological term which couples the gauge fields to the (algebra-valued) dual potentials Y 1 . No higher dual potentials enter the Lagrangian. Apart from some additional subtleties related to the coset structure of (3.8), the resulting couplings are precisely of the type considered in [START_REF] Hull | The gauged nonlinear sigma model with Wess-Zumino term[END_REF]. Integrating out the vector fields in absence of a scalar potential gives rise to a dual formulation of the model and reproduces the known formulas of non-Abelian T-duality [START_REF] Buscher | A symmetry of the string background field equations[END_REF][START_REF] Hull | The gauged nonlinear sigma model with Wess-Zumino term[END_REF][START_REF] De La Ossa | Duality symmetries from nonabelian isometries in string theory[END_REF][START_REF] Giveon | On nonabelian duality[END_REF][START_REF] Alvarez | Some global aspects of duality in string theory[END_REF][START_REF] Alvarez | On nonabelian duality[END_REF][START_REF] Mohammedi | On non-abelian duality in sigma models[END_REF]. 
In particular, since (in contrast to the simplified example (4.6)) the duality equations in this case carry the vector fields on both sides, the procedure gives rise to antisymmetric couplings µν ∂ µ Y 1 α ∂ ν Y 1 β B [αβ] among the dual scalar fields in the new frame. For maximal supergravity, an example of different scalar frames has been worked out in [START_REF] Fré | The general pattern of Kac-Moody extensions in supergravity and the issue of cosmic billiards[END_REF]. As discussed above, the gauge groups appearing in our construction (4.2) will in general go beyond the off-shell symmetry of the ungauged theory, i.e. beyond the target-space isomorphisms of the original σ-model. They will thus naturally lead to a far broader class of equivalent formulations of the kinetic sector, obtained after integrating out the vector fields. The proper framework to systematically incorporate these different formulations is presumably Lie-Poisson T-duality, see [START_REF] Klimcik | Dual nonAbelian duality and the Drinfeld double[END_REF][START_REF] Klimcik | Poisson-Lie T-duality[END_REF][START_REF] Sfetsos | Canonical equivalence of non-isometric sigma-models and Poisson-Lie T-duality[END_REF][START_REF] Stern | T-duality for coset models[END_REF]. We defer a systematic treatment to future work. Let us stress once more that due to the presence of a scalar potential, the gaugings (4.2) describe genuinely inequivalent deformations of the ungauged Lagrangian (2.9). Maximal supergravity One of the richest examples in two dimensions is the theory obtained by dimensional reduction from eleven-dimensional supergravity giving rise to maximal N = 16 supergravity with scalar coset space G/K = E 8(8) /SO(16) as a particular case of the integrable structures introduced above [START_REF] Julia | Kac-Moody symmetry of gravitation and supergravity theories[END_REF][START_REF] Nicolai | The integrability of N=16 supergravity[END_REF][START_REF] Nicolai | The structure of N=16 supergravity in two-dimensions[END_REF][START_REF] Nicolai | Integrability and canonical structure of d=2, N=16 supergravity[END_REF]. The symmetry of the ungauged theory is the affine algebra e 9(9) ≡ e 8 [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] . In this section we will illustrate with a number of examples the general construction of gaugings in two dimensions starting from the maximal theory. In subsection 5.2 we describe gaugings that are naturally formulated in the e 8 grading of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . These have a natural interpretation as reductions from threedimensional supergravity theories. In subsection 5.3 we describe gaugings in the sl(9) grading of e 9 , these include the SO(9) gauging corresponding to an S 8 compactification of the ten-dimensional IIA theory as well as flux gaugings from eleven dimensions. Gaugings with type IIB origin are discussed in subsection 5.4. The basic representation of E 9 In order to construct the gaugings of the maximal E 8(8) /SO [START_REF] Breitenlohner | On the Geroch group[END_REF] theory the first task is the choice of representation of vector fields used in the gauging. Extrapolating the representation structures from higher dimensions it turns out that the relevant representation for the gauge fields is the basic representation of e 9(9) , i.e. the unique level 1 representation of this affine algebra. 
In the following we will see more specifically that the basic representation reproduces precisely the structures expected from dimensional reduction; the complete proof will ultimately include consistency with the supersymmetric extension. Branching the basic representation of e 9( 9) under e 8 , the vector fields hence transform as basic (5.1) where the subscript denotes the L 0 charge of the associated Virasoro algebra. The embedding tensor Θ M transforms in the conjugate vector field representation, i.e. its components carry L 0 charges opposite to (5.1). Counting the L 0 charge in powers of a variable y, the character of the basic representation of e 9 is given by the famous McKay-Thompson series χ ω 0 (y) = j 1/3 (y) = 1 + 248 y + 4124 y 2 + 34752 y 3 + 213126 y 4 + 1057504 y 5 + . . . , (5.2) in terms of the modular invariant j(y) [START_REF] Kac | An elucidation of: "Infinite-dimensional algebras, Dedekind's η-function, classical Möbius function and the very strange formula". E (1) 8 and the cube root of the modular invariant j[END_REF][START_REF] Lepowsky | Euclidean Lie algebras and the modular function j[END_REF]. The symmetric product (3.19) takes the form [START_REF] Di Francesco | Conformal field theory[END_REF] χ ω 0 (y) ⊗ sym χ ω 0 (y) = χ vir (1,1) (y) χ 2ω 0 (y) + χ vir (2,1) (y) χ ω 7 (y) , → 1 0 ⊕ 248 -1 ⊕ (1⊕248⊕3875) -2 ⊕ (1⊕ 2•248 ⊕3875⊕30380) -3 ⊕ (2•1 ⊕ 3•248 ⊕ 2•3875 ⊕30380⊕27000⊕147250) -4 ⊕ . . . , where χ 2ω 0 and χ ω 7 denote the characters of the level 2 representations starting from a 1 and a 3875 of e 8 , respectively. As discussed in section 3. denote the lowest c = 1/2 Virasoro representations. Consistent gaugings of twodimensional maximal supergravity thus correspond to components within the expansion (5.2) such that their two-fold symmetric product is sitting in a quasi-primary state of (5.4) on the r.h.s. of (5.3). In principle, all gaugings can be determined this way. In the next subsections we work out a few examples. Gaugings in the E 8 grading According to (3.14), the embedding tensor Θ transforms in the conjugate vector field representation. It describes the couplings of vector fields to e 9(9) symmetry generators according to (3.2) D µ = ∂ µ -g A M µ Θ M A T A . (5.5) It is instructive to visualize these couplings as in Figure 1. The e 9(9) symmetry generators are plotted horizontally with the L 0 charge increasing from left to right, the vector fields are plotted vertically. The diagonal lines represent the couplings induced by each component of Θ. The figure shows that every gauging defined by a particular component of Θ involves only a finite number of hidden and zero-mode symmetries and an infinite tower of unphysical shift symmetries. As discussed above this implies in particular that only the finite number of vector fields coupled to the physical symmetries appears in the Lagrangian. The simplest gauging in this description is defined by the lowest Θ component in the basic representation, i.e. by the highest weight singlet 1 0 in (5.1). According to Figure 1 this is a gauging of only shift symmetries. As a consequence, the quadratic constraint is automatically satisfied as can be seen from its form (3.13), such that this component indeed represents a consistent gauging. Moreover, as only unphysical symmetries are involved, the gauging will be invisible in the kinetic and topological part L kin +L top of the Lagrangian. Its only contribution to the total Lagrangian (4.2) is via the scalar potential V . 
This gauging has in fact a simple higher-dimensional origin descending from dimensional reduction of the three-dimensional maximal ungauged theory [START_REF] Marcus | Three-dimensional supergravity theories[END_REF]. With the ansatz e m a = δ α µ e λ ρB µ 0 ρ , m, a ∈ {1, 2, 3} , µ, α ∈ {1, 2} , (5.6) for the three-dimensional vielbein in terms of a conformal factor λ, dilaton ρ and Kaluza-Klein vector field B µ , the three-dimensional Einstein field equations give rise to ∂ µ (ρ 3 λ -2 ∂ [µ B ν] ) = 0 , (5.7) which is solved by ∂ [µ B ν] = ρ -3 λ 2 C µν with a constant C. The ungauged twodimensional theory is obtained by setting C = 0. In contrast, keeping a non-vanishing C and thus a non-vanishing field-strength of the Kaluza-Klein vector field precisely corresponds to the singlet gauging induced by the lowest components of Θ. In accordance with the above observations the only effect of C in the Lagrangian is the creation of a scalar potential ρ -3 λ 3 C 2 descending from the kinetic term L B ∝ ∂ [µ B ν] ∂ µ B ν . As discussed after equation (2.35) the effect of this scalar potential is a deformation of the free field equation satisfied by the dilaton ρ which necessitates gauging of the L 1 shift symmetry by the Kaluza-Klein vector field B µ . This is precisely the lowest coupling exhibited in Figure 1. At the next level in Θ comes the 248 1 . According to Figure 1, the corresponding gaugings involve apart from the infinite tower of unphysical symmetries a single generator of the e 8 zero-modes which couples to the Kaluza-Klein vector field. Again one verifies that the quadratic constraint is automatically satisfied. These are precisely the Scherk-Schwarz gaugings [START_REF] Scherk | How to get masses from extra dimensions[END_REF][START_REF] Andrianopoli | Gauging of flat groups in four dimensional supergravity[END_REF][START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF] obtained from three dimensions, singling out one among the generators of the global symmetry algebra e 8 in three dimensions. At the third level, Θ has three components 1 2 , 248 2 , 3875 2 . As can be seen from Figure 1, the gaugings induced by the 248 2 for the first time involve the hidden symmetries T α,-1 coupled to the Kaluza-Klein vector field. Those gaugings described by the 1 2 ⊕ 3875 2 on the other hand involve only the e 8 zero-mode symmetries coupled to the 248 -1 vector fields. These are the theories obtained by dimensional reduction of the three-dimensional maximal gauged theories described by an embedding tensor in precisely this representation [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF]. For all these theories there is a nontrivial quadratic constraint to be satisfied by the components of Θ. To summarize, all the gaugings with three-dimensional origin are naturally identified within Figure 1. The lowest components of the vector fields in the expansion (5.1) correspond to the Kaluza-Klein vector field 1 0 and the vector fields 248 -1 descending from the three-dimensional vector fields, respectively. Higher components of the embedding tensor involve higher hidden symmetries and increasingly nontrivial quadratic constraints. A priori, it is not clear if there are nontrivial solutions of the quadratic constraint that involve arbitrarily high components of Θ in the expansion (5.1). The higher-dimensional origin of the associated gaugings remains to be elucidated. 
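Returning for a moment to the character expansion (5.2): its coefficients can be cross-checked independently, using the standard identities j^{1/3} = E_4/η^8 (from j = E_4^3/Δ with Δ = η^24) and θ_{E_8} = E_4. These identities are not used explicitly by the authors, so the following short Python script should be read as an outside verification rather than as part of their derivation; it reproduces 1, 248, 4124, 34752, 213126, 1057504, consistent also with the dimension sums in the level decomposition of the basic representation (e.g. 4124 = 1 + 248 + 3875).

from math import comb

N = 8  # number of q-powers to compute

# Eisenstein series E_4(q) = 1 + 240 * sum_{n>=1} sigma_3(n) q^n
def sigma3(n):
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# prod_{n>=1} (1 - q^n)^8, truncated at order q^(N-1)
prod = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    for k in range(9):
        if k * n < N:
            factor[k * n] += (-1) ** k * comb(8, k)
    prod = [sum(prod[i] * factor[m - i] for i in range(m + 1)) for m in range(N)]

# invert the truncated series: this gives 1/eta(q)^8 up to the overall power of q
inv = [1] + [0] * (N - 1)
for m in range(1, N):
    inv[m] = -sum(prod[k] * inv[m - k] for k in range(1, m + 1))

# coefficients of the basic-representation character chi_{omega_0} = E_4/eta^8
chi = [sum(E4[i] * inv[m - i] for i in range(m + 1)) for m in range(N)]
print(chi)  # expected to begin 1, 248, 4124, 34752, 213126, 1057504, ... as in (5.2)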
Gaugings in the SL(9) grading By far not all gaugings of two-dimensional maximal supergravity have a natural place in Figure 1. Although all of them can be identified among the components of the expansion (5.2) of the embedding tensor Θ M , the major part will be hidden at higher levels and in linear combinations of these components. In some cases it may however be possible to naturally identify them within other gradings of the affine algebra. As an example we will present in this section the theory obtained by dimensional reduction of the IIA theory on a (warped) eight-sphere S 8 [START_REF] Boonstra | The domain wall/QFT correspondence[END_REF][START_REF] Nicolai | A U(1) x SO(9) invariant compactification of D = 11 supergravity to two dimensions[END_REF][START_REF] Bergshoeff | The domain walls of gauged maximal supergravities and their M-theory origin[END_REF], which plays a distinguished role in (a low dimensional version of) the AdS/CFT correspondence [START_REF] Itzhaki | Supergravity and the large N limit of theories with sixteen supercharges[END_REF][START_REF] Boonstra | The domain wall/QFT correspondence[END_REF][START_REF] Youm | Generalized conformal quantum mechanics of D0-brane[END_REF]. Its gauge group contains an SO(9) as its semisimple part. Closely related are the compactifications on the non-compact manifolds H p,8-p that result in gauge groups SO(p, 9p). We will identify the embedding tensors Θ M that define these theories. These gaugings are most conveniently described in the sl(9) grading of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . The intersection of zero-modes of this grading and the e 8 grading of the previous section is given by e 8(8) ∩ sl(9) = sl(8) ⊕ gl(1) . (5.8) the lowest components 9 0 , 36 1/3 , 126 2/3 correspond to nontrivial fluxes associated with the vector fields in the reduction from eleven dimensions. As manifest in the figure, these gaugings involve only shift symmetries in the sl(9) grading. We will be interested by the gaugings induced by the 45 4/3 . With a little effort one may show that an embedding tensor in this representation automatically satisfies the quadratic constraint (3.13). Namely, working out the couplings induced by this 45 4/3 in Figure 2, it follows from the sl(9) representation structure that the lowest symmetry generators which are involved in the gauging are sitting in the 80 0 , the 84 2/3 , and the 80 1 . In particular, the latter couple only to the 45 -4/3 of the vector fields. 10 The form of the quadratic constraint (3.13) then shows that its only nontrivial contribution can sit in the component where M and N take values in the 36 1/3 and the 45 4/3 , respectively, i.e. live in the sl(9) tensor product 36 ⊗ 45 = 630 ⊕ 990 . Since there is no overlap with the representations actually present in the square of this embedding tensor (45 ⊗ sym 45 = 495 ⊕ 540 ), the quadratic constraint is automatically satisfied. We have thus shown that an embedding tensor in the 45 4/3 defines a consistent gauging in two dimensions. This representation can be parametrized by a symmetric 9×9 matrix Y . By fixing part of the SL(9) symmetry this matrix can be brought into the form with p + q + r = 9. Such an embedding tensor gauges a subalgebra cso(p, q, r) of the zero-mode algebra sl(9) in (5.10). The corresponding gauge fields come from the 36 -1/3 . For r = q = 0 this is the SO(9) gauging corresponding to the IIA S 8 compactification mentioned above. 
In addition there is the infinite tower of shiftsymmetries accompanying this gauging, starting from the full 84 +2/3 , a 44 inside the 80 +1 , etc. Y = diag( 1, . . . , p -1, . . . , q 0, . . . It is instructive to visualize this SO(9) gauging within the e 8 grading of Figure 1. In that table, the SO(9) singlet component of Θ which defines the gauging is a linear combination of the two SO(8) singlets appearing in the branching of the 3875 2 and the 147250 4 under SO [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF]. In the e 8 grading this gauging thus involves a number of hidden and zero-mode symmetries. More precisely, the gauge group appearing in the Lagrangian (4.2) is of the non-semisimple form G = SO(8) (R 28 + × R 8 + ) 0 × (R 8 + ) -1 , (5.13) with the (R 28 + × R 8 + ) 0 , and (R 8 + ) -1 corresponding to zero-mode symmetries and hidden symmetries from level -1, respectively. From this perspective it is thus not at all obvious that an SO(9) gauge group is realized. Instead, the "off-shell gauge group" involves the maximal Abelian (36-dimensional) subalgebra of the zero-mode e 8 . Other gradings The SO [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] example presented in the last section already shows that particular gaugings may be far more transparent within one grading than within another. It will thus be interesting to analyze the gaugings manifest in the different gradings of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . A table of the 112 maximal rank subalgebras of e 8 corresponding to the zero-mode algebras in the different gradings can be found in [START_REF] Hollowood | The 112 breakings of E 8[END_REF]. Of particular interest may be the so [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] grading giving rise to a decomposition adj → . . . ⊕ (128 s ) -1/2 ⊕ 120 0 ⊕ (128 s ) 1/2 ⊕ 120 1 ⊕ . . . , basic → 16 0 ⊕ (128 c ) -1/2 ⊕ (16 ⊕ 560) -1 ⊕ (128 c + 1920 s ) -3/2 ⊕ . . . , (5.14) of the adjoint and the basic representation, respectively. This grading is particularly adapted to identify the transformation behavior of the different Θ components (e.g. fluxes, twists, etc.) under the SO [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF][START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] Conclusions and outlook In this paper, we have presented the construction of gaugings of two-dimensional supergravity. We have shown how to consistently gauge subalgebras of the affine global symmetry algebra G of the ungauged theory by coupling vector fields in a highest weight representation of the affine algebra with a particular topological term (3.9). The gaugings are described group-theoretically in terms of a constant embedding tensor Θ M in the conjugate vector representation and subject to the quadratic consistency constraint (3.15). This tensor parametrizes the different theories, defines the gauge algebra and entirely encodes the gauged Lagrangian (4.2). The resulting gauge algebras are generically infinite-dimensional and include hidden symmetries which are on-shell and not among the target-space isometries of the ungauged theory. 
Yet, only a finite part of the gauge symmetry is realized on the Lagrangian level (with its infinite-dimensional tail exclusively acting on dual scalar fields that are not present in the Lagrangian) and only a finite number of gauge fields enters the Lagrangian. As a main result, we have shown that the total Lagrangian (4.2) is invariant under the action (3.25), (3.31) of the local gauge algebra. In absence of a scalar potential, particular gauge fixing shows that the gauging, merely amounts to a (T-dual) reformulation of the ungauged theory. A scalar potential on the other hand induces a genuine deformation of the original theory. We have worked out a number of examples for maximal (N = 16) supergravity in two dimensions which illustrate the generic structure of the gaugings. In particular, we have discussed the gaugings corresponding to those components of the embedding tensor with lowest charge with respect to several gradings of e 9(9) which allow for a straightforward higher-dimensional interpretation. The presented construction opens up a number of highly interesting questions concerning its applications as well as possible generalizations. E.g. we have motivated the particular ansatz (3.14) for the embedding tensor by the observation that it reduces the quadratic consistency constraints (3.3) and (3.13) to the same equation (3.15). Moreover, it seems in line with the findings in higher-dimensional theories that the embedding tensor transforms in the dual representation of the (D -1)-forms in a given dimension D. Yet, it would be interesting to study, if the present construction could be generalized to more general choices of the embedding tensor. A related question is the particular choice of the vector field representation. While the general bosonic construction seems to yield no preferred representation for the gauge fields (and thus for the embedding tensor) it is presumably consistency with the supersymmetric extension that puts severe constraints on this choice. The analysis of this paper has been performed for a general two-dimensional bosonic coset space sigma-model. Above all, it remains to extend the presented construction to the fermionic sector of the various supersymmetric theories. Of particular interest is the maximal (N = 16) supergravity theory. As the integrable structures of the ungauged bosonic theory naturally extend to the full theory [START_REF] Nicolai | The integrability of N=16 supergravity[END_REF][START_REF] Nicolai | The structure of N=16 supergravity in two-dimensions[END_REF][START_REF] Nicolai | Integrability and canonical structure of d=2, N=16 supergravity[END_REF] the construction should straightforwardly extend. In particular, this should elucidate the role of the basic representation which we have found relevant for the maximal theory. The construction will fix the fermionic mass terms and yield the specific form of the scalar potential. A crucial ingredient will be the representation structure of the infinite-dimensional subalgebra k(e 9 ) of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] under which the fermions transform [START_REF] Nicolai | On K(E 9 )[END_REF][START_REF] Paulot | Infinite-dimensional gauge structure of d = 2, N = 16 supergravity[END_REF][START_REF] Kleinschmidt | K(E 9 ) from K(E 10[END_REF]. What we have only started in section 5 of this paper is the study of the various resulting two-dimensional theories; this analysis needs to be addressed systematically and completed. 
In particular, at present it remains an open question if among the infinitely many parameters of the embedding tensor -combining higher-dimensional fluxes, torsion, etc. -there remain infinitely many inequivalent solutions of the quadratic constraint (3.15). Likewise, it will be interesting to analyze the possible higher-dimensional origin of higher-charge components of the embedding tensor in the various gradings. Finally, we have seen in this paper and in particular in the examples discussed, how the algebraic structures exhibited in higher-dimensional maximal gaugings are naturally embedded into infinite-dimensional representations of the affine algebra e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . E.g. Figure 1 shows how the general formulas of this paper can reproduce in particular all the properties and constraints of maximal three-dimensional gaugings. It is moreover interesting to note that reducing in dimensions, the two-dimensional theory is the first one in which the global (and subsequently gauged) symmetry e d(d) combines -via the central extension of e 9(9) -an action on the scalar matter sector with an action on the (non-propagating) gravitational degrees of freedom. It would be highly interesting to identify the higher-dimensional ancestor of this mechanism. 11 From this unifying point of view, it would of course be of greatest interest to push the construction of gauged supergravities further down to even lower dimensions, embedding these structures into the group theory of the exceptional groups E 10 [START_REF] Julia | Kac-Moody symmetry of gravitation and supergravity theories[END_REF][START_REF] Damour | E 10 and a 'small tension expansion' of M theory[END_REF] and E 11 [START_REF] West | E 11 and M theory[END_REF][START_REF] Riccioni | The E 11 origin of all maximal supergravities[END_REF][START_REF] Bergshoeff | E 11 and the embedding tensor[END_REF]. Figure 1 : 1 Figure 1: Couplings induced by different components of the embedding tensor Θ M . Figure 2 : 2 Figure 2: Couplings induced by different components of the embedding tensor Θ M . 10 This can be seen as follows. According to (3.2) and (3.14) the vector fields couple to generators asA M µ (T B,M N η AB Θ N ) T A . Since η AB is invariant under L 1 ,indices in the range A ∈ 80 1 couple to B ∈ 80 0 , i.e. in this case T B is just the SL(9). Since (5.11) is a decomposition into irreducible SL(9) components and the indices ' N ' are in the range N ∈ 45 4/3 (as this is the only non-vanishing Θ-component) the range of indices ' M ' is restricted to M ∈ 45 -4/3 . 3 above, the multiplicities χ vir (1,1) , χ vir (2,1) carry representations of the coset CFT with central charge given by (3.21), which in this case yields c = 1/2, i.e. the Ising model. Accordingly χ vir (1,1) (y) = 1 + y 2 + y 3 + 2y 4 + 2y 5 + . . . , χ vir (2,1) (y) = 1 + y + y 2 + y 3 + 2y 4 + 2y 5 + . . . , duality group.Another grading of interest is the one w.r.t. sl(8) × sl(2)adj → . . . ⊕ (28 , 2) -1/4 ⊕ ((63, 1)⊕(1, 3)) 0 ⊕ (28, 2) 1/4 ⊕ (70, 1) 1/2 ⊕ (28 , 2) 3/4 ⊕ ((63, 1)⊕(1, 3)) 1 ⊕ . . . , , 1 ⊕ 3) ⊕ (216, 1)) -1 ⊕ ((216 , 2) ⊕ 2•(8, 2)) -5/4 ⊕ . . . ,(5.15)related to the ten-dimensional IIB theory, with sl(8) and sl(2) reflecting the torus T 8 and the IIB symmetry, respectively. By regarding the representation content, it is easy to verify that the lowest entries of the basic representation in this grading correspond to the gaugings induced by IIB p-form and geometric fluxes on T 8 . 
basic → (8 , 1) 0 ⊕ (8, 2) -1/4 ⊕ (56, 1) -1/2 ⊕ (56 , 2) -3/4 ⊕ ((8 Still, the explicit calculation of the various couplings from the closed formulas may pose a considerable task. Our space-time conventions are η µν = diag(+, -), 01 = -01 = 1; i.e. η ±∓ = 1, ±∓ = ∓. For notational simplicity we use here and in the following the notation V(w) ≡ V(γ(w)), even though by definition globally V is a function of γ and thus on the double covering of the complex wplane. We will however be mainly interested in its local expansion around w = ∞ on the sheet (2.24). Note that k(ĝ) = k. Inclusion of this term would presumably require the extension of Z µ by a K-valued term proportional to the Virasoro constraints(2.11). This is in accordance with the generalized linear system proposed in[START_REF] Bernard | Twisted self-duality of dimensionally reduced gravity and vertex operators[END_REF]. For the purpose of this paper however this would complicate things unnecessarily. Also in three dimensions it is most convenient to start from a formulation of the ungauged theory in which no vector fields are present[START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | Gauged locally supersymmetric D = 3 nonlinear sigma models[END_REF]. In contrast to the present case, however, the vector fields in three dimensions are dual to the scalar fields and thus naturally come in the adjoint representation of the scalar isometry group. The coupling constant g always comes homogeneous with the embedding tensor and could simply be absorbed by rescaling Θ M A . We will keep it explicitly to have the deformation more transparent. Comparing (3.7) to(2.6) one notices that Q µ ≡ [V -1 D µ V] k = Q µ doesnot depend on the coupling constant g. This is due to our particular SO[START_REF] Breitenlohner | On the Geroch group[END_REF] gauge choice in equation (2.28). In fact, equations(3.36) suggest to think of Z µ as some composite connection within the full affine algebra. The explicit form of(3.16) suggests that in higher dimensions this corresponds to gaugings defined by an embedding tensor of the particular form Θ M A = η AB t B,M N θ N , Θ M 0 = θ M , parametrized in terms of a θ M in the conjugate vector field representation, where the global symmetry algebra t A Acknowledgments We wish to thank B. de Wit, O. Hohm, A. Kleinschmidt, H. Nicolai, M. Roček, I. Runkel, S. Schäfer-Nameki, and M. Trigiante for very helpful comments and discussions. It is useful to give the projected vector fields (3.5) using (3.14) .16) This further suggests that the vector fields A M µ transform in some irreducible highest weight representation of G. Namely, in that case there is for any given M an integer M such that (T β,m ) N M = 0 , for all m > M . (3.17) Formula (3.16) then shows that for every gauging defined by an embedding tensor Θ M with only finitely many non-vanishing entries, the projected vector fields A α µ (w) carry only finitely many positive powers of w. As a consequence, only finitely many of the A M µ enter the Lagrangian (3.8), (3.9), which is certainly indispensable for a meaningful action. Moreover, it follows from (2.26) that the terms ∂ w V V-1 and VZ µ (w) V-1 have expansions in 1/w starting with w -2 and w -1 , respectively. From the variation (3.10) we thus find that the positive mode vector fields A α,m µ , m > 0, do not enter the Lagrangian at all. I.e. a gauging of the shift symmetries of the dual potentials is not visible in the Lagrangian. 
From the Lagrangian itself this fact is not obvious since the quadratic constraint was used to derive (3.10). Only a truncation of the full gauge group is thus manifest in the Lagrangian. We will see this realized in explicit examples in section 5. In the rest of this section, we will show that every embedding tensor of the form (3.14) with Θ M satisfying (3.15) defines a gauge invariant Lagrangian. The quadratic constraint Let us pause for a moment and reconsider the present construction. We have constructed the gauged Lagrangian (3.8), (3.9) by covariantizing the ungauged theory and adding a topological term such that variation with respect to the new gauge fields yields the scalar duality equations. The gauging is entirely parametrized in terms of the embedding tensor Θ M . At first sight the formalism of the embedding tensor may seem unnecessarily heavy in two dimensions. As the new gauge fields enter the Lagrangian only in the contracted form A A µ ≡ A M µ Θ M A , could we not have started right away from a set of vector fields A A µ in the adjoint representation rather than introducing A M µ in some yet undetermined representation, and Θ M A separately? The answer is no. Consistency of the construction essentially depends on the quadratic constraint (3.15) on the embedding tensor which in particular implies that not all components of the projected A A µ are independent. This is most conveniently taken care of by explicitly introducing Θ M A . shows that this variation may be cast in the following compact form The quadratic constraint (3.15) on Θ M is essential in the derivation of this result. In expressing the generic variation we have introduced the "covariantized" variations and generalized field strength and covariant derivatives according to These expressions differ from the standard definitions of field strength and covariant derivatives by the appearance of the current Z µ containing the duality equations of the ungauged theory. Recall that in the gauged theory only its Θ-projection (3.11) is zero by the equations of motion. Its natural appearance in (3.29) motivates the introduction of generalized covariant derivatives D Note that as Z µ contains only negative powers of w, it only couples to shift symmetry generators in the covariant derivatives. Thus, for all physical fields ρ, V, there is no difference between the full covariant derivative D and (3.2) defined above. In view of (3.27), (3.29), a natural ansatz for the transformation of the vector fields is Indeed, the main result we establish in this section is the invariance of the full Lagrangian L 0 = L kin + L top under the combined action (3.25), (3.31) of the local gauge algebra. We now give a sketch of the proof. Computing the covariantized variations (3.28) for the gauge transformations (3.25) yields Denoting by e8 and sl 9 the charges associated with the e 8 and the sl(9) grading, respectively, they are related by sl 9 = e8 + q , (5.9) where q ∈ 1 3 Z is the charge associated with the gl(1) factor in (5.8). E.g. the level in the e 8 grading of the adjoint representation decomposes as under sl [START_REF] Maison | Are the stationary, axially symmetric Einstein equations completely integrable?[END_REF] where the subscript on the r.h.s. indicates sl 9 . This shows in particular that the sl( 9) algebra building the zero-modes in this grading is composed out of the 8 , 1 ⊕ 63, and 8 with e8 charges -1, 0, and 1, respectively. The adjoint representation in the sl( 9) grading takes the well known form adj → . . . 
⊕ 80 -1 ⊕ 84 -2/3 ⊕ 84 -1/3 ⊕ 80 0 ⊕ 84 1/3 ⊕ 84 2/3 ⊕ 80 1 ⊕ . . . . It is instructive to note that the parts with coinciding ( sl 9 mod 1) in (5.11) constitute the three irreducible representations under the sl(9) subalgebra of (5.10) (this can be inferred, for example, from the decompositions given in [START_REF] Kac | Decompositions of representations of exceptional affine algebras with respect to conformal subalgebras[END_REF]). With the vector fields decomposed as (5.11), it is straightforward to identify the eleven-dimensional origin of the lowest components. These are the Kaluza-Klein vector (9 0 ), the vector fields that originate from the three-form (36 -1 ) and the vector fields coming from the dual six-form (126 -2 ) of eleven-dimensional supergravity. A priori, a possible eleven-dimensional origin of the higher components remains unclear. Note however, that we have already identified a higher-dimensional origin for different vector fields than in the reduction from three dimensions discussed in the previous section. Analysis of more complicated dimensional reductions may disclose a higher-dimensional origin of yet other vector fields within the basic representation of e 9 [START_REF] Korotkin | Yangian symmetry in integrable quantum gravity[END_REF] . The embedding tensor Θ M transforms in the conjugate vector field representation. Accordingly, we may try to identify the gaugings associated with the various components of Θ in the expansion conjugate to (5.11). The induced couplings are schematically depicted in Figure 2. Similar to the discussion in the previous section, A The algebra G -useful relations The algebra G extending the affine algebra ĝ by L 1 is generated by generators T α,m , L 1 , K , with commutation relations and all other commutators vanishing. We parametrize an arbitrary algebra element as Another relation that we will repeatedly make use of is has been extended by the generator t (0) defining the global (on-shell) scaling symmetry of metric and p-forms. These theories have not yet been considered in [START_REF] Nicolai | Maximal gauged supergravity in three dimensions[END_REF][START_REF] De Wit | On Lagrangians and gaugings of maximal supergravities[END_REF] and belong to the class of supergravities without actions whose nine-dimensional members have been studied in [START_REF] Bergshoeff | Non-)Abelian gauged supergravities in nine dimensions[END_REF].
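The displayed commutation relations of appendix A appear to have been lost in the text extraction. For orientation only -- the authors' precise normalization cannot be recovered from the surrounding text -- the standard relations for an affine algebra ĝ with structure constants f^αβ_γ and invariant form κ^αβ, extended by L_1 and the central element K, would read
[ T^{α,m} , T^{β,n} ] = f^αβ_γ T^{γ,m+n} + m κ^αβ δ^{m+n,0} K ,
[ L_1 , T^{α,m} ] = -m T^{α,m+1} ,   [ K , T^{α,m} ] = [ K , L_1 ] = 0 .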
92,884
[ "837381", "840037" ]
[ "13", "35882" ]
01474498
en
[ "info" ]
2024/03/04 23:41:46
2017
https://hal.science/hal-01474498/file/CraciunDeschaudGouletteSIM_13Nov16.pdf
Daniela Craciun
Jean-Emmanuel Deschaud
Francois Goulette
Automatic Ground Surface Reconstruction from Mobile Laser Systems for Driving Simulation Engines
Keywords: surface reconstruction, LiDAR, driving simulator engines, road network, mobile laser systems
Driving simulation engines represent a cost-effective solution for vehicle development, being employed for performing feasibility studies, failure tests and for assessing new functionalities. Nevertheless, they require geometrically accurate and realistic 3D models in order to allow drivers' training. This paper presents the Automatic Ground Surface Reconstruction (AGSR) method, a framework which exploits 3D data acquired by Mobile Laser Scanning (MLS) systems. Such systems are particularly attractive due to their fast acquisition at terrestrial level. Nevertheless, such a mobile acquisition introduces several constraints for the existing 3D surface reconstruction algorithms. The proposed surface modeling framework produces a regular surface and recovers sharp depth features within a scalable and detail-preserving pipeline. Experimental results on real data acquired in urban environments allow us to conclude on the effectiveness of the proposed method.
Introduction
Driving simulation engines require geometrically accurate and realistic 3D models of urban environments. Nowadays, such 3D models are computed manually by graphic designers who combine a wide variety of data ranging from GPS car maps to aerial images, passing through GIS data [START_REF] Despine | Realistic road modelling for driving simulators using GIS data[END_REF]. However, the resulting 3D models lack geometrical accuracy and photorealism, therefore limiting drivers' training in real conditions. A more difficult task is represented by the road modeling process, as it requires very accurate geometrical information in order to supply drivers' perception of car maneuverability. In order to overcome the limitations of the existing 3D road modeling methods, several research projects [2] are directed towards the use of MLS systems, which allow sensing their surrounding environment with high sampling rates at high vehicle velocities. MLS systems provide geometrically accurate 3D measurements at terrestrial level over large scale distances. Nevertheless, such mobile acquisition results in a high amount of data which requires a fully automatic road surface reconstruction framework. When dealing with the surface reconstruction problem using 3D point clouds acquired by MLS, several key issues must be addressed, such as ensuring scalability while preserving sharp depth changes and geometrical details, which are often sensitive to smoothing operations. The research work reported in this paper aims at exploiting 3D data acquired by an MLS system for the automatic generation of geometrically accurate surface reconstructions in urban environments for driving simulation engines. In this paper we propose a fully automatic surface reconstruction framework for roads and sidewalks which copes with the aforementioned constraints imposed by MLS systems, while fulfilling the requirements of driving simulation engines. The paper is organized as follows. Section 2 introduces our method for improving perceptive realism from 3D data acquired by MLS systems and the implementation of ground 3D models within the simulator software. The next section presents the existing solutions for ground surface reconstruction from 3D point clouds acquired by MLS. 
Section 5 provides an overview of our framework which is driven by a ground segmentation module presented in Section 6. The ground points are exploited along with a novel surface reconstruction pipeline described in Section 7. Section 8 evaluates the performances of the proposed framework, while Section 9 presents quantitative results obtained over large scale distances. Section 10 summarizes the obtained results and presents future extensions of our method.
Perceptive Realism from 3D Point Clouds acquired by MLS Systems
Driving simulation engines represent a cost-effective alternative for improving vehicle development. Such systems allow the simulation of a wide variety of traffic scenarios with visually enriched environments for developing vehicle dynamics, driving assistance systems and car lighting.
Perceptive realism from scanned reality. Driving simulation engines fuse visual, audio and motion senses within a global architecture composed of several modules. A detailed description of the functional structure of a simulator engine can be found in [START_REF] Bouchner | Car dynamics model design for interactive driving simulation use[END_REF]. The spatiotemporal coherence in a driving simulation engine is a major concern. It is related to proprioceptive integration, i.e. humans' sensitivity to delay and perception incoherence (depth, motion) [START_REF] Petit | visualization et évaluation d'images (HDR). Application la simulation de conduite nocturne[END_REF], [START_REF] Brémond | La visibilite routière: une approche pluri-disciplinaire[END_REF]. If these issues are not treated accordingly, they can lead to severe misperception, headaches and accidents. A major concern in car manufacturing is represented by the use of realistic data and driving scenarios for designing adapted functional units. This requires consistent resources for collecting real-time traffic information such as vibrations, visual databases, sounds and traffic incidents. As presented in [START_REF] Chaperon | The new PSA Peugeot-Citroen advanced driving simulator: Overall design and motion cue algorithm[END_REF], realistic restitution of longitudinal and lateral acceleration improves realism during driving simulation. A critical component in generating a suitable visual layer for driving simulation engines is represented by the realism of the 3D model, which must be correlated with both car vibrations [START_REF] Bolling | Shake: an approach for realistic simulation of rough roads in a moving base driving simulator[END_REF] and the sound component.
Visual layer from MLS data. The use of GIS (Geographic Information Systems) data within driving simulator engines provides an effective testbed for vehicle development. The visual layer is composed of two main ingredients: 3D environment models and the road network supplied by GIS datasets. Nowadays, such 3D models are created by graphic designers through the use of manual frameworks. In the presence of occlusions, missing data is filled with synthetic information extracted from similar non-occluded areas. Such workflows do not provide a real model, producing drivers' misperception. In addition, continuous change in urban planning requires up-to-date 3D models and GIS datasets. This calls for automatic procedures capable of surveying and generating 3D models over large distances in a relatively short time. Furthermore, the cost of generating 3D models manually represents on average a third of the overall expenses required by a driving simulation engine. 
From MLS data to scalable road networks via logical description. In order to overcome the fastidious processing of manual methods, the design of automated 3D modeling frameworks becomes a must. In addition, with the new advancements in mobile mapping systems, it is now possible to acquire real data at terrestrial level while driving in normal traffic conditions. This allows acquiring real data and generating 3D models over large distances within a cost-effective methodology. Nevertheless, such a mobile acquisition results in a high amount of data which requires automated 3D modeling frameworks. The workflow presented in this paper was developed within an ongoing research project [2], which focuses on the generation of geo-specific 3D models for driving simulation engines in order to allow vehicle design and drivers' training with minimal costs. The project is mainly concerned with the design of an automatic framework capable of generating geometrically accurate 3D models from MLS data acquired over large distances. The reconstructed ground surfaces generated by our algorithm are further exploited via a logical description of road networks encoded in different file formats, such as CityGML [8] or RoadXML [START_REF] Roadxml | [END_REF], accepted by driving simulation engines. Such formats are widely employed to supply the software of driving simulation engines. A good example is SCANeR TM [START_REF] Oktal | [END_REF], which provides a complete description of the road network for a variety of driving simulation engines.
Open Issues for Ground Surface Reconstruction from MLS datasets
Mobile mapping systems (MMS) equipped with active 3D sensors are well adapted for acquiring densely sampled 3D measurements of the underlying surface while driving in normal traffic conditions. Nevertheless, such a discrete representation must be further exploited in order to build a continuous surface by means of 3D modeling. The existing surface reconstruction systems have reached a maturity level when dealing with stop-and-go mapping systems. However, when the input data is a 3D point cloud delivered by the latest mobile mapping systems, new key issues must be addressed in order to cope with several constraints such as mobile acquisition, scalability and the detail-preserving capabilities required for the surface reconstruction algorithms. The mobile acquisition introduces new challenges to the existing surface reconstruction algorithms. These are represented by internal and external calibration steps, the accuracy of the sensor localisation and other parameters related to the acquisition (distance to the scanned surface, incident angle, surface geometry, etc.), which must be carefully identified and modeled correspondingly. MMS must be embedded with 3D modeling frameworks capable of scaling up over large distances, while preserving geometrical details such as sharp features and depth changes. This is required in order to process big data sets in a fully automatic fashion and to design consistent levels of detail (LOD) for multi-resolution mapping of geo-specific 3D model databases. In addition, scalability issues must be addressed in order to deal with real-time rendering of big data sets acquired over large distances. This paper is concerned with the ground surface reconstruction problem, which in man-made outdoor environments corresponds to the road, sidewalk and ramp access areas. 
These are structured areas, including sharp depth changes and geometrical details, which need to be preserved in order to cope with the accuracy required by the visual layer of driving simulation engines. This requires noise smoothing procedures able to deal with MLS datasets in order to eliminate noise while preserving sharpness. This is an open issue which must be addressed in order to provide a highly accurate surface of road borders, ramp access and other geometric details. This paper presents a fully automatic algorithm designed for supplying ground surface reconstruction from MLS datasets. The proposed framework addresses the aforementioned constraints, being able to preserve geometric details, while being scalable over large distances. State-of-the-Art on Ground Surface Reconstruction Systems Existing systems on ground surface reconstruction can be classified with respect to ground modeling and surface reconstruction algorithms. This section reviews main methods belonging to each class. Ground modeling systems proceed either by first building a mesh and then segmenting the ground from the mesh based on different criterions, or by first extracting the ground from the point cloud and then reconstruct the surface. In [START_REF] Carlberg | Fast surface reconstruction and segmentation with ground-based and airborne lidar range data[END_REF], the ground is segmented from the reconstructed mesh based on a proximity criterion applied over the triangles. In [START_REF] Wiemann | Automatic construction of polygonal maps from point cloud data[END_REF], authors propose a ground modeling procedure for indoor environments which allows floor, wall and ceiling segmentation based on planar clustering procedure. The aforementioned methods proceed by meshing the entire point cloud, resulting in extra-time computation for reconstructing the non-ground objects and for eliminating them. When only the ground is required, it is more efficient to first extract the ground and then to build the mesh. The ground surface reconstruction workflow presented in this paper separates first the ground from the non-ground objects and proceeds by reconstructing the 3D point cloud belonging to the ground. This allows to apply adapted surface reconstruction frameworks, consistent with planar (road, floor) or non-planar (complex) objects. The state of the art on ground modeling systems can be roughly classified in semiautomatic [START_REF] Yu | 3D reconstruction of road surfaces using an integrated multi-sensory approach[END_REF] and automatic frameworks. In [START_REF] Yu | 3D reconstruction of road surfaces using an integrated multi-sensory approach[END_REF], authors introduce a high-resolution surface reconstruction algorithm for road areas selected by human operator. While improving the state of the art with a highly accurate ground reconstruction method, the proposed framework is not adapted for automatic processing over large scale distances. Mesh-based methods reported in [START_REF] Carlberg | Fast surface reconstruction and segmentation with ground-based and airborne lidar range data[END_REF], [START_REF] Jaakkola | Retrevial algorithms for road surface modelling using laser-based mobile mapping[END_REF] present the advantage of being automatic and thus adapted for on-line data acquisition and processing over large distances. Nevertheless, several key issues of the existing techniques stand in the capability to reconstruct sharp depth features and geometrical details, while being scalable over long distances. 
The second main processing block required by a ground modeling system is represented by surface reconstruction method for which reported frameworks can be classified in two major classes: implicit and explicit methods. The first class of algorithms [ 1 5 ] , [ 1 6 ] proceed by polynomial fitting, while the second class [START_REF] Edelsbrunner | Shape reconstruction with Delaunay complex[END_REF], [START_REF] Gopi | Surface reconstruction based on lower dimensional localized Delaunay triangulation[END_REF] reconstruct the surface by triangulating directly the 3D points, resulting in a surface very close to the acquired 3D point cloud. In this research work we are interested in generating a triangular mesh as close as possible to the scanned surface and thus, an explicit method is more adapted to our application. The results obtained are compared to two well known surface reconstruction methods belonging to each class: Poisson [START_REF] Kazhdan | Poisson surface reconstruction[END_REF] and Greedy projection [START_REF] Marton | On fast surface reconstruction methods for large and noisy point clouds[END_REF] techniques. Overview of the Automatic Ground Surface Reconstruction (AGSR) Algorithm The proposed framework does not exploit any assumption for the acquisition setup, being therefore suitable for 3D point clouds acquired by a large variety of MMS [START_REF] Bohren | Litte ben: The ben franklin racing team's entry in the 2007 DARPA urban challenge[END_REF]. The algorithm presented in this paper was tested on various datasets supplied by two different MLS systems which are illustrated in Figure 1. The proposed ground surface reconstruction algorithm comes together with a parallel scheme designed for supplying massive 3D point cloud processing acquired by MLS systems. Figure 2 depicts the global architecture of the AGSR method. The main input is a massive 3D point cloud which is first sliced into 3D chunks, with N Mpts (Million points) per chunk. The length of a 3D chunk varies with the vehicle speed. In Figure 3 (a) it can be observed an example of a 3D chunk extracted from a survey performed over an urban area located in Paris, France. In this research work, the surface reconstruction method exploits a 3D point cloud segmentation and classification algorithm [ 2 2 ] for the ground extraction phase. This procedure assigns semantic labels to each 3D measurement, providing a classification output with different classes: ground (composed by the road, sidewalks and ramp access), buildings, urban furniture and cars. Such a semantic labeling scheme provides two advantages: (i) it gives the possibility to parallelize the surface reconstruction at class level, while adapting the surface reconstruction method with respect to its geometric properties; (ii) in dynamic environments, when similar objects are detected, the already computed model can be inserted within a global reference scene. Fig. 2 The global architecture of the AGSR method and its integration within a parallel computation scheme dedicated to massive 3D point cloud processing. On the right side are illustrated the outputs corresponding to three procedures composing the workflow: ground extraction -green, surface reconstruction -orange, merged decimated meshes -blue. The second phase of the algorithm exploits the 3D point cloud corresponding to the ground to build a triangular mesh. 
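The chunk-and-dispatch scheme described above can be sketched as follows. This is only an illustrative Python sketch, not the authors' C++/PCL implementation: the chunk size, the crude height-based ground test and the function names are assumptions standing in for the actual segmentation and meshing stages detailed in the next sections.

```python
# Illustrative sketch only: a massive MLS cloud is cut into fixed-size chunks which are
# processed independently, mirroring the per-chunk parallel scheme of Figure 2.
from multiprocessing import Pool
import numpy as np

def split_into_chunks(points, chunk_size):
    # slice the cloud along the acquisition order into 3D chunks of chunk_size points
    return [points[i:i + chunk_size] for i in range(0, len(points), chunk_size)]

def extract_ground(chunk):
    # crude placeholder for the elevation-image classification of [22]:
    # keep the points lying close to the lowest elevation of the chunk
    z = chunk[:, 2]
    return chunk[z < z.min() + 0.5]

def process_chunk(chunk):
    ground = extract_ground(chunk)
    # ... here the ground points would be triangulated, cleaned, smoothed and decimated
    return ground

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 10.0, size=(350_000, 3))   # stand-in for a real MLS survey
    with Pool() as pool:
        ground_chunks = pool.map(process_chunk, split_into_chunks(cloud, 100_000))
    print(len(ground_chunks), "chunks processed")
```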
The algorithm starts by a planar Delaunay triangulation process which is followed by a smoothing phase in order to reduce noise, generating thus a regular and drivable ground surface. In order to cope with scalability issues over large scale distances, a decimation stage is applied to the smoothed mesh. In a final step, a global referential frame is updated with each mesh corresponding to each 3D chunk. The proposed workflow is designed to be applied in parallel to each 3D chunk. The following two sections are dedicated to a detailed description of the two main processing blocks of the AGSR framework: the automatic ground extraction step and the surface reconstruction phase. Automatic Ground Extraction This section describes ground extraction procedure which represents the first step in the 3D modeling process. In this work we employ the segmentation algorithm proposed in [START_REF] Serna | Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning[END_REF]. It exploits elevation images along with Mathematical Morphology [START_REF] Matheron | Random sets and integral geometry[END_REF] [24] tools. The algorithm is composed of three processing blocks: the first one is dedicated to the projection of the 3D point cloud onto an elevation image. In a second step, ground and object segmentation is processed by analyzing discontinuities over the elevation images. Facades segmentation is performed by identifying highest vertical structures. The final procedure back-projects each pixel of the elevation image to the 3D space. Each 3D point is labelled, allowing to recover 3D points corresponding to the ground. Figure 3 (b) presents the segmentation and classification result corresponding to the input point cloud depicted in Figure 3 (a). Figure 3 (c) depicts the 3D point cloud representing the ground composed by roads, sidewalks and accessibility ramps. For a complete description of the 3D point cloud classification procedure, the reader may refer to [START_REF] Serna | Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning[END_REF]. The 3D points belonging to the ground are identified with respect to the labeling provided by the classification procedure and exploited along with the surface reconstruction process which is described in the following section. The ground surface reconstruction module transforms a 3D point cloud labelled as ground ( illustrated in Figure 3 (c)), into a continuous and scalable surface representation. The proposed framework is composed by several steps which are illustrated in Figure 2 and described through the following sections. First, the 3D point cloud representing the ground is triangulated in the (x, y) plane using a Delaunay triangulation algorithm [START_REF] Shewchuk | Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator[END_REF] which provides points connectivity. Then, we apply a mesh cleaning process to eliminate long triangles. In order to provide a continuous and regular surface model of the road, we apply the Sinc Windowed smoothing algorithm [START_REF] Taubin | Optimal surface smoothing as filter design[END_REF] which eliminates high frequencies, while preserving sharp depth features and avoiding surface shrinkage. 
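The elevation-image idea underlying the ground extraction step can be illustrated as follows. The actual method of [START_REF] Serna | Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning[END_REF] relies on mathematical-morphology operators and facade detection; the sketch below only shows the projection onto a 2D elevation grid and a naive height-above-minimum test, with a cell size and a height threshold chosen arbitrarily.

```python
import numpy as np

def ground_mask(points, cell=0.2, max_height=0.25):
    """Naive elevation-image ground test: project to a 2D grid, keep the per-cell minimum
    elevation, and label as ground the points lying close to that minimum."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]      # flatten the 2D cell index
    order = np.argsort(keys)
    sorted_keys, sorted_z = keys[order], points[order, 2]
    z_min = np.empty(len(points))
    boundaries = np.flatnonzero(np.diff(sorted_keys)) + 1
    for seg in np.split(np.arange(len(points)), boundaries):
        z_min[order[seg]] = sorted_z[seg].min()            # per-cell minimum elevation
    return points[:, 2] - z_min < max_height

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 10, size=(100_000, 3)) * [1.0, 1.0, 0.02]   # mostly flat "road"
    pts[:1000, 2] += 2.0                                             # a few "object" points
    mask = ground_mask(pts)
    print(mask.sum(), "points labelled ground out of", len(pts))
```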
In a final step, a progressive decimator [27], [START_REF] Hoppe | Progressive meshes[END_REF] is applied to the smoothed mesh in order to cope with scalability constraints when performing surface reconstruction over large-scale distances. The decimation phase provides a surface representation with low memory usage, enabling efficient data transmission and visualization. In addition, the decimation procedure enables progressive rendering in order to deal with the real-time constraints imposed by driving simulation engines.

Point Cloud Triangulation

Let P = {(x_i, y_i, z_i) | i = 1, ..., N_p} denote the 3D point cloud corresponding to the ground, where N_p is the number of points. We apply the Triangle algorithm [START_REF] Shewchuk | Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator[END_REF] to the 3D point cloud P to generate a planar constrained Delaunay triangulation with angles no smaller than 30°. Let M_DT denote the resulting ground mesh, which has N_t ≈ 2 N_p triangles.

Long Triangles Elimination

In order to eliminate the long triangles generated by non-uniform boundary points, we perform statistics over the edge lengths and identify a maximum admissible length, noted e_max. We observed that long edges correspond to e_max ≈ δ ē, where ē denotes the mean length computed over all edges of the mesh M_DT, i.e. over all triangles t_j ∈ M_DT, j = 1, ..., N_t, and their corresponding edges e_j^i, i ∈ {1, 2, 3}. The term δ denotes a proportionality factor. A triangle t_j is eliminated if any of its edges satisfies e_j^i > e_max, i ∈ {1, 2, 3}. This criterion ensures that only long triangles belonging to the boundary are eliminated; moreover, since small triangles are not eliminated, holes cannot be generated within the mesh. In practice, for several datasets acquired in urban areas by different MLS systems [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualization and 3D metrology[END_REF], [START_REF] Goulette | An integrated on-board laser range sensing system for on the way city and road modelling[END_REF], we found that a coefficient δ = 20 results in a cleaned mesh, i.e. without long triangles, which we note M_C.

Building a Regular Surface

As illustrated in Figures 4 (a) and (b), the triangulation of noisy 3D measurements results in high-frequency peaks. Since we want to inject the ground surface model into driving simulation engines, an important issue which needs to be addressed is the geometrical accuracy: the 3D model must be distortion-free and regular. In order to obtain a regular surface, the Sinc Windowed smoothing procedure [START_REF] Taubin | Optimal surface smoothing as filter design[END_REF] is applied, which approximates an ideal low-pass filter on the polyhedral surface in order to eliminate high-frequency peaks. Figures 4 (c) and (d) illustrate the resulting smoothed mesh, noted M_S; it can be observed that the Sinc Windowed smoothing technique provides a regular surface, while preserving the sharpness of road and sidewalk borders.

Scalability

The smoothed mesh has a high number of triangles, being redundant and causing high memory usage. Moreover, in order to merge several mesh segments into a global scene, the mesh resolution must be drastically reduced. To this end, we apply the progressive decimation method [27], [START_REF] Hoppe | Progressive meshes[END_REF], mainly the default implementation available in the VTK library [START_REF] Schroeder | The visualization toolkit[END_REF].
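The planar triangulation and the δ·ē long-edge criterion described above can be sketched in a few lines. SciPy's unconstrained Delaunay triangulation is used here instead of the Triangle library with its 30° angle bound, and the synthetic input only stands in for real ground points, so this illustrates the cleaning criterion rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_clean(ground_pts, delta=20.0):
    """Planar Delaunay triangulation of the ground points, followed by the removal of
    triangles having an edge longer than e_max = delta * mean_edge_length."""
    tri = Delaunay(ground_pts[:, :2])            # triangulate in the (x, y) plane
    simplices = tri.simplices                    # (Nt, 3) vertex indices
    p = ground_pts[:, :2]
    a, b, c = p[simplices[:, 0]], p[simplices[:, 1]], p[simplices[:, 2]]
    edges = np.stack([np.linalg.norm(a - b, axis=1),
                      np.linalg.norm(b - c, axis=1),
                      np.linalg.norm(c - a, axis=1)], axis=1)
    e_max = delta * edges.mean()
    keep = (edges <= e_max).all(axis=1)          # drop triangles with any long edge
    return simplices[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 50, size=(20_000, 3)) * [1.0, 0.1, 0.01]   # elongated "road strip"
    faces = triangulate_and_clean(pts)
    print(len(faces), "triangles kept after removing long boundary triangles")
```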
The mesh resolution r(M_D) is controlled by the reduction factor, noted f_D. The algorithm proceeds as follows: first, each vertex is classified and inserted in a priority queue for further processing. The priority is set in order to minimize the distance to the original mesh caused by the vertex elimination and by the re-triangulation of the resulting hole. As stated in [START_REF] Schroeder | Decimation of triangle meshes[END_REF], depending on the vertex type (simple, interior, boundary, etc.), a different distance criterion is computed (distance to plane, distance to edge). Let M_D denote the decimated mesh, and N_D the corresponding number of triangles. Figure 5 illustrates the result obtained for the input point cloud depicted in Figure 3 (c) when removing f_D = 90% of the entire mesh. The remaining number of triangles corresponds to a mesh resolution of r(M_D) = 10% of the original mesh. It can be observed that the decimation algorithm preserves the reconstruction of the road, sidewalk borders and accessibility ramps. In order to emphasize the detail-preserving capability of the decimation algorithm, Figure 6 illustrates the speed bump reconstruction after applying a maximal mesh reduction factor of f_D = 90%.

Accuracy of the decimated mesh. As in [START_REF] Turnet | Watertight planar surface meshing of indoor points clouds with voxel carving[END_REF], we evaluate the accuracy of the decimated mesh by measuring the distance between the original point cloud P and the vertices of the decimated mesh M_D. We choose to compute the Hausdorff distance [START_REF] Cignoni | Measuring error on simplified surfaces[END_REF], and we study both the mean and the root mean squared (RMS_H) distance for different mesh resolutions r(M_D). We observed that the mean is less sensitive to the decimation process, while RMS_H varies with a higher, although negligible, amplitude (±10^-3 m). This lets us conclude that the memory usage can be reduced by a maximal factor of f_D = 90% without sacrificing the accuracy of the model.

Table 1 Accuracy evaluation of the ground surface reconstruction with respect to ground truth (GT) data for dataset Urban ♯2 (acquired over the Cassette road situated in Paris, France), illustrated in Figure 6.

Dataset Urban ♯2    H_sidewalk   H_ramp    W_road
GT                  10.5 cm      2.5 cm    3.5 m
Reconstruction      10.1 cm      2.3 cm    3.514 m

Performance Evaluation

We evaluate the performance of the proposed framework in terms of accuracy, memory usage and computation time.

Accuracy evaluation. We quantify the accuracy of the reconstructed surface with respect to several ground truth measurements performed manually on site (Cassette road, situated in Paris, France), mainly: the height of the sidewalk border, the height of the access ramp and the road width, noted H_sidewalk, H_ramp and W_road, respectively. Table 1 gives the ground truth and the reconstructed dimensions for the dataset Cassette (Urban ♯2) illustrated in Figure 6. It can be observed that the achievable accuracy is better than 1.5 cm.

Computation time. We evaluate our algorithm on a 64-bit Linux machine, equipped with 32 GB of RAM memory and an Intel Core i7 running at 3.40 GHz. Our method is implemented in C/C++ and exploits the PCL [START_REF] Rusu | 3D is here: Point Cloud Library (PCL)[END_REF] and VTK [START_REF] Schroeder | The visualization toolkit[END_REF] libraries. Table 2 illustrates the computation time obtained for the dataset Urban ♯2.
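The accuracy measurement described above (distances between the original ground points P and the vertices of the decimated mesh M_D) can be reproduced in a few lines. The sketch below uses a k-d tree for the nearest-neighbour queries and reports mean, RMS and maximum values, the maximum playing the role of a one-sided Hausdorff distance; the 90% random subsampling only mimics the effect of an f_D = 90% decimation.

```python
import numpy as np
from scipy.spatial import cKDTree

def decimation_error(original_pts, decimated_vertices):
    """Distance statistics from the original ground points P to the vertex set of the
    decimated mesh M_D (a vertex-to-point proxy for the Hausdorff-type evaluation)."""
    d, _ = cKDTree(decimated_vertices).query(original_pts)
    return {"mean": d.mean(),
            "rms": np.sqrt((d ** 2).mean()),
            "max": d.max()}

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 10, size=(50_000, 3))
    kept = pts[rng.random(len(pts)) > 0.9]      # keep ~10% of the points
    stats = decimation_error(pts, kept)
    print({k: round(v, 4) for k, v in stats.items()})
```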
We can observe that the decimation step is the most expensive phase, being related to the decimation factor f D . In this example, a maximum decimator factor was used f D = 90% for a mesh with 2 MTriangles, which results in 9 sec of computation time. Memory usage. Table 3 illustrates the memory usage for each surface reconstruction step. It can be observed that the mesh representation is more efficient than the point-based one, allowing to reduce the memory usage 3 times for the full resolution mesh and 20 times for a resolution mesh of r(M S ) = 10%. These results show that the proposed surface reconstruction framework provides a memory efficient surface representation, while preserving geometric details. Visual rendering. The frame frequency, measured in frames per second (FPS), allows to quantify the quality of a 3D model with respect to the visual rendering capability. The second row of Table 3 illustrates the frame frequency, noted νrate and measured using Cloud Compare [START_REF]Cloud Compare[END_REF] for different surface representations (discrete and continuous). It can be observed that the pointbased representation detains faster rendering capabilities than the full resolution mesh, which does not cope with real-time rendering requirements. In contrast, the decimated mesh exhibits realtime frame rates, while providing a continuous surface representation. Although the decimation step is the most computationally expensive processing block of the proposed surface reconstruction framework, it enables real-time rendering of a continuous surface over large scale scenes, while preserving geometric details. Ground surface comparison. We evaluate the results of the proposed frame-work, entitled Automatic Ground Surface Reconstruction (AGSR), with two well known surface reconstruction techniques. The first method is based on implicit functions [START_REF] Kazhdan | Poisson surface reconstruction[END_REF], while the second is an explicit method [START_REF] Marton | On fast surface reconstruction methods for large and noisy point clouds[END_REF] which proceeds by a greedy projection. Figure 7 and Table 4 illustrate the results obtained by applying each reconstruction algorithm to the point cloud P corresponding to the ground depicted in Figure 3 (c), acquired over Assas road (Paris, France), noted dataset Urban ♯1. By visually inspecting Figures 7 (a) and 7 (b), it can be observed that although Poisson method provides a watertight surface, it results in mesh shrinkage around the sidewalk borders. Moreover, it reduces the number of points considerably, introducing thus inaccuracies between the point cloud geometry and the final surface. In contrast, the Greedy projection method keeps all the measurements provided by the acquisition. Nevertheless, it results in discontinuity and high frequency peaks. The third row of Table 4 illustrates the computation time obtained using PCL implementations. It can be observed that the proposed method increases the performances not only in terms of accuracy, as showed in Figure 7, but also in terms of computation time. More precisely, it allows to decrease the runtime5 times when compared to Poisson method, and 16 times with respect to the Greedy projection technique. Both methods, Poisson and Greedy, are computationally expensive due to the normal computation step. 
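A back-of-the-envelope estimate in the spirit of Table 3 helps to see why the indexed mesh representation becomes cheaper than the raw points once decimated. The byte counts below assume bare x, y, z floats and 32-bit indices, with a guessed vertex count for the decimated mesh, so the figures are only indicative and do not match the exact values reported in Table 3.

```python
def memory_mb(n_points=0, n_vertices=0, n_triangles=0,
              float_bytes=4, index_bytes=4):
    """Rough memory estimate: a point cloud stores x, y, z per point; an indexed mesh
    stores its vertex coordinates plus three vertex indices per triangle."""
    points = n_points * 3 * float_bytes
    mesh = n_vertices * 3 * float_bytes + n_triangles * 3 * index_bytes
    return (points + mesh) / 1e6

if __name__ == "__main__":
    # figures in the spirit of dataset Urban #2 (1.01 Mpts, ~2 MTriangles, 90% decimation);
    # the decimated vertex count is an assumption
    print("raw points :", round(memory_mb(n_points=1_010_000), 1), "MB")
    print("full mesh  :", round(memory_mb(n_vertices=1_010_000, n_triangles=2_026_000), 1), "MB")
    print("decimated  :", round(memory_mb(n_vertices=101_000, n_triangles=203_000), 1), "MB")
```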
When comparing the final results, it can be observed that, although the proposed technique includes a computationally expensive decimation phase, it provides detail-preserving rendering capabilities and features real-time surface reconstruction on parallel processing units.

Scaling-Up Detail-Preserved 3D Ground Meshes

The proposed surface reconstruction algorithm was tested on several datasets acquired by two different MMS: Stereopolis [29] and L3D2 [30], equipped with Riegl and Velodyne sensing devices, respectively. Depending on the 3D sensor device, different smoothing parameters were used. Two examples of the surface reconstruction results obtained for several datasets are depicted in Figure 8. For the dataset Cassette, the algorithm performs the surface reconstruction of one chunk of 3 Mpts, acquired along 50 m, in about 17 s. For 100 chunks, the algorithm processes 100 Mpts representing the ground in about 28 min. When applied to long-distance surveys, for 100 km of non-stop driving and data acquisition, the MMS acquires 6 billion points and the ground surface reconstruction can be computed in about 10 h. In this research work, we focus mainly on providing an accurate and scalable surface reconstruction algorithm, time scalability being beyond the scope of the paper. Nevertheless, an upgrade of computational resources by a factor of 10 results in real-time surface reconstruction capabilities. When such an upgrading scheme is adopted, the algorithm can deliver the entire road network for a 10,000 km itinerary in about 5 days of non-stop driving, data acquisition and processing at 90 km/h.

Conclusions and Research Perspectives

The present research work introduces the Automatic Ground Surface Reconstruction algorithm, designed to supply scalable and detail-preserving ground surface reconstruction in a fully automatic fashion. The proposed technique generates accurate 3D models of outdoor environments adapted for driving simulation engines or for being embedded onboard mobile platforms for autonomous navigation applications. The reported technique addresses several open issues of currently existing surface reconstruction techniques, such as the accurate reconstruction of sharp depth features in presence of noisy datasets, scalability and memory usage. Research perspectives of the present work focus on the photorealistic surface reconstruction problem through the joint use of laser reflectance and RGB cameras. A second research perspective is related to facade surface reconstruction and ground-facade merging within a global referential frame.

Fig. 1 Mobile mapping systems employed in the present research work: (a) L3D2 MMS prototyped by the Robotic Lab of Mines ParisTech [30], equipped with a Velodyne 3D sensing device, (b) STEREOPOLIS prototype designed by the French Mapping Agency [29].

Fig. 3 Ground extraction results: (a) example of a 3D chunk acquired over the Assas road located in Paris (France): approximate length 82 m, N = 3 Mpts, color coded with respect to elevation values, (b) 3D point cloud segmentation and classification results: facades - dark blue, road, sidewalks and ramp access - black, non-ground objects - light blue, (c) the 3D point cloud corresponding to the ground: 1.27 Mpts.

Fig. 4 The result of the Sinc windowed smoothing procedure [26] obtained on the dataset Cassette, acquired over the Cassette road situated in Paris, France: (a) the output of the Delaunay triangulation procedure, (b) zoom-in view of the area selected in the rectangle illustrated in Figure (a), (c) smoothed mesh output, (d) zoom-in view of the area selected in the blue rectangle illustrated in Figure (c).

Fig. 5 The decimation results obtained for the dataset Assas: (a) smoothed mesh: N_t = 2.54 Mpts, (b) zoom-in view of the area selected in the blue rectangle depicted in Figure (a), (c) decimated mesh, wire-frame view: decimation factor f_D = 90%, N: 254 kTriangles, (d) zoom-in view of the area selected by the blue rectangle illustrated in Figure (c).

Fig. 6 The final output of the proposed surface reconstruction method obtained on the dataset Cassette, with 51 m length: (a) Google Maps view of the surveyed area, (b) N_p = 1.01 Mpts, N_t = 2.026 MTriangles, decimated mesh with f_D = 90%, N_D = 203 kTriangles.

Fig. 7 Comparison of surface reconstruction results obtained for the dataset Assas (Urban ♯1): (a) Poisson technique, (b) zoom-in view of the rectangular area selected in Figure (a), (c) Greedy projection technique, (d) zoom-in view of the rectangular area selected in Figure (c), (e) proposed AGSR technique, (f) zoom-in view of the rectangular area selected in Figure (e).

Fig. 8 Surface reconstruction results obtained for several datasets: (a) Google Maps view of two areas surveyed by the Stereopolis MMS: Assas road (Paris, France) - indicated by the green arrow, Cassette road (Paris, France) - indicated by the red arrow, (b) surface reconstruction results obtained for 5 scan segments, overall length: 319 m, (c) zoom-in view of the blue rectangular area presented in Figure (b), illustrating the accuracy of the reconstructed ramp access and sidewalks, (d) surface reconstruction results obtained for 4 chunks, overall length: 217 m, (e) zoom-in view of the blue rectangular area presented in Figure (d), illustrating the accuracy of the reconstructed ramp access and sidewalks.

Table 2 Computation time for the dataset Urban ♯2 illustrated in Figure 6, where PS denotes the point cloud segmentation phase for the ground extraction; each column gives the runtime corresponding to each step of the algorithm. The overall computation time is about 17 s for N_p = 1.01 Mpts and N_t = 203 kTriangles.

Steps     PS     M_DT    M_C     M_S    M_D
CPU (s)   2      2.14    0.18    3      9

Table 3 Memory usage and frame frequency measures corresponding to the input chunk P and to the main outputs of the algorithm for the dataset Urban ♯2 illustrated in Figure 6.

Urban ♯2        P         M_DT      M_S       M_D
Memory (Mb)     14.856    81.61     37.600    3.7
ν_rate (FPS)    267.74    10.273    12.448    131.96

Table 4 Comparison between surface reconstruction methods: results obtained by running the algorithms on the dataset Urban ♯1 illustrated in Figure 3 (c); N_p^out and N_t^out denote the
37,621
[ "744814", "858", "6956" ]
[ "27997", "27997", "27997" ]
01474550
en
[ "info" ]
2024/03/04 23:41:46
2016
https://inria.hal.science/hal-01474550/file/main.pdf
Simon Castellan Pierre Clairambault Causality vs. interleavings in concurrent game semantics Keywords: 1998 ACM Subject Classification F.3.2 Denotational Semantics Keywords and phrases Game semantics, concurrency, causality, event structures come Causality vs. interleavings in concurrent game semantics Introduction Game semantics present a program as a representation of its behaviour under execution, against any execution environment. This interpretation is computed compositionally, following the methodology of denotational semantics. Game semantics and interactive semantics in general have been developed for a variety of programming language features. They are an established theoretical tool in the foundational study of logic and programming languages, with a growing body of research on applications to various topics, e.g. model-checking [START_REF] Abramsky | Applying game semantics to compositional software modeling and verification[END_REF][START_REF] Ong | On model-checking trees generated by higher-order recursion schemes[END_REF], hardware [START_REF] Ghica | Geometry of synthesis: a structured approach to VLSI design[END_REF] or software [START_REF] Schöpp | On the relation of interaction semantics to continuations and defunctionalization[END_REF] compilation, for higher-order programs. These works exploit the ability of game semantics to provide compositionally a clean and elegant presentation of the operational behaviour of a program, which can then give an invariant for program transformations, or be exploited for analysis. One subject where game semantics particularly shine is for reasoning about program equivalence. Indeed, game semantics models are often fully abstract: they characterise programs up to contextual equivalence, meaning that two programs behave in the same way in all contexts if and only if the corresponding strategies have the same plays. Concurrent languages are no exception: Ghica and Murawski's games model for IPA [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF] is fully abstract wrt. may-testing. Although, in this language, contextual equivalence is undecidable even for second-order programs, decidability can be recovered for a restricted language [START_REF] Ghica | Syntactic control of concurrency[END_REF]. But Ghica and Murawski's model represents concurrent programs with interleavings, so whether one works in a decidable fragment or simply uses non automated tools, reasoning on the fully abstract model requires one to explore all possible interleavings. This is the so-called state explosion problem familiar in the verification of concurrent systems [START_REF] Godefroid | Partial-Order Methods for the Verification of Concurrent Systems -An Approach to the State-Explosion Problem[END_REF]. Partial order methods provide good tools to alleviate this problem. They provide more compact representations of concurrent programs, avoiding the enumeration of all interleavings. For IPA, recent advances in partial-order based game semantics [START_REF] Rideau | Concurrent strategies[END_REF][START_REF] Castellan | The parallel intensionally fully abstract games model of PCF[END_REF] allow us to restate Ghica and Murawski's model based on partial orders or event structures. But can we get back full abstraction this way? Since the interleaving model is fully abstract, the question is: can we give a clean, compact, presentation of the interleaving games model of IPA via partial orders? As it is, the interpretation of IPA in e.g. 
[START_REF] Castellan | The parallel intensionally fully abstract games model of PCF[END_REF] is certainly not fully abstract since it retains intensional information (such as the point of non-deterministic branching) invisible up to may-testing. But can we rework it so it yields canonical partialorder representatives for strategies in the interleaving model? In this paper, we show that already in an affine setting, the answer is no. Our contributions are the following. We describe an affine variant of IPA -it is mostly there to provide illustrations and an operational light. For this affine IPA, we give two new categories of games. The first is an affine version of Ghica and Murawski's model. The second draws inspiration from Rideau and Winskel's category of strategies as event structures, without the information on the point of non-deterministic branching, which is irrelevant up to may-testing. Via a collapse of the causal model into the interleaving one, we show that the latter is the observational quotient of the former. We describe several causal reconstructions from an interleaving strategy, aiming for minimality. Finally, we show that interleaving strategies have in general no canonical minimal causal representation. On the game semantics front, our two models are arena-based, in the spirit of HO games [START_REF] Hyland | On full abstraction for PCF: I, II, and III[END_REF]. They both operate on a notion of arenas enriched with conflict, which is required in an affine setting. Our interleaving model is not fully abstract for affine IPA. Indeed, we have omitted well-bracketing (as well as bad variables and semaphores) in an effort to make the presentation lighter. These aspects are orthogonal to the problem at hand, and our developments would apply just as well with those. Apart from well-bracketing, our interleaving model is fully compatible with Ghica and Murawski's -strategies in our sense can easily be read as strategies in their sense, as pointers can be uniquely recovered. Affine IPA and its interleaving game semantics In this section we introduce affine IPA, and the category GM of interleaving strategies. Affine IPA ▸ Definition 1. The types of affine IPA are A, B ∶∶= B com A ⊸ B ref r ref w . We have types for booleans, commands, and a linear function space. Finally we have two types ref r and ref w for read-only and write-only variables (this splitting of ref is necessary to make the variables non-trivial in an affine setting). The terms of affine IPA are the following: M, N ∶∶= x M N λx. M tt ff if M N 1 N 2 skip M ; N newref v in M M ∶= tt !M M ∥ N References are considered initialized to ff . As they can only be read once, the only useful value to write is tt, hence the restricted assignment command. Typing rules are standard, (com ⊸ com) ⊸ B q - run + done - ff + (com ⊸ com) ⊸ B q - run + run - done + done - tt + Figure 1 Maximal plays of the alternating game semantics of strict we only mention a few. Firstly, affine function application and boolean elimination. Γ ⊢ M ∶ A ⊸ B ∆ ⊢ N ∶ A Γ, ∆ ⊢ M N ∶ B Γ ⊢ M ∶ B ∆ ⊢ N 1 ∶ A ∆ ⊢ N 2 ∶ A Γ, ∆ ⊢ if M N 1 N 2 ∶ A Crucially the first rule treats the context multiplicatively, making the language affine. Secondly, here are the rules for reference manipulation. Γ, r ∶ ref r , r ∶ ref w ⊢ M ∶ B Γ ⊢ newref r in M ∶ B Γ ⊢ M ∶ ref r Γ ⊢ !M ∶ B Γ ⊢ M ∶ ref w Γ ⊢ M ∶= tt ∶ com Splitting between the read and write capabilities of the variable type is necessary for the variables to be used in a non-trivial way. 
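The multiplicative treatment of the context in the application rule can be made concrete with a small, hypothetical checker fragment: the two premises must use disjoint sets of variables. This is only a sketch of the affine discipline, not of the paper's typing system.

```python
def check_affine_application(ctx, fv_function, fv_argument):
    """Affine application Γ, Δ ⊢ M N: the context is split multiplicatively, so M and N
    may not share any variable, and every used variable must be declared.
    ctx maps variables to types; fv_* are the free-variable sets of M and N."""
    shared = fv_function & fv_argument
    if shared:
        raise TypeError(f"variables used in both premises: {shared}")
    undeclared = (fv_function | fv_argument) - ctx.keys()
    if undeclared:
        raise TypeError(f"undeclared variables: {undeclared}")
    return True

if __name__ == "__main__":
    ctx = {"f": "com -> com", "x": "com"}
    print(check_affine_application(ctx, {"f"}, {"x"}))   # fine: disjoint use of the context
    try:
        check_affine_application(ctx, {"f"}, {"f"})      # rejected: f would be used twice
    except TypeError as e:
        print("rejected:", e)
```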
For example, the following term is typable: strict = λf com⊸com . newref r in (f (r ∶= tt)); !r ∶ (com ⊸ com) ⊸ B The language is equipped with the same operational semantics as in [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF] -we skip the details. The operational semantics yields an evaluation relation: for ⊢ M ∶ B, we write M ⇓ may b to mean that M may evaluate to the boolean b, or just M ⇓ may to mean that M may converge. From the combination of concurrency and state, affine IPA is a nondeterministic language. Arenas In game semantics, one interprets a program as a set of interactions, usually called plays, with its execution environment. For instance, some maximal plays of the interpretation strict of the term strict ∶ (com ⊸ com) ⊸ B defined above are displayed in Figure 1. Those diagrams are read from top to bottom, and moves have polarity either Player (+, Program) or Opponent (-, Environment). In the first play of Figure 1 Opponent behaves like a constant, where in Figure 1 he is strict. Although the programs are stateful, plays do not carry state: instead, we only see how the state influences Player's behaviour. To make this formal, we first extract from the type the computational events on which plays such as the above are formed. These are organized into arenas. ▸ Definition 2. An event structure with polarities is a tuple (A, ≤ A , ♯A, pol A ) where A is a set of moves or events, ≤ A is a partial order on A such that for any a ∈ A, [a] = {a ′ ∈ A a ′ ≤ A a} is finite, ♯A is an irreflexive symmetric conflict relation such that for all a ♯A a ′ , for all a ′ ≤ A a ′ 0 , we also have a ♯A a ′ 0 . Finally, pol A ∶ A → {-, +} is a polarity function. Apart from the fact that we only have binary conflict, this is the same notion of event structures with polarities as in [START_REF] Rideau | Concurrent strategies[END_REF]. A configuration of A, written x ∈ C (A), is a finite C O N C U R 2 0 1 6 x ⊆ A which is down-closed (if a ∈ x and a ′ ≤ A a, then a ′ ∈ x as well) and consistent (for all a 1 , a 2 ∈ x, ¬(a 1 ♯A a 2 )). For a 1 , a 2 ∈ A, we say that a 1 immediately causes a 2 , written a 1 a 2 , when a 1 < A a 2 and for all a 1 ≤ a ≤ a 2 we have either a 1 = a or a = a 2 . We also write a 1 ∼ a 2 if a 1 and a 2 are in immediate conflict, meaning a 1 ♯A a 2 and for all a ′ 1 ≤ A a 1 , a ′ 2 ≤ A a 2 (with at least one of them strict), we have ¬(a ′ 1 ♯A a ′ 2 ). Finally, we write min(A) for the set of minimal events of A. Arenas are certain event structures with polarities: ▸ Definition 3. An arena is an event structure with polarities such that ≤ A is a forest (for all a 1 , a 2 ≤ A a, either a 1 ≤ A a 2 or a 2 ≤ A a 1 ), is alternating (for all a 1 a 2 , pol A (a 1 ) ≠ pol A (a 2 )), and race-free (if a 1 ∼ a 2 , then pol(a 1 ) = pol(a 2 )). Although our formulation is slightly different, our arenas are very close to the standard notion of [START_REF] Hyland | On full abstraction for PCF: I, II, and III[END_REF]: the three differences is that we have no Question/Answer distinction, our arenas are not necessarily negative, and we have a conflict relation. ▸ Example 4. We display below the arenas for some types of IPA. com = run - done + B = q - r Ø v " 2 tt + ff + (com ⊸ com) ⊸ B = q - H t t | ¨ 0 @ run + G s s { tt + ff + run - done - done + On com , Opponent may start running the command (run -), which may or may not terminate (done + ). On B , Opponent may interrogate the boolean (q -), and Player may or may not answer. 
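Definitions 2 to 4 are easy to animate on finite examples. The sketch below fixes one possible encoding (causes as a dictionary, binary conflict as a set of pairs) which is ours, not the paper's, and rebuilds the arena for B from Example 4 in order to test the configuration conditions; conflict inheritance and the arena axioms are not enforced.

```python
from itertools import combinations

class EventStructure:
    """Finite event structure with polarities and binary conflict (Definition 2).
    causes[e] is the set of events required before e; conflict holds frozensets {e, e'};
    polarity maps each event to '+' or '-'."""
    def __init__(self, events, causes, conflict, polarity):
        self.events, self.causes = set(events), causes
        self.conflict, self.polarity = set(conflict), polarity

    def is_configuration(self, x):
        x = set(x)
        down_closed = all(self.causes[e] <= x for e in x)
        consistent = all(frozenset(p) not in self.conflict for p in combinations(x, 2))
        return down_closed and consistent

# the arena for B (Example 4): q- below the two conflicting answers tt+ and ff+
B = EventStructure({'q', 'tt', 'ff'},
                   {'q': set(), 'tt': {'q'}, 'ff': {'q'}},
                   {frozenset(('tt', 'ff'))},
                   {'q': '-', 'tt': '+', 'ff': '+'})

if __name__ == "__main__":
    print(B.is_configuration({'q', 'tt'}))        # True: a completed interrogation
    print(B.is_configuration({'tt'}))             # False: not down-closed
    print(B.is_configuration({'q', 'tt', 'ff'}))  # False: tt and ff are in conflict
```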
If he does, it will be with exactly one of the incompatible tt + and ff + . We will see later on how to systematically interpret types of IPA as arenas. For now on though, we give two simple constructions on arenas. ▸ Definition 5. Let A be an arena. Its dual, written A ⊥ , has the same data as A but polarity reversed. If A and B are arenas, then their parallel composition A ∥ B, also written A ⊗ B for the tensor, has components: Events/moves. the disjoint union {1} × A ∪ {2} × B, Causality, conflict. Inherited from A and B. In this paper, we will define two categories GM and PO with arenas as objects. Interleaving-based game semantics on arenas Now, we define a compact closed category of games called GM, by reference to Ghica and Murawski's model of IPA [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF]. Our category will be much simpler though, as it will be an affine version of theirs, without bracketing conditions. Firstly, we need to define plays. ▸ Definition 6. Let A be an arena. A play s on A, written s ∈ P A , is a total order s = ( s , ≤ s ) of moves of A such that s ∈ C (A), and for any a, b ∈ s, if a ≤ A b then a ≤ s b. We write s ⊑ t for the usual prefix ordering on plays. In [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF], strategies are closed under some saturation conditions: for instance, if sa + b -∈ σ and b does not actually depend on a in the game, then σ can always delay a until after b was played. In other words, we have sba ∈ σ as well. In our affine variant, we will have a slightly different formulation of saturation. First we define an order on plays. com ⊥ ∥ com ⊥ ∥ com run - run + done - run + done - done + com ⊥ ∥ com ⊥ ∥ com run - run + done - run + done - done + com ⊥ ∥ com ⊥ ∥ com run - run + run + done - done - done + Figure 2 Some plays in ∥ GM ▸ Definition 7. Let s, t ∈ P A for A an arena. Then we say that s ⪯ t iff s ⊆ t , and: If a + 1 ≤ s a - 2 , then a 1 ≤ t a 2 . For a + 2 ∈ s , if a 1 -≤ t a + 2 , then a 1 ∈ s and a 1 ≤ s a 2 . Clearly, ⪯ is a partial order on P A . Intuitively, going upwards in ⪯ corresponds to strengthening causal information by pushing Opponent moves behind Player moves, hence implying that those Opponent moves were not true dependencies for the Player moves. The partial order ⪯ is generated by elementary permutations, as in the saturation conditions in [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF], along with the prefix ordering. We now define: ▸ Definition 8. A GM-strategy on arena A, written σ ∶ A, is a set σ ⊆ P A which is: Saturated: if s ∈ σ and t ⪯ s, then t ∈ σ as well, Receptive: if s ∈ σ and s ⊂ s ∪ {a -} ∈ C (A), then sa ∈ σ as well. ▸ Example 9. The GM-strategy ∥ GM ∶ com ⊥ ∥ com ⊥ ∥ com comprises all plays on com ⊥ ∥ com ⊥ ∥ com such that: If run + appears on either occurrence of com ⊥ , then run -must appear before, If done + appears, then both done -must appear before. Figure 2 displays several plays of ∥ GM . In total, ∥ GM has six maximal plays. As usual in play-based game semantics, operations on GM-strategies rely crucially on a notion of restriction of plays. Consider A an arena, s ∈ P A , and B some sub-component on A (we leave the notion of sub-component intentionally somewhat vague: for instance A is a subcomponent of A ⊗ B, and A 1 ∥ B 1 is a sub-component of A 1 ⊗ A 2 ∥ B 1 ⊗ B 2 ). The restriction s ↾ B ∈ P B is the subsequence of s of moves in component B, in the same order. 
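The saturation order ⪯ of Definition 7 can be tested mechanically on finite plays. The sketch below uses an abstract four-move example rather than an actual arena, and only implements the two clauses of Definition 7; checking that the sequences are legal plays of a game is left out.

```python
def preceq(s, t, pol):
    """Test s ⪯ t (Definition 7): s and t are repetition-free tuples of moves,
    pol maps each move to '+' or '-'."""
    if not set(s) <= set(t):
        return False
    pos_s = {m: i for i, m in enumerate(s)}
    pos_t = {m: i for i, m in enumerate(t)}
    # clause 1: a Player move placed before an Opponent move in s stays before it in t
    for a1 in s:
        for a2 in s:
            if pol[a1] == '+' and pol[a2] == '-' and pos_s[a1] < pos_s[a2] \
                    and not pos_t[a1] < pos_t[a2]:
                return False
    # clause 2: an Opponent move preceding a Player move of s in t must already do so in s
    for a2 in s:
        if pol[a2] != '+':
            continue
        for a1 in t:
            if pol[a1] == '-' and pos_t[a1] < pos_t[a2] \
                    and (a1 not in pos_s or not pos_s[a1] < pos_s[a2]):
                return False
    return True

if __name__ == "__main__":
    pol = {'a': '-', 'b': '+', 'c': '-', 'd': '+'}
    s = ('a', 'b', 'c', 'd')    # the Opponent move c is delayed behind the Player move b
    t = ('a', 'c', 'b', 'd')
    print(preceq(t, s, pol))    # True: s strengthens the causal information of t
    print(preceq(s, t, pol))    # False
```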
Using that, we can now define the copycat strategy on A to be: c c A = {s ∈ P A ⊥ ∥A ∀s ′ ⊑ s, ∀a ∈ s ′ ↾ A ⊥ , pol A ⊥ (a) = + ⇒ a ∈ s ′ ↾ A , ∀a ∈ s ′ ↾ A , pol A (a) = + ⇒ a ∈ s ′ ↾ A ⊥ } It is a GM-strategy. Using the usual parallel composition plus hiding mechanism, we can also define composition. Given σ ∶ A ⊥ ∥ B and τ ∶ B ⊥ ∥ C, first define their interaction τ ⊛ σ = {u ∈ P A∥B∥C u ↾ A ∥ B ∈ σ & u ↾ B ∥ C ∈ τ }. The composition τ ⊙ σ ∶ A ⊥ ∥ C is obtained through hiding, by τ ⊙ σ = {u ↾ (A ∥ C) u ∈ τ ⊛ σ}. Altogether: ▸ Proposition 1. There is a compact closed category GM with arenas as objects, and as morphisms from A to B, GM-strategies σ ∶ A ⊥ ∥ B. We also write σ ∶ A GM → B. Proof. The operation ⊗ on arenas is extended to GM-strategies by setting, for Causality vs. interleavings in concurrent game semantics For now we do not show how to interpret affine IPA in GM -for that one actually needs a symmetric monoidal closed subcategory of negative arenas, which seems difficult to define without appealing to PO. However, we illustrate this interpretation by revisiting Figure 1. σ 1 ∶ A 1 GM → B 1 and σ 2 ∶ A 2 GM → B 2 , σ 1 ⊗ σ 2 = {s ∈ P (A1⊗A2) ⊥ ∥B1⊗B2 s ↾ A ⊥ 1 ∥ B 1 ∈ σ 1 & s ↾ A ⊥ 2 ∥ B 2 ∈ σ 2 }. (com ⊸ com) ⊸ B q - run + run - done - done + tt + (com ⊸ com) ⊸ B q - run + run - done - done + ff + (com ⊸ com) ⊸ B q - run + run - done + done - tt + ▸ Example 10. The GM-strategy corresponding to strict will contain, among others, the maximal plays described in Figure 3. Although strict is a sequential program, the fact that in GM, Opponent may not be sequential (and, in this case, non well-bracketed either) allows us to observe new behaviours from strict. For instance, in the first two plays of Figure 3, Opponent concurrently answers and asks for the argument on com ⊸ com. This triggers a race between the subterms r ∶= tt and !r of strict. As a consequence, one can observe both tt and ff as final results of the computation. However, if Opponent was to answer only after r ∶= tt was evaluated (as in the third play of Figure 3), the only possible final result would be tt. There are, in total, ten maximal non-alternating plays in the GM-strategy for strict. Causal game semantics for affine IPA We give a causal variant of GM, where plays are partial orders. This yields a category PO, close to the category of concurrent games of Rideau and Winskel [START_REF] Rideau | Concurrent strategies[END_REF] -the main difference is that strategies in PO omit information about the point of non-deterministic branching. Po-plays and po-strategies First, we define the notion of partially ordered play. ▸ Definition 11. A partially ordered play (po-play) on arena A is a partial order q = ( q , ≤ q ) where q ∈ C (A), and q satisfies the following properties: Respects the game: for a 1 , a 2 ∈ q , if a 1 ≤ A a 2 then a 1 ≤ q a 2 , Is courteous: if a + 1 q a 2 then a 1 A a 2 , and if a 1 q a - 2 , then a 1 A a 2 . We write P © A for the set of po-plays on arena A. Unlike usual (alternating or non-alternating) plays, po-plays are not chronologically ordered, but carry causal information about Player's choices. Hence, a po-play cannot express that an Opponent event happens after a given event, unless that dependency is already present in the arena. In fact, a po-play cannot force a dependency between two Player moves either: such a dependency may be broken by an asynchronous execution environment. 
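Courtesy (Definition 11) is a purely order-theoretic condition and can be checked directly. In the sketch below a candidate po-play is given by the sets of events strictly below each event; the consistency and down-closure of its support in the arena are not checked, and the tiny arena fragment used in the example is ours.

```python
from itertools import product

def immediate(order):
    """order: e -> set of events strictly below e.  Returns the pairs (a, b) with a ⋖ b."""
    return {(a, b) for b in order for a in order[b]
            if not any(a in order[c] and c in order[b] for c in order)}

def is_po_play(order, pol, arena_imm, arena_leq):
    """Check the two conditions of Definition 11 on a finite candidate po-play."""
    events = set(order)
    # respects the game: arena causality between present moves is preserved
    respects = all(a in order[b] for a, b in product(events, events)
                   if a != b and arena_leq(a, b))
    # courtesy: an immediate link from a + move, or into a - move, must already be an arena link
    courteous = all((a, b) in arena_imm for a, b in immediate(order)
                    if pol[a] == '+' or pol[b] == '-')
    return respects and courteous

if __name__ == "__main__":
    arena_imm = {('run', 'done')}                         # fragment of com: run- below done+
    leq = lambda a, b: a == b or (a, b) in arena_imm
    pol = {'run': '-', 'done': '+', 'other': '-'}
    q1 = {'run': set(), 'done': {'run'}, 'other': set()}              # legal po-play
    q2 = {'run': set(), 'done': {'run'}, 'other': {'run', 'done'}}    # makes an Opponent move wait
    print(is_po_play(q1, pol, arena_imm, leq))   # True
    print(is_po_play(q2, pol, arena_imm, leq))   # False: done+ ⋖ other- is not an arena link
```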
Although one po-play may carry information about many interleavings, representing a GM-strategy might take several. Indeed, a po-play is by itself only able to represent a Simon Castellan and Pierre Clairambault 32:7 com ⊥ ∥ com ⊥ ∥ com run - 7 o o u D r r z run + run + done - % ) A G done - $ 6 D done + (a) A po-play for parallel composition process which is deterministic up to the choice of the scheduler (note that parallel composition is indeed deterministic up to the choice of the scheduler, it is only via its interaction with e.g. a shared memory that non-determinism arises). For instance, the GMstrategy coin ∶ B = { , q -, q -tt + , q -ff + } can only be represented via two maximal po-plays: q - tt + and q - ff + . It features actual non-determinism, independent from the scheduler. To express such non-determinism, Rideau and Winskel [START_REF] Rideau | Concurrent strategies[END_REF] formalize strategies as event structures rather than partial orders. Our causal notion of strategies builds on their work; but since the present paper is only interested in relating causal with interleaving game semantics (therefore with may-testing), we drop the explicit non-deterministic branching point and consider po-strategies to be certain sets of partial orders. For that we first define: ▸ Definition 12. Let q, q ′ be two partial orders. We say that q is rigidly included in q ′ , or that q is a prefix of q ′ , written q ↪ q ′ , if we have the inclusion q ⊆ q ′ , for any a 1 , a 2 ∈ q we have a 1 ≤ q a 2 iff a 1 ≤ q ′ a 2 , and q is down-closed in q ′ . We are now in position to define PO-strategies. (com ⊸ com) ⊸ B q - I t t } run + E s s z run - # ' 9 F done - " 4 B done + tt + (com ⊸ com) ⊸ B q - I t t } run + E s s z run - done - E s s z " 4 B done + ff + ( ▸ Definition 13. A PO-strategy on A, written σ ∶∶ A, is a non-empty prefix-closed σ ⊆ P © A , which is additionally receptive: for all q ∈ σ, if q ∈ C (A) extends to q ∪ {a -} ∈ C (A), then there is q ↪ q ′ ∈ σ such that q ′ = q ∪ {a}. It follows by courtesy that q ′ is necessarily unique: the immediate dependency of a in q ′ is forced by its immediate dependency in A. Clearly, the set of prefixes of the po-play of Figure 4a gives a PO-strategy. For a nontrivial non-deterministic example, we give in Figure 4b the two maximal (up to prefix / rigid inclusion) po-plays of the PO-strategy corresponding to strict. This gives a quite compact representation of all of the ten maximal plays of the GM-strategy for strict of Example 10. The compact closed category PO To construct PO we start with the causal copycat, which is -configuration-wise -as in [START_REF] Rideau | Concurrent strategies[END_REF]. ▸ Definition 14. Let A be an arena. We define a partial order ≤ C C A on A ⊥ ∥ A: ≤ C C © A = ({((1, a), (1, a ′ )) a ≤ A a ′ } ∪ {((2, a), (2, a ′ )) a ≤ A a ′ }∪ {((1, a), ( 2 - F s s { tt + ok + r - # 5 C wtt - ff + ok + ref ⊥ ∥ (com ⊸ com) ⊸ B q - G s s { run + C r r y run - 8 o o v done - 7 o o u r + wtt + b - ( * B I ok - % 7 D done + b + Figure 5 cell ∶∶ ref and λf com⊸com . f (r ∶= tt); !r ∶∶ ref ⊥ ∥ (com ⊸ com) ⊸ B . We will see in Proposition 4 that this is indeed a causal version of c c A ∶ A ⊥ ∥ A. Now, we define composition of PO-strategies. We first define composition of po-plays (via interaction plus hiding, essentially as in [START_REF] Rideau | Concurrent strategies[END_REF]), before lifting it component-wise to PO-strategies. ▸ Definition 15. 
Two dual po-plays q ∈ P © A , q ′ ∈ P © A ⊥ such that q = q ′ are causally compatible if (≤ q ∪ ≤ q ′ ) * is a partial order, i.e. is acyclic. Then we write q∧q ′ = ( q , ≤ q∧q ′ ) for the resulting partial order. If q and q ′ are causally compatible po-plays on dual games as above, the events of q ∧ q ′ have no well-defined polarity, so it is not a po-play. If q ∈ P © A ⊥ ∥B and q ′ ∈ P © B ⊥ ∥C are not dual but composable, we say that they are causally compatible if q = x A ∥ x B , q ′ = x B ∥ x C , plus (q ∥ x C ) and (x A ∥ q ′ ) are causally compatible (where x A , x C inherit the order from A, C -in particular, x A is regarded as a member of P © A , and x C as a member of P © C ⊥ ), we define their open interaction q ′ ⊛ q = (q ∥ x C ) ∧ (x A ∥ q ′ ). In that case we define q ′ ⊙ q ∈ P © A ⊥ ∥C as the projection q ′ ⊛ q ↓ A ⊥ ∥ C, with events those of q ′ ⊛ q that are in A or C, and partial order as in ≤ q ′ ⊛q . This being a po-play is a variation on the stability by composition of courtesy in [START_REF] Rideau | Concurrent strategies[END_REF] (there called innocence). ▸ Definition 16. Let σ ∶∶ A ⊥ ∥ B and τ ∶∶ B ⊥ ∥ C be PO-strategies. Their composition is τ ⊙ σ = {q ′ ⊙ q q ′ ∈ τ & q ∈ σ causally compatible}. Then, τ ⊙ σ ∶∶ A ⊥ ∥ C is a PO-strategy. The construction is a simplification of [START_REF] Rideau | Concurrent strategies[END_REF]: po-plays are certain concurrent strategies, and their composition is close to the composition of concurrent strategies with the simplification that events of po-plays are those of the games rather than only labeled by the game. Proof. The tensor q 1 ⊗ q 2 of q 1 ∈ P ▸ Example 17. Consider ref r ⊗ ref w = r - c z z Ô & 7 wtt - tt + ff + ok , © A ⊥ 1 ∥B1 and q 2 ∈ P © A ⊥ 2 ∥B2 is the obvious inherited partial order on (A 1 ∥ A 2 ) ⊥ ∥ (B 1 ∥ B 2 ). The tensor σ 1 ⊗ σ 2 of PO-strategies σ 1 ∶∶ A ⊥ 1 ∥ B 1 and σ 2 ∶ A ⊥ 2 ∥ B 2 is defined component-wise. Structural morphisms are copycat PO-strategies. PO simplifies (omitting explicit non-deterministic branching information) the bicategory of concurrent games [START_REF] Rideau | Concurrent strategies[END_REF], whose compact closed structure is established with details in [START_REF] Castellan | Concurrent games[END_REF]. ◂ Interpretation of affine IPA For completeness, we succinctly describe how one can define the interpretation of affine IPA in PO. In fact, affine IPA will not be interpreted directly in PO, which does not support weakening of variables as the empty arena 1, unit for the tensor, is not terminal (since PO-strategies can have minimal positive events, there are in general several PO-strategies on A ⊥ ∥ 1 as soon as A has at least one minimal negative event). We have to restrict to a proper subcategory of PO, defined as follows. ▸ Definition 18. An event structure with polarities A is negative if pol(min(A)) ⊆ {-}. The category PO -is the subcategory of PO with objects negative arenas, and morphisms the negative PO-strategies whose po-plays are all negative. The empty arena 1 is terminal in PO -: if A is negative then A ⊥ ∥ 1 has no negative minimal event. Therefore a negative σ ∶∶ A ⊥ ∥ 1 must be empty, as a potential minimal event would be in particular minimal in A ⊥ ∥ 1. However, restricting to PO -has a price: we lose the closure A ⊥ ∥ B, which is in general not negative and hence not an object of PO -. Thus we build a negative version, where the minimal events of A depend on those of B. 
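The interaction and hiding of Definitions 15 and 16 boil down to an acyclicity check on the union of the two causal orders followed by a projection away from the middle game; the sketch below implements exactly that on sets of causal pairs. The event names and the deadlock example are ours, and polarities, receptivity and the game structure are ignored.

```python
def transitive_closure(pairs, events):
    leq = {e: {e} for e in events}
    for a, b in pairs:
        leq[a].add(b)
    changed = True
    while changed:
        changed = False
        for a in events:
            extra = set().union(*(leq[b] for b in leq[a])) - leq[a]
            if extra:
                leq[a] |= extra
                changed = True
    return leq

def compose(q_sigma, q_tau, events, middle):
    """Interaction of two po-plays sharing the events of the middle game B, then hiding B.
    q_* are sets of strict causal pairs; returns None if the union has a causal loop."""
    leq = transitive_closure(q_sigma | q_tau, events)
    if any(a != b and a in leq[b] and b in leq[a] for a in events for b in events):
        return None                                   # cyclic: not causally compatible
    visible = events - middle
    return {(a, b) for a in visible for b in visible if a != b and b in leq[a]}

if __name__ == "__main__":
    events, middle = {'a', 'b', 'c'}, {'b'}
    sigma = {('b', 'c')}                              # on B⊥ ∥ C: c waits for b
    tau = {('a', 'b')}                                # on A⊥ ∥ B: b waits for a
    print(compose(sigma, tau, events, middle))        # {('a', 'c')}: the hidden synchronization remains
    print(compose({('b', 'c'), ('c', 'b')}, set(), events, middle))   # None: causal loop
```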
If A, B are conflict-free and B has a unique minimal event, then A ⊸ B coincides with the usual arrow arena construction in Hyland-Ong games [START_REF] Hyland | On full abstraction for PCF: I, II, and III[END_REF]. In general if B has a unique minimal event, then A ⊸ B does not introduce new conflicts or copies of A, and only differs from A ⊥ ∥ B by the fact that events of A ⊥ now depend on the minimal event of B -see Example 4 for such an arrow arena. However, if B has several minimal events, then multiple copies of A are created; fortunately we can use conflict to maintain linearity. The arena A ⊸ B does not yet give a closure with respect to the tensor. The issue is that there are more PO-strategies in A ⊸ B than in A ⊥ ∥ B. Indeed, consider a PO-strategy σ ∶∶ B ⊥ ∥ (B ⊗ B), that plays q + in the left hand side occurrence of B whenever Opponent plays q -in both right hand side occurrences of B. Then on B ⊸ (B ⊗ B) there are two ways to replicate this, as they are two copies of the left hand side B in the arena. To get back a closed structure, we need to restrict the category further. ▸ Definition 20. A negative PO-strategy σ ∶∶ A is well-threaded iff, for any q ∈ σ, q has at most one minimal event. Copycat is well-threaded and well-threaded PO-strategies are stable under composition -they form a subcategory PO - wt of PO -. Up to renaming of events, negative well-threaded strategies on (A ∥ B) ⊥ ∥ C exactly coincide with those on A ⊥ ∥ B ⊸ C. Leveraging the compact closed structure of PO, it follows that PO - wt is symmetric monoidal closed (where the monoidal unit 1 is terminal). As such, it supports the interpretation of the affine λ-calculus: any term x 1 ∶ A 1 , . . . , x n ∶ A n ⊢ C O N C U R 2 0 1 6 32:10 Causality vs. interleavings in concurrent game semantics M ∶ B is interpreted as a PO-strategy M ∶ A 1 ⊗. . .⊗ A n PO - wt → B . Along with the POstrategy with unique po-play that of Figure 4a for parallel composition, the interpretation of the newref construct as sketched in Example 17, and the obvious PO-strategies for the other affine IPA combinators, we get an interpretationof affine IPA into PO - wt , which is a subcategory of PO. Standard techniques entail: ▸ Proposition 3. The interpretationis sound and adequate for affine IPA, i.e. for ⊢ M ∶ com, we have M ⇓ may iff M contains a positive event. 4 From PO to GM and back We finally enter the final section of this paper, and relate the two semantics. Forgetting causality We start with the easy part: that PO can be embedded into GM. As partial orders are more informative than plays, it is easy to move from the former to the latter. ▸ Definition 21. Let q ∈ P © A . A play in q is s ∈ P A such that s ⊆ q , and such that for all a 2 in s , if a 1 ≤ q a 2 , then a 1 ∈ s and a 1 ≤ s a 2 . We write Plays(q) for the set of plays in q. From courtesy of q it follows that Plays(q) satisfies the saturation condition of Definition 8. For σ ∶∶ A a PO-strategy, we have Plays(σ) = ⋃{Plays(q) q ∈ σ} a GM-strategy, as receptivity follows from receptivity of σ. In fact, we have: ▸ Proposition 4. There is an identity-on-object functor Plays ∶ PO → GM. This is a direct verification. As in Section 2.2 we have by anticipation defined the compact closed structure of GM to be the image of that of PO through Plays, this functor preserves the compact closed structure by construction. Combined with the interpretationof affine IPA in PO, this gives a sound and adequate interpretation Plays ○of affine IPA in GM. 
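Definition 21 gives a direct recipe for computing Plays(q) on finite po-plays: enumerate the down-closed subsets of q in every order compatible with ≤_q. The sketch below does this by always appending an event whose causes have already been played; the po-play used in the example is our rendering of the parallel-composition po-play of Figure 4a, with event names chosen by us.

```python
def plays(order):
    """Enumerate Plays(q) for a finite po-play q (Definition 21).
    order: e -> set of events strictly below e in q."""
    def extend(prefix, remaining):
        yield prefix
        for e in remaining:
            if order[e] <= set(prefix):              # all causes of e already played
                yield from extend(prefix + (e,), remaining - {e})
    return set(extend((), set(order)))

if __name__ == "__main__":
    # run- on the right triggers run1+/run2+ on the two argument commands;
    # done+ waits for both done1- and done2- (and for run-, as in the arena)
    q = {'run': set(),
         'run1': {'run'}, 'run2': {'run'},
         'done1': {'run1'}, 'done2': {'run2'},
         'done': {'done1', 'done2', 'run'}}
    ps = plays(q)
    print(len(ps), "plays generated by this single po-play")
    print(('run', 'run2', 'run1', 'done1', 'done2', 'done') in ps)   # True
```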
Providing a direct sound interpretation to GM without PO would be awkward, as it is unclear how to define well-threaded GM-strategies with no access to causality. As emphasized in the introduction, the interpretation Plays ○is not fully abstract for affine IPA. However, let us emphasize again that we are not interested in full abstraction for affine IPA; rather this serves as a simpler setting in which to study the relationship between the fully abstract model for IPA [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF] and its causal variant in e.g. [START_REF] Castellan | The parallel intensionally fully abstract games model of PCF[END_REF]. Recovering causality We now investigate how one can recover a PO-strategy from a GM-strategy. A naive causal reconstruction As a first step, we simply reverse the construction of Definition 21. ▸ Definition 22. A causal resolution σ ∶ A is any q ∈ P © A such that Plays(q) ⊆ σ. Because some GM-strategies (such as coin ∶ B) are inherently non-deterministic, it is hopeless to try to describe them with a unique maximal causal resolution. A first rough causal reconstruction for a GM-strategy consists simply in taking all causal resolutions. ▸ Proposition 5. Let σ ∶ A be a GM-strategy. Then, Caus(σ) = {q ∈ P © A Plays(q) ⊆ σ} is a PO-strategy such that Plays(Caus(σ)) = σ. Moreover, this yields a lax functor Caus ∶ GM → PO, i.e. we have c c © A ⊆ Caus( c c A ) and Caus(τ ) ⊙ Caus(σ) ⊆ Caus(τ ⊙ σ) for all σ ∶ A ⊥ ∥ B and τ ∶ B ⊥ ∥ C (but neither of the other inclusions hold). Proof. Each causal resolution is courteous by definition; receptivity and closure under prefix are immediate. Each play s ∈ σ appears in a causal resolution q s , whose plays are exactly those t ⪯ s obtained by saturation from s. Finally, lax functoriality is straightforward. To see why Caus(-) is only lax functorial, take A = {a -}, C = {c -} and B = 1. Take the PO-strategy σ ∶∶ A ⊥ ∥ B to have as only non-empty po-play the singleton a + , while τ ∶∶ B ⊥ ∥ C has only non-empty po-play the singleton c -. Then the GM-strategy Plays(τ ) ⊙ Plays(σ) admits c - a + as a causal resolution, which is therefore a po-play of Caus(Plays(τ ) ⊙ Plays(σ)). On the other hand, Caus(Plays(τ )) ⊙ Caus(Plays(σ)) = τ ⊙ σ has only one maximal po-play, with causally independent c -and a + . ◂ In particular, each GM-strategy is definable as a PO-strategy. Along with Proposition 4, and the fact that (just as in [START_REF] Ghica | Angelic semantics of fine-grained concurrency[END_REF]) two distinct GM-strategies can always be distinguished by a GM-strategy, this entails that GM is the observational quotient of PO, in the sense that for σ 1 , σ 2 ∶∶ A, Plays(σ 1 ) = Plays(σ 2 ) iff for all α ∶∶ A ⊥ ∥ com, α ⊙ σ 1 = α ⊙ σ 2 . There are in general many PO-strategies corresponding to one GM-strategy, as GMstrategies only remember the observable behaviour. Some PO-strategies are more succinct than others for a fixed GM-strategy; and the causal reconstruction Caus(-) is not very economical as it constructs the biggest such causal representation. For instance, the POstrategy Caus( ∥ GM ) not only comprises the po-play of Figure 4a, but also the linear po-play of sequential command composition. Extremal causal resolutions As we have seen, the construction Caus(-) presented above does not yield a satisfactory causal representation of a GM-strategy because it is not minimal. 
Seeking a minimal canonical causal representation of a GM-strategy, we now investigate when certain causal resolutions are subsumed by others, and hence can be removed without changing Plays(σ). For q 1 , q 2 ∈ P © A with q 1 = q 2 , considering q 1 subsumed by q 2 when Plays(q 1 ) ⊆ Plays(q 2 ) is a bit too naive. Indeed, consider cell ∶∶ ref of Figure 5. We have: Plays( r - ¨ 0 @ wtt - ok + ) ⊆ Plays( r -wtt - ok + ) However, moving from the former to the latter does not preserve the future: namely, whereas any play in the left hand side can only be extended with ff + , there are plays in the right hand side that can be extended with tt + as well. So, the left hand side has to be kept. To address this relaxation of causality while taking account of the future, for q 1 = q 2 with Plays(q 1 ) ⊆ Plays(q 2 ), we will say that q 2 relaxes q 1 if the inclusion of plays is automatically transferred to all possible rigid extensions of q 1 . More formally: ▸ Definition 23. We define a partial order called relaxation coinductively, by q 1 q 2 iff q 1 = q 2 , Plays(q 1 ) ⊆ Plays(q 2 ), and for all q 1 ↪ q ′ 1 , there exists q 2 ↪ q ′ 2 such that q ′ 1 q ′ 2 . For σ ∶ A a GM-strategy and q ∈ Caus(σ), we say that q is extremal in σ iff q is -maximal. Let Extr(σ) be denote the set of extremal po-plays in σ. The operation Extr(-) performs well on many examples: for instance, it recovers the proper PO-strategies for all the examples of GM-strategies in this paper until now. It also properly reverses Plays(-) for deterministic PO-strategies, with only one maximal poplay. In that case, it matches the previously known correspondence between Rideau and Winskel's deterministic concurrent strategies [START_REF] Winskel | Deterministic concurrent strategies[END_REF] and Melliès and Mimram's category of receptive ingenuous strategies [START_REF] Melliès | Asynchronous games: Innocence without alternation[END_REF]. In the general case however, Extr(-) is not even lax functorial. But more importantly, it turns out that Extr(σ) is still not necessarily a minimal causal representation of σ. We present an example outside of the interpretation of affine IPA as it is more succinct, but it is easy to find similar examples within the interpretation. ▸ Example 24. Let A be a non-negative arena, with two concurrent events ⊖ and ⊕. Consider the GM-strategy σ ∶ A 1 ∥ A 2 with plays (annotations are for disambiguation): σ = Plays( ⊖ 1 ⊖ 2 ⊕ 1 ⊕ 2 ) ∪ Plays( ⊖ 1 ~ % 6 ⊖ 2 ⊕ 1 ⊕ 2 ) ∪ Plays( ⊖ 1 ⊖ 2 d z z Õ ⊕ 1 ⊕ 2 ) All three po-plays are extremal in σ. However, despite being extremal, the first po-play is redundant: it can be removed, yielding the same GM-strategy. Indeed, call the three poplays above q 1 , q 2 , q 3 ; and take s ∈ Plays(q 1 ). If s ∈ Plays(q 2 ), then ⊕ 2 ≤ s ⊖ 1 as this is the only constraint in q 2 . Likewise, s ∈ Plays(q 3 ) means that ⊕ 1 ≤ s ⊖ 2 . But these constraints, put together with those of q 1 , yield a contradiction. Therefore s ∈ Plays(q 2 ) ∪ Plays(q 3 ). The two extremal po-plays q 2 , q 3 yield a smaller representation of σ. In the example above, {q 2 , q 3 } is the unique minimal causal representation for σ. But can we always reach such a canonical representation by removing redundant extremals? Causally ambiguous GM-strategies Until this point, and including Example 24, all the examples of GM-strategies considered in this paper have a unique minimal causal representation, i.e. a unique set of extremal po-plays with minimal cardinality. They are all causally unambiguous: ▸ Definition 25. 
For A a finite arena, a GM-strategy σ ∶ A is causally ambiguous if there are (at least) two distinct sets of extremal po-plays of minimal cardinality X = {q 1 , . . . , q n } and Y = {q ′ 1 , . . . , q ′ n }, such that σ = ⋃ 1≤i≤n Plays(q i ) = ⋃ 1≤i≤n Plays(q ′ i ). To conclude this paper, we show the following result.
▸ Theorem 26. There is a term of affine IPA: ⊢ M ∶ ((com ⊸ com ⊸ com ⊸ com ⊸ com ⊸ com) ⊸ com) ⊸ com such that ⟦M⟧ GM is causally ambiguous.
Proof. We first exhibit a causally ambiguous GM-strategy outside of the interpretation of affine IPA, and then sketch how the same phenomenon can be replicated via a term. Figure 6 displays five po-plays q 1 , . . . , q 5 , generating a GM-strategy σ = ⋃ 1≤i≤5 Plays(q i ) - the game A is the same as in Example 24. A rather tedious but direct verification ensures that they are all extremal: for that, it suffices to check that for each of these po-plays, dropping any of the causal links unlocks a play not yet in σ. For instance, dropping the diagonal immediate causal link in q 1 unlocks the play ⊖ 4 ⊕ 4 ⊖ 2 ⊕ 2 ∉ σ.
(Figure 6: the five po-plays q 1 , . . . , q 5 are drawn over A 1 ∥ A 2 ∥ A 3 ∥ A 4 , on the events ⊖ 1 , . . . , ⊖ 4 , ⊕ 1 , . . . , ⊕ 4 , and differ only in their causal links; the links themselves are legible only in the original figure.)
Then, we note that q 2 is redundant. Indeed, Plays(q 2 ) ⊆ Plays(q 1 ) ∪ Plays(q 3 ): as in Example 24, we cannot have at the same time ⊕ 4 ≤ s ⊖ 1 and ⊕ 2 ≤ s ⊖ 3 in s ∈ Plays(q 2 ). Perhaps less obviously, q 3 is redundant as well: we have Plays(q 3 ) ⊆ Plays(q 2 ) ∪ Plays(q 4 ) ∪ Plays(q 5 ). Indeed, take s ∈ Plays(q 3 ). If s ∉ Plays(q 4 ), then ⊕ 3 ≤ s ⊖ 4 . If s ∉ Plays(q 5 ), then either ⊕ 1 ≤ s ⊖ 2 or ⊕ 4 ≤ s ⊖ 3 , but the latter is incompatible as the constraints we already have on ⊖ 3 , ⊕ 3 , ⊖ 4 , ⊕ 4 yield a cycle. Thus ⊕ 1 ≤ s ⊖ 2 . But then if s ∉ Plays(q 2 ), then ⊕ 2 ≤ s ⊖ 1 or ⊕ 4 ≤ s ⊖ 3 , but both possibilities yield a cycle; absurd. None of q 1 , q 4 , q 5 are redundant: only q 2 and q 3 . Removing both q 2 and q 3 leads to the loss of the play ⊖ 3 ⊕ 3 ⊖ 4 ⊕ 4 ⊖ 1 ⊕ 1 . There are two distinct minimal sets of extremals {q 1 , q 3 , q 4 , q 5 } and {q 1 , q 2 , q 4 , q 5 }, both generating σ - so σ is causally ambiguous.
We replicate this in affine IPA. First, we replace each A with com. However, q 4 and q 5 do not have the causal link ⊖ 4 ⊕ 4 ; so we need five occurrences of com, organised as com 1 ∥ com 2 ∥ com 3 ∥ com 4 ∥ com ′ 4 , where run ′ 4 , done 4 play the role of ⊖ 4 , ⊕ 4 and ⊕ ′ 4 is ignored. This yields σ ′ ∶ com 1 ∥ com 2 ∥ com 3 ∥ com 4 ∥ com ′ 4 causally ambiguous. This is not a type of affine IPA (and σ ′ is not well-threaded), so instead we lift σ ′ to: σ ′′ ∶ ((com ⊸ com ⊸ com ⊸ com ⊸ com ⊸ com) ⊸ com) ⊸ com. Using variables, one can implement in affine IPA each of the po-plays corresponding in this type to the q i s above. It is also easy to define a non-deterministic choice operation in affine IPA, using which these are put together to define M such that ⟦M⟧ GM = σ ′′ . ◂
Conclusions
The phenomenon presented here is fairly robust, and causally ambiguous strategies would most likely emerge as well in other concurrent programming languages. Since interleaving games models are inherently related with observational equivalence as they exactly capture the observable behaviour of programs, it seems that unfortunately we cannot use the causal model presented here or those of e.g.
[START_REF] Rideau | Concurrent strategies[END_REF][START_REF] Castellan | The parallel intensionally fully abstract games model of PCF[END_REF] to give canonical compact representations of concurrent programs up to contextual equivalence. Causal structures are however still very relevant for other purposes (e.g. model-checking, error diagnostics, weak memory models, . . . ), and constructing them compositionally from programs remains an interesting challenge.
Rather than detailing explicitly the rest of the structure, we will inherit it from the forthcoming category PO. All laws will then follow from Proposition 4. ◂
Figure 3 Some maximal plays of the non-alternating game semantics of strict.
Figure 4 Some po-plays.
(. . . , a)) | pol A (a) = +} ∪ {((2, a), (1, a)) | pol A (a) = -}) + , where (-) + denotes the transitive closure of a relation. Then, c c © A ∶∶ A ⊥ ∥ A comprises all x ∥ y ∈ C (A ⊥ ∥ A) down-closed for ≤ C C © A , with the induced partial order.
(. . . ) for the type of references. By abuse of notation, we write ref for ref w ⊗ ref r . The PO-strategy interpreting strict is the composition of the PO-strategy with maximal po-play at the right hand side of Figure 5 (interpreting r ∶ ref w , r ∶ ref r ⊢ λf com⊸com . f (r ∶= tt); !r following Section 3.3), and cell ∶∶ ref for the memory cell (with maximal po-plays at the left hand side of Figure 5). Performing composition as above produces the two maximal po-plays of Figure 4b.
▸ Proposition 2. There is a compact closed category PO with arenas as objects, and PO-strategies σ ∶∶ A ⊥ ∥ B as morphisms from A to B, also written σ ∶ A PO → B.
▸ Definition 19. Let A, B be two negative arenas. The arena A ⊸ B has: Events/polarity: (∥ b∈min(B) A ⊥ ) ∥ B. Causality: (∥ b∈min(B) A ⊥ ) ∥ B, enriched with ((2, b), (1, (b, a))) for a ∈ A and b ∈ min(B). Conflict: (∥ b∈min(B) A ⊥ ) ∥ B, plus those inherited by (1, (b 1 , a)) ∼ (1, (b 2 , a)) for b 1 ≠ b 2 .
▸ Proposition 6. For any σ ∶ A, we have Extr(σ) ∶∶ A such that Plays(Extr(σ)) = σ.
Figure 6 Extremal generators q1, q2, q3, q4 and q5 of a causally ambiguous GM-strategy.
Acknowledgements. This work was partially supported by the LABEX MILYON (ANR-10-LABX-0070), and by the ERC Advanced Grant ECSYM. We are also grateful to Andrzej Murawski for interesting discussions on the topic.
41,084
[ "771792", "21545" ]
[ "35418", "35418", "458310" ]
01474668
en
[ "spi" ]
2024/03/04 23:41:46
2017
https://enpc.hal.science/hal-01474668/file/Towards_improved_HS_bounds-Post-print.pdf
Sébastien Brisard email: [email protected] Towards improved Hashin-Shtrikman bounds on the effective moduli of random composites Keywords: elasticity, homogenization, bounds, effective properties, local volume fraction The celebrated bounds of Hashin and Shtrikman on the effective properties of composites are valid for a very wide class of materials. However, they incorporate only a very limited amount of information on the microstructure (volume fraction of each phase in the case of isotropic microstructures). As a result, they are generally not tight. In this work, we present an attempt at improving these bounds by incorporating explicitly the local volume fraction to the set of local descriptors of the microstructure. We show that, quite unexpectedly, the process fails in the sense that the classical bounds are retrieved. We further show that this negative result applies to so-called weakly isotropic local descriptors of the microstructure (to be defined in this paper). This suggests that improved bounds may be obtained with anisotropic descriptors. Introduction Bounds on the effective properties of composites are very useful tools, as they provide exact safeguards for more elaborate estimates. Among all available bounds, those of Hashin and Shtrikman are probably the most useful, as they only require the volume fractions of the phases, and apply to a wide class of composites (namely, isotropic microstructures). The price to pay for this simplicity and generality is, of course, the fact that these bounds are usually relatively slack. That they are insensitive to relative sizes of the inclusions constitutes another major shortcoming. Sharper bounds have been produced, which improve on the bounds of Hashin and Shtrikman; see e.g. [START_REF] Milton | New bounds on effective elastic moduli of two-component materials[END_REF]. However, they generally involve complex statistical descriptors of the microstructure which are difficult to measure. Besides, it is not possible to chose these statistical descriptors, as they merely are an outcome of the whole optimization process. In this paper, we present an attempt at improving the classical bounds of Hashin and Shtrikman. To do so, we carry out the same optimization process as in the classical approach, with an enriched trial field. This is a potentially very flexible approach, since any local descriptor can be used as enrichment. As a first step, we use local volume fractions as supplementary local descriptors of the microstructure. This was suggested by previous work by Widjajakusuma et al. [START_REF] Widjajakusuma | Quantitative prediction of effective material properties of heterogeneous media[END_REF], and by the fact that such descriptors effectively introduce a length-scale (the size of the sliding window). The resulting bounds were expected to be sensitive to the relative size of the inclusions. The somewhat unexpected outcome of this approach is the fact that the resulting bounds coincide with those of Hashin and Shtrikman. In other words, the supplementary microstructural information was ignored by the optimization process. We were able to extend this negative result to the class of weakly isotropic local descriptors of the microstructure, that will be defined more precisely below. This now suggests to explore the class of anisotropic local descriptors. The present paper is organized as follows. 
The improved bounds on the macroscopic properties of composites that we seek in this work are derived by means of polarization techniques within the framework of linear elasticity. In Sec. 2, we provide a brief account of these techniques; in particular, we introduce the energy H of Hashin and Shtrikman [START_REF] Hashin | On some variational principles in anisotropic and nonhomogeneous elasticity[END_REF] (see also [START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF] for a modern presentation). In Sec. 3, we construct enriched trial fields which incorporate supplementary local descriptors of the microstructure. We then carry out the optimization process presented in [START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF] to derive bounds of the macroscopic properties, and show that these bounds fail to improve on the classical bounds of Hashin and Shtrikman [START_REF] Hashin | A variational approach to the theory of the elastic behaviour of polycrystals[END_REF]. This negative result is then extended to weakly isotropic local descriptors of the microstructure. Sec. 4 closes this paper with a few thoughts on how to overcome the limitation highlighted in Sec. 3. It should be noted that this paper makes use of the classical terminology of apparent stiffness and statistical volume element (SVE) [START_REF] Huet | Application of variational concepts to size effects in elastic heterogeneous bodies[END_REF][START_REF] Ostoja-Starzewski | Material spatial randomness: From statistical to representative volume element[END_REF]. The standard presentation of these techniques requires the use of the Green operator for strains of a bounded domain, which is generally unknown. Following Willis [START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF], it is usually replaced with the Green operator for strains of the whole space R d by means of a heuristic approximation, which was only recently justified by Brisard et al. [START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF], as summarized below. We consider a linearly elastic heterogeneous material occupying the d-dimensional domain Ω characterized by its indicator function χ χ(x) =        1 if x ∈ Ω, 0 otherwise. ( 1 ) For x ∈ Ω, C(x) denotes the local elastic stiffness of the composite, while C 0 denotes the (as yet unspecified) elastic stiffness of the so-called reference material. The modified Lippmann-Schwinger equation The modified Lippmann-Schwinger equation ( 2) requires the fourth-order Green operator for strains Γ ∞ 0 of the unbounded domain R d , associated with the reference material C 0 . In a prestressed, unbounded, homogeneous material with stiffness C 0 , it relates the local strain to the applied (possibly inhomogeneous) prestress. A more precise definition of this operator can be found elsewhere (e.g. 
[START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF][START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF][START_REF] Korringa | Theory of elastic constants of heterogeneous media[END_REF][START_REF] Zeller | Elastic constants of polycrystals[END_REF][START_REF] Kröner | On the physics and mathematics of self-stresses[END_REF]). The following modified Lippmann-Schwinger equation is introduced [START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF], with unknown τ (the stress polarization), supported in Ω (C -C 0 ) -1 : τ + Γ ∞ 0 τ -χτ = E, (2) where the loading parameter E is a symmetric, second-order tensor. In the remainder of this paper, overlined quantities denote volume averages over the domain Ω τ = 1 vol Ω x∈Ω τ(x) dx. ( 3 ) From the solution τ to Eq. ( 2), it is possible to construct a strain (resp. stress) field ε (resp. σ) as follows ε = E -Γ ∞ 0 [τ -χτ], (4a) σ = C 0 : ε + τ = C : ε, (4b) and it can be shown [START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF] that σ thus constructed is divergence-free in Ω and that, provided the domain Ω is ellipsoidal, ε = E. In other words 1. the loading parameter E coincides with the macroscopic strain, 2. ε is a compatible strain field, 3. σ is an equilibrated stress field, 4. ε and σ are associated through the local constitutive law of the heterogeneous material. Therefore, Eqs. ( 2) and ( 4) provide the solution to a new auxiliary problem (elastic equilibrium of the SVE) from which the apparent stiffness C app (C 0 ) can be defined σ = C app (C 0 ) : ε = C app (C 0 ) : E. (5) It should be noted that the apparent stiffness introduced above depends on the stiffness of the reference material, C 0 . It can be shown [START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF] that it is positive definite, and bounded from below (resp. above) by the apparent stiffness relating to static (resp. kinematic) uniform boundary conditions (defined in e.g. [START_REF] Kanit | Determination of the size of the representative volume element for random composites: statistical and numerical approach[END_REF]). As a consequence, the apparent stiffness defined through Eq. ( 5) is consistent in the homogenization sense: for statistically homogeneous and ergodic materials, it tends to the effective stiffness as the size of the domain Ω grows to infinity, regardless of the size of the SVE Ω. The principle of Hashin and Shtrikman For any trial field τ, the energy of Hashin and Shtrikman is defined as follows H(τ) = τ : E - 1 2 τ : (C -C 0 ) -1 : τ - 1 2 τ : Γ ∞ 0 τ -χτ . (6) It can be shown [START_REF] Brisard | New boundary conditions for the computation of the apparent stiffness of statistical volume elements[END_REF] that the solution τ to the modified Lippmann-Schwinger equation ( 2) is a critical point of H. Furthermore 1 2 E : C app (C 0 ) : E = 1 2 E : C 0 : E + H(τ). (7) The extremum principle of Hashin and Shtrikman can then be stated under further assumptions on the stiffness of the reference material 1. if C(x) ≤ C 0 for all x ∈ Ω, then H is minimal at τ, and for any trial field τ 1 2 E : C app (C 0 ) : E ≤ 1 2 E : C 0 : E + 1 2 H(τ), (8) 2. 
if C(x) ≥ C 0 for all x ∈ Ω, then H is maximal at τ, and for any trial field τ 1 2 E : C app (C 0 ) : E ≥ 1 2 E : C 0 : E + H(τ), (9) where inequalities between fourth-order tensors should be understood in the sense of the underlying quadratic forms. The classical bounds of Hashin and Shtrikman In this section, and in the remainder of this paper, Greek indices always refer to material phases. Besides, random variables are indexed by ω. The celebrated bounds of Hashin and Shtrikman were initially derived in [START_REF] Hashin | A variational approach to the theory of the elastic behaviour of polycrystals[END_REF]; a more modern proof was proposed by Willis [START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF], who also considered the case of ellipsoidal distributions. Extension to ellipsoidal inclusions with different ellipsoidal distributions is due to Ponte Castañeda and Willis [START_REF] Castañeda | The effect of spatial distribution on the effective behavior of composite materials and cracked media[END_REF]. The random composite under consideration is made of N linearly elastic, perfectly bounded phases. For α = 1, . . . , N and x ∈ Ω, χ α (x; ω) denotes the indicator function at point x of phase α; f α denotes the volume fraction of phase α: f α = χ α ω (where angle brackets denote ensemble averages). The local stiffness of the composite reads C(x; ω) = N α=1 χ α (x; ω)C α , (10) where C α denotes the stiffness of phase α. To derive the bounds of Hashin and Shtrikman, the following trial field is selected τ(x; ω) = N α=1 χ α (x; ω)τ α , (11) where τ1 , . . . , τN are N deterministic symmetric, second-order tensors. Assuming that the reference medium is stiffer than all phases of the composite, Eq. ( 8) gives 1 2 E : C app (C 0 ; ω) : E ≤ 1 2 E : C 0 : E + H(τ 1 , . . . , τN ; ω), (12) where H(τ 1 , . . . , τN ; ω) = H(τ; ω) is a quadratic form of τ1 , . . . , τN . Taking the ensemble average in Eq. ( 12) and passing to the limit of infinite domains Ω leads to 1 2 E : C eff : E ≤ 1 2 E : C 0 : E + H(τ 1 , . . . , τN ; ω) ω , (13) where C eff denotes the effective stiffness of the composite. The ensemble average H(τ 1 , . . . , τN ; ω) ω is a deterministic quadratic form of τ1 , . . . , τN . It can be minimized with respect to these parameters, in order to produce the sharpest bounds on the effective stiffness in Eq. ( 13). For a wide class of composites, the resulting bound can be computed explicitly [START_REF] Willis | Bounds and self-consistent estimates for the overall properties of anisotropic composites[END_REF][START_REF] Castañeda | The effect of spatial distribution on the effective behavior of composite materials and cracked media[END_REF]; for isotropic composites, these bounds depend on the volume fraction and stiffness of each phase only. 3 Towards improved bounds on the effective moduli? Construction of enriched trial fields The trial field [START_REF] Kröner | On the physics and mathematics of self-stresses[END_REF] considered by Hashin and Shtrikman [START_REF] Hashin | A variational approach to the theory of the elastic behaviour of polycrystals[END_REF] includes one point microstructural information only: the polarization stress at point x ∈ Ω is totally defined by the phase at x. Our aim in the present paper is to produce sharper bounds, by providing more microstructural information to the optimization process described in Sec. 2.3. 
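As a point of comparison for what follows, the classical bounds recalled in Sec. 2.4 admit, for a two-phase isotropic composite, well-known closed-form expressions. The sketch below is a minimal implementation of the textbook formulas (it assumes well-ordered phases; the function name and the numerical values of the example are arbitrary, and the exact coefficients should be checked against the original references before reuse).

```python
def hs_bounds_bulk_shear(K1, G1, f1, K2, G2, f2):
    """Classical two-phase isotropic Hashin-Shtrikman bounds (sketch).
    Phases are assumed well ordered (K1 <= K2 and G1 <= G2), so that taking
    phase 1 as reference gives the lower bound and phase 2 the upper bound."""
    def k_bound(Ka, Ga, fa, Kb, fb):
        # bound on the effective bulk modulus with phase a as reference
        return Ka + fb / (1.0 / (Kb - Ka) + 3.0 * fa / (3.0 * Ka + 4.0 * Ga))
    def g_bound(Ka, Ga, fa, Gb, fb):
        # bound on the effective shear modulus with phase a as reference
        return Ga + fb / (1.0 / (Gb - Ga)
                          + 6.0 * fa * (Ka + 2.0 * Ga) / (5.0 * Ga * (3.0 * Ka + 4.0 * Ga)))
    K_lower, K_upper = k_bound(K1, G1, f1, K2, f2), k_bound(K2, G2, f2, K1, f1)
    G_lower, G_upper = g_bound(K1, G1, f1, G2, f2), g_bound(K2, G2, f2, G1, f1)
    return (K_lower, K_upper), (G_lower, G_upper)

# e.g. a stiff phase (2) at 30 % volume fraction dispersed in a softer matrix (1)
print(hs_bounds_bulk_shear(K1=10.0, G1=5.0, f1=0.7, K2=100.0, G2=60.0, f2=0.3))
```

For a stiff phase dispersed in a softer matrix, the two returned intervals bracket every isotropic effective modulus compatible with the volume fractions alone, which is precisely the information content that the enriched trial fields introduced below attempt to go beyond.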
In other words, we will consider an enrichment of the trial fields [START_REF] Kröner | On the physics and mathematics of self-stresses[END_REF]. As already argued in [START_REF] Widjajakusuma | Quantitative prediction of effective material properties of heterogeneous media[END_REF], the local volume fraction is a local descriptor of the microstructure which is believed to play a significant role on the macroscopic properties; we propose trial fields that incorporate this descriptor. The local volume fraction is defined in this paper as the volume fraction of a specified phase contained in a sliding window of specified size. The present derivation is restricted to spherical windows of radius a. The local volume fraction of phase α at point x ∈ Ω is the following quantity fα (x, a; ω) = 1 W y ≤a χ α (x + y; ω) dy, (14) where W denotes the volume of the spherical window. The local volume fraction is a random field; its expectation coincides with the global volume fraction f α . The fα are linearly dependent ; indeed, f1 + • • • + fN = 1. As a consequence, only the f1 , . . . , fN-1 should be included in the proposed enriched trial field. For the sake of simplicity, the remainder of this paper is restricted to two phase materials (N = 2). Therefore, the only local descriptor of the microstructure to be considered is the local volume fraction of phase 1, which will be abusively called the local volume fraction, and denoted f ; besides, the radius a of the spherical window will also be dropped, so that we will write f (x; ω) rather than f1 (x, a; ω). We consider trial fields which are polynomials of the local volume fraction τ(x; ω) = 2 α=1 p k=0 χ α (x; ω) f (x; ω) k ταk , ( 15 ) where ταk is a deterministic (symmetric) tensor. Obviously, the classical trial field ( 11) is retrieved with p = 0; p ≥ 1 effectively leads to an enrichment of the set of trial fields. In turn, this enrichment is expected to lead to sharper bounds on the effective properties of the microstructure. Evaluation of the energy of Hashin and Shtrikman Following the approach described in Sec. 2.3, we must evaluate the ensemble average of H for the trial field specified by Eq. ( 15). Each term of H is evaluated separately below. Introducing the following moments of the local volume fraction Y αk (x) = χ α (x; ω) f (x; ω) k ω ( 16 ) it is readily verified that for statistically homogeneous materials, Y αk does not depend on the observation point x. Indeed, Y αk (x) = 1 W k χ α (x; ω) k i=1 y i ≤a χ 1 (x + y i ; ω) dy i ω = 1 W k y 1 ,..., y k ≤a χ α (x; ω) k i=1 χ 1 (x + y i ; ω) ω dy 1 • • • dy k , and the integrand in the last line does not depend on x, due to statistical homogeneity. Evaluation of the first term of H(τ) is trivial τ = 2 α=1 p k=0 Y αk ταk . (17) The second term of the ensemble-averaged energy of Hashin and Shtrikman reads 1 2 τ : (C -C 0 ) -1 : τ = 1 2V x∈Ω α,β,h,k χ α (x; ω)χ β (x; ω) f (x; ω) h+k ταh : [C(x; ω) -C 0 ] -1 : τβk dx ω , where V denotes the volume of the domain Ω. Observing that χ α (x; ω)χ β (x; ω) = 0 for α β, and that χ α (x; ω)C(x; ω) = χ α (x; ω)C α we finally find τ : (C -C 0 ) -1 : τ = α,h,k Y α,h+k ταh : (C α -C 0 ) -1 : ταk . (18) Evaluation of the last term is more complex ; first, application of the Green operator for strains is written as a convolution product τ : Γ ∞ 0 τ -χτ = 1 V x,y∈Ω τ(x) : Γ ∞ 0 (y -x) : τ(y) -τ dx dy, where the above integral should be understood in the sense of principal values (see e.g. 
[START_REF] Torquato | Effective stiffness tensor of composite media-I. Exact series expansions[END_REF]). Substituting in the above equation the general form (15) of the trial field, and taking the ensemble average leads to
⟨τ : Γ ∞ 0 [τ - χτ]⟩ = (1/V) Σ α,β,h,k ∫∫ x,y∈Ω [Z αh,βk (y - x) - Y αh Y βk ] ταh : Γ ∞ 0 (y - x) : τβk dx dy,
where Z αh,βk (x, y) = ⟨χ α (x; ω) χ β (y; ω) [ f (x; ω)] h [ f (y; ω)] k ⟩ ω . (19)
The above statistical descriptor of the microstructure is translation-invariant [Z αh,βk (x, y) = Z αh,βk (y - x)]. Further assuming that the microstructure is statistically isotropic, so that Z αh,βk (x, y) depends on the norm of (y - x) only [Z αh,βk (x, y) = Z αh,βk (∥y - x∥)], it can be shown that
⟨τ : Γ ∞ 0 [τ - χτ]⟩ = Σ α,h,k Y α,h+k ταh : P 0 : ταk - Σ α,β,h,k Y αh Y βk ταh : P 0 : τβk , (20)
where P 0 denotes the Hill tensor of a spherical inclusion embedded in the reference material C 0 . Gathering Eqs. (17), (18) and (20) leads to the following expression of the ensemble averaged energy of Hashin and Shtrikman
⟨H(τ)⟩ = Σ α,k Y αk ταk : E - (1/2) Σ α,h,k Y α,h+k ταh : [(C α - C 0 ) -1 + P 0 ] : ταk + (1/2) Σ α,β,h,k Y αh Y βk ταh : P 0 : τβk . (21)
3.3 Determination of the optimum trial field
Optimization of expression (21) with respect to ταk leads to the following characterization of the critical point
Σ k Y α,h+k [(C α - C 0 ) -1 + P 0 ] : ταk = Y αh (E + Σ β,k Y βk P 0 : τβk ), (22)
for α = 1, 2 and h = 1, . . . , N. The last term involves the ensemble average of the trial field τ [see Eq. (17)], and we have
Σ k Y α,h+k [(C α - C 0 ) -1 + P 0 ] : ταk = Y αh (E + P 0 : ⟨τ⟩). (23)
Introducing the inverse X α,hk of Y α,h+k in the following sense
Σ ℓ X α,hℓ Y α,ℓ+k = Σ ℓ Y α,h+ℓ X α,ℓk = δ hk , (24)
the solution to Eqs. (23) is readily found
[(C α - C 0 ) -1 + P 0 ] : ταh = (Σ k X α,hk Y αk ) (E + P 0 : ⟨τ⟩).
Then, from Eq. (24), Σ k X α,hk Y αk = Σ k X α,hk Y α,k+0 = δ h0 , which shows that ταh = 0 for h ≠ 0, and the optimum trial field reduces to the classical form given by Eq. (11). In other words, we get the surprising result that the enriched trial field (15) does not improve the classical Hashin and Shtrikman bounds on the effective elastic properties. This result is briefly extended to a wider class of enriched trial fields in 3.4 below.
3.4 Extension to a wider class of trial fields
It can be shown that the above results extend to a much wider class of trial fields. We consider here n local descriptors of the microstructure φ 1 (x; ω), . . . , φ n (x; ω), and the following trial field
τ(x; ω) = Σ α=1..N Σ k=1..n χ α (x; ω) φ k (x; ω) ταk , (25)
where ταk is again a deterministic, second order, symmetric tensor. In order to ensure that Eq. (25) is indeed an enrichment of Eq. (11), we choose φ 1 (x; ω) = 1. It is assumed that these local descriptors of the microstructure are weakly isotropic in the sense that the following two-point statistical descriptors
⟨χ α (x; ω) φ h (x; ω) χ β (y; ω) φ k (y; ω)⟩ ω (26)
depend on the norm ∥y - x∥ of the radius-vector only. Under this assumption, it can be shown that optimization of H(τ) with respect to ταh again leads to ταh = 0 for h ≠ 1. This means that the classical bounds of Hashin and Shtrikman are again retrieved.
4 Conclusion and outlook
In this paper, we have presented an attempt at improving the classical bounds of Hashin and Shtrikman [START_REF] Hashin | A variational approach to the theory of the elastic behaviour of polycrystals[END_REF], by considering enriched trial fields which incorporate non-trivial local descriptors of the microstructure. By contrast, the only local descriptor used to derive the classical bounds is the phase at the observation point. We first try to incorporate the local volume fractions as supplementary descriptors. This was suggested by previous work by Widjajakusuma et al. [START_REF] Widjajakusuma | Quantitative prediction of effective material properties of heterogeneous media[END_REF], and by the fact that this descriptor effectively introduces a length-scale (the size of the sliding window). We were therefore hoping to be able to produce bounds that would be sensitive to e.g. particle-size distributions (which is not the case of the classical bounds). However, our derivation shows that optimization of the ensemble-averaged energy of Hashin and Shtrikman again leads to the classical bounds. The supplementary descriptors are therefore totally ignored. This somewhat unexpected result was then extended to a very wide class of local descriptors of the microstructure. Does this mean that improving the bounds of Hashin and Shtrikman is a hopeless task? Not necessarily. Indeed, the result presented in this paper is obtained under the assumption of isotropic probing of the microstructure [see Eq. (26)]. In other words, it is assumed that the two-point cross-correlations of all local descriptors only depend on the distance between the two observation points. This strongly suggests to use anisotropic probes; this will be investigated in future work.
22,155
[ "2804" ]
[ "204904" ]
01474688
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01474688/file/978-3-642-40919-6_2_Chapter.pdf
Santiago Aguirre email: [email protected] Carlos Parra email: [email protected] Jorge Alvarado email: [email protected] Combination of process mining and simulation techniques for business process redesign: a methodological approach Keywords: process mining, data mining, simulation, process redesign, BPM Organizations of all sizes are currently supporting their performance on information systems that record the real execution of their business processes in event logs. Process mining tools analyze the log to provide insight on the real problems of the process, as part of the diagnostic phase. Nonetheless, to complete the lifecycle of a process, the latter has to be redesigned, a task for which simulation techniques can be used in combination with process mining, in order to evaluate dierent improvement alternatives before they are put in practice. In this context, the current work presents a methodological approach to the integration of process mining and simulation techniques in a process redesign project. Introduction Information systems have become the backbone of most organizations. Without them, companies could not sell products or services, purchase materials, pay suppliers or submit their tax reports. These systems record valuable information about process execution on event logs containing activities, originators, timestamps and case data. This information can be extracted and analyzed to produce useful knowledge for organizations to diagnose and improve their business processes. This is called process mining [START_REF] Van Der Aalst | Process mining manifesto[END_REF] . Process mining is a discipline that aims to discover, monitor and improve business processes by extracting knowledge from information systems event logs [START_REF] Van Der Aalst | Process Mining: Discovery, Conformance and Enhancement of Business Process[END_REF], making use of data mining techniques. Event logs record information about real business process execution and are available in Process Aware Information Systems (PAIS) such as BPM, ERP, CRM, Workow Management Systems, etc. [START_REF] Stahl | Modeling Business Process: A Petri Net-Oriented Approach[END_REF]. Process mining is, therefore, a recent discipline that lies between data mining and process modeling and analysis [START_REF] Van Der Aalst | Process mining manifesto[END_REF]. The ultimate goal of process mining is to generate useful knowledge for organizations to understand and improve their business processes mainly through the application of data-mining-based tools. Figure 1 shows the three components of process mining [START_REF] Van Der Aalst | Process Mining: Discovery, Conformance and Enhancement of Business Process[END_REF]: Discovery, Conformance and Enhancement. Fig. 1. Process mining components Table 1 details the organizational enhancement possibilities oered by process mining through its three components. For its part, Business Process Simulation provides techniques for testing solutions before their actual implementation. Both simulation and process mining contribute to Business Process Lifecycle [START_REF] Weske | Business Process Management: Concepts, Languages, Architectures[END_REF] (Figure 2), which starts with business process design based on customer and stakeholder requirements. Next, the implementation stage comprises business rules and policy denition as well as computer platform conguration. Then, the process enters its actual execution stage. 
Later, in the monitoring and analysis stage, the process is optimized by measuring and analyzing its performance indicators. Table 2 details the contribution of simulation and process mining to each stage of the cycle. Component Application Process Discovery Finding out how the process actually runs. Process mining algorithms applied to the analysis of event logs allow organizations to clearly see and model the real execution of a process in terms of either a Petri net or BPMN notation. The point here is that process mining describes the real situation and is not based on people's (subjective) perception [START_REF] Rozinat | Discovering colored petri nets from event logs[END_REF]. Conformance Checking Determining whether the process complies with regulations and procedures. The real execution model of a business process can be compared to documented procedure protocols in order to determine its conformance with established standards, regulations and policies. Process mining has proved useful for detecting potential sources of fraud and non-compliance [START_REF] Jans | A business process mining application for internal transaction fraud mitigation[END_REF]. Process Enhacement Analyzing the social interaction of the process. Through the application of process mining techniques, it is possible to assess the social network supporting the process, in order to analyze interactions between individuals and discover loops that may delay its execution [START_REF] Van Der Aalst | Business process mining: an industrial application[END_REF]. These techniques are also used to interpret roles in the process as an example group of users involved only in one task. Discovering bottlenecks (bottlenecks). These techniques allow nding actual bottlenecks on which action can be taken to improve process implementation. Predicting specic time cycles. Certain data mining techniques such as decision trees facilitate the prediction of the remaining execution time of a running process [START_REF] Van Der Aalst | Time prediction based on process mining[END_REF], [START_REF] Aguirre | Aplicacion de mineria de procesos al proceso de compras de la puj[END_REF]. Implementation and Execution In the implementation phase, process mining is used to verify that the process complies with business policies and rules. It is also possible to predict the remaining execution time of a running case. Having been tested and improved through simulation, business processes are implemented in this phase. Monitoring and analysis In the analysis phase, process Fig. 2. Business Process Lifecycle According to the authors of the Process Mining Manifesto [START_REF] Van Der Aalst | Process mining manifesto[END_REF], one of the challenges that must be addressed to improve the usability of process mining is its integration with other methodologies and analysis techniques. A clear example is provided by simulation tools, which are likely to complement process diagnosis and analysis by testing alternative process mining implementation scenarios as part of the business process lifecycle. Most simulation techniques have been applied to production and logistics, where process routes are predened and can therefore be more easily modeled. However, in service processes such as complaint appraisal and response, there can be many variations or process routes depending on the type of complaint. 
For this reason, it is important to start by analyzing the information system's event log in order to reach a realistic model, rather than an idealized version of the process. The necessary parameters to build such a model can be supplied by process mining. Furthermore, a series of methodological approaches have been developed for business process redesign and for the application of simulation and process mining. BP trends [START_REF] Harmon | Business Process Change[END_REF] proposes ve general stages for a process redesign eort: 1) Project Understanding, 2) Business Process Analysis, 3) Business Process Redesign, 4) Business Process Redesign Implementation and 5) Redesigned Business Process Roll Out. Although this methodology constitutes a valuable approach, it needs to be complemented with specic tools such as simulation and process mining. The current paper presents a methodological approach to process redesign, based on a combination of simulation techniques and both data and process mining tools, together with those of the understanding phase of the BP trends Business Process Redesign methodology [START_REF] Harmon | Business Process Change[END_REF]. Section 2 contains a complete review of the state of the art and related works. Section 3 provides a detailed explanation of the methodological approach and Section 4 describes the case study to which the method was applied. Finally, Section 5 draws the conclusions and future work. 2 Literature review and related works The literature review focuses on previous methodological developments intended not only for the application of process mining to business process improvement, but for the combination of process mining and simulation as well. Methodologies for process mining Bozcaya [START_REF] Bozkaya | Process diagnostics: A method based on process mining, information, process, and knowledge management[END_REF] proposed a methodology for applying process mining to business process diagnosis, based on three perspectives: control ow, performance and organizational analysis. The method in question starts with Log preparation, which includes event log extraction, interpretation and transformation, in order to determine the activities and their sequence. The next step is to inspect and clean the event log data to eliminate cases with missing data. Once the log has been cleaned, the control ow analysis is performed for conformance checking against procedures through the application of discovery techniques like alpha, fuzzy of genetic algorithms. As a next step, these same authors propose performance analysis in order to discover business process bottlenecks and delays. Finally, the social network algorithms are used to apply an organizational analysis aimed not only at determining role interactions involved in process execution, but at discovering loops that might be delaying process cycling time. This method was applied to a case study and constitutes an important step ahead in the diagnostic phase of process redesign. Nevertheless, this phase needs to be complemented with the understanding (planning), redesign (to-be) and implementation stages of a complete business process redesign cycle. Rebuge and Ferreira [START_REF] Rebuge | Business process analysis in healthcare environments: A methodology based on process mining[END_REF] developed a methodological approach to business process analysis in the health care sector. 
They start by describing the complexity of business processes in this sector, which are inherently dynamic, multidisciplinary and highly variable. Therefore, process mining techniques are the most suitable ones for diagnosing and analyzing these processes. These researchers describe their method as an extension of Bozcaya's one [START_REF] Bozkaya | Process diagnostics: A method based on process mining, information, process, and knowledge management[END_REF], on which they based their work, including the sequence cluster analysis applied by this author after the log inspection phase. Rebuge and Ferreira [START_REF] Rebuge | Business process analysis in healthcare environments: A methodology based on process mining[END_REF] actually focus on this cluster technique, which is aimed at discovering process ow patterns. When applied to the emergency care process of a hospital, this method allowed identifying all variations and deviations from the internal protocols and guidelines of the institution, thus demonstrating the usefulness of process mining for diagnosis and analysis in these cases. As to future work, they suggest complementing the method with additional steps such as the use of heuristics for determining the number of clusters, on the one hand, and the establishment of measures for evaluating the quality of the results, on the other hand. Process mining and simulation The literature on this topic presents research works and case studies in which process mining and simulation are used in combination. Rozinat [START_REF] Rozinat | Discovering colored petri nets from event logs[END_REF] uses process mining techniques to discover business processes and, based on past executions, analyzes how data attributes inuence decisions on said processes. This analysis allows nding each event's probabilities and frequencies, based on which a model is constructed and represented by a Colored Petri Net (CPN), in order to simulate dierent resource usage optimization and throughput time reducing alternatives. Maruster [START_REF] Maruster | Redesigning business processes: a methodology based on simulation and process mining techniques[END_REF]proposes a process redesign methodology based on the combination of process mining and simulation techniques, and presents its application to three case studies. Mainly supported by CPN simulation, the method consists of three phases: process performance variable denition, process analysis (as-is), and process redesign (to-be). This approach constitutes an important step forward in the integration of dierent tools in this eld. However, it focuses on CPN simulation, thus tending to underscore the understanding phase, which is seen, according to Harmon [START_REF] Harmon | Business Process Change[END_REF], as the rst stage of any process redesign project. The current work focuses on complementing the methodology proposed by Maruster [START_REF] Maruster | Redesigning business processes: a methodology based on simulation and process mining techniques[END_REF], by emphasizing the project understanding phase, featured by process scope analysis, process redesign goal setting and performance gap analysis. According to Vander Alast [START_REF] Van Der Aalst | Process mining manifesto[END_REF] one of the reasons why process mining has not been widely applied is the lack of a comprehensive methodology that is capable of linking organization Key Process Indicators (KPIs) with actual analysis and redesign eorts. 
The methodology proposed in this paper intends to close this gap by linking business priorities to process analysis and redesign using process mining and simulation tools. Just as well, it shows how data mining tools (e.g., decision trees) can be combined with simulation in a process redesign project. Part of this method was applied to the case study described in Section 4 1 . 3 The redesign project: a methodological approach Including process mining and simulation tools, the development of the current methodology took into consideration both BPtrends method [START_REF] Harmon | Business Process Change[END_REF] and Maruster's [START_REF] Stahl | Modeling Business Process: A Petri Net-Oriented Approach[END_REF] approach. It comprises the following phases: Phase I: Project Understanding. The goal of this phase is to gain consensus over the problem to be solved, the scope of the project and the desired goals as stated in terms of the business process indicators. 1 Some data about this case study has been modied forprivacy reasons. Phase II: Project Understanding. The goal of this phase is to gain consensus over the problem to be solved, the scope of the project and the desired goals as stated in terms of the business process indicators. Phase III: Business Process Redesign (to-be). This phase is intended to develop and simulate the corresponding business process improvement alternatives. Phase IV: Implementation. The goal of the implementation phase is to put in operation the amendments in question through changes in procedures, job descriptions and work assignments. Figure 3 and Table 3 explain the activities and tools that describe the proposed methodology. Fig. 3. Proposed methodology with phases, activities and tools. The desired amendments of the to-be process turn to be the project goals. Phase II: Business Process Analysis (AS IS) Table 3. Description of the phases and activities of the proposed methodology. 4 Case study: procurement process at a private University The case study to which we applied the proposed methodology consists in the procurement process of a private university that handles approximately 15,000 purchase orders every year, with an estimated budget of $ US 50 million. The normal functioning of the University and its projects depends on the eciency of the Procurement Department in obtaining the required goods and services. The procurement process is supported by an ERP system 2 in which the fol- lowing activities are executed: purchase requisition, requisition approval, purchase order, purchase order approval, goods receipt, invoice receipt and vendor payment. Application of the methodological approach The methodology presented in the current work was applied to this case study using process mining and simulation techniques. The following is the detailed step by step explanation of the process. A. Phase I: Understanding the Project In this phase, the problem is described, the gap analysis between as-is and to-be is performed, and the project goals are set. Problem description 2 The organization uses Oracle PeopleSoft ®. Despite the support of an integrated system, the procurement process in question has been presenting problems and inconveniences such as long approval waiting times and overload of manual documents and activities not managed by the ERP. This makes the process inecient, as only 32% of orders are delivered within 1 month, which is the user expected time. 
The users (professors, research and administrative sta ) frequently present complaints about delays and excessive paperwork in the process. Although professors must make purchases for research projects having 1 or 2 year time frames, the purchase of an imported good may take more than 6 months, which certainly impacts the schedule of these projects. Process scope and stakeholder identication Figure 4 shows the process scope diagram, where it can be seen that the main input is the requisition made by departments and areas of the university. Said requisition starts a process that nishes when the product is delivered to the areas and the supplier has been paid. There is a procedure for the order approval subprocess, but there is no business rule specifying the maximum time allowed for this step. There is also a good governance code for managing suppliers and contracts. Enablers correspond to two dierent resources of the process: the information system (ERP system) and the sta involved in the process. The shaded boxes in gure 4 represent the process stakeholders: departments, suppliers, purchasing board, and both the Administrative and IT oces. Gap analysis The gap analysis was used to represent the current (as-is) and expected (to-be) process performances, as mediated by process redesign. In order to determine the expected performance, it is important to ask the stakeholders why the process should be improved and what the expected performance is in terms of key process indicators. Figure 5 shows the main performance and capability gaps and the tools that were used in the analysis and improvement of the process. Fig. 5. Gap analysis Denition of project goals The desired amendments of the to-be process turn to be the project goals: Reducing cycle time to ensure that 70% of orders are delivered within 1 month. Reduce the number of user complaints. B. Phase II: Analyzing Business Process (as-is) This phase begins with event log data extraction in order to discover the real process model and to apply data mining techniques for an in-depth process analysis. The objective of this phase is to establish process improvement opportunities. Event log extraction In this phase, the event log is extracted from the ERP system. The information supplied by the log includes case id, time stamps, activities and performers (originators) of the procurement process. In addition, there is information regarding each order such as requested product or service, supplier, requesting department, cost, product family and the person approving each purchase requisition or order. The original log contained one year of historical data, corresponding to 15,091 cases. The quality of the log was inspected in the statistical package 3 , which allowed nding some missing data and outliers. After cleaning the log, the cases were reduced to 8,987. Process discovery Through the application of process mining algorithms such as alpha mining [START_REF] Van Der Aalst | Workow mining: discovering process models from event logs[END_REF], heuristic mining [START_REF] Weijters | Flexible heuristics miner[END_REF] or genetic mining [START_REF] Medeiros | Genetic process mining: An experimental evaluation[END_REF] it is possible to automatically discover the actual process model using the ProM software functionality. This model can be represented in a Petri Net, or through BPMN notation. 
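Before turning to the analysis of the discovered model, note that the descriptive statistics reported in the next paragraphs require nothing more than the cleaned event log seen as a table. A minimal pandas sketch is given below (the column names and the two toy cases are illustrative placeholders, not the actual PeopleSoft export).

```python
import pandas as pd

# One row per event: case id, activity, originator (buyer) and timestamp.
log = pd.DataFrame({
    "case_id":   [1, 1, 1, 2, 2, 2],
    "activity":  ["purchase requisition", "requisition approval", "purchase order",
                  "purchase requisition", "requisition approval", "purchase order"],
    "buyer":     ["A", "A", "A", "B", "B", "B"],
    "timestamp": pd.to_datetime(["2012-01-02", "2012-01-20", "2012-02-25",
                                 "2012-01-05", "2012-01-09", "2012-01-30"]),
})

# Cycle time per case = last event minus first event
cycle = (log.groupby("case_id")["timestamp"]
            .agg(start="min", end="max")
            .assign(days=lambda d: (d["end"] - d["start"]).dt.days))

# Attach the buyer handling each case and summarise cycle times per buyer
buyer_per_case = log.groupby("case_id")["buyer"].first()
per_buyer = cycle.join(buyer_per_case).groupby("buyer")["days"].describe()

# Share of orders delivered within one month
within_month = (cycle["days"] <= 30).mean()
print(per_buyer, within_month)
```

Applied to the full set of 8,987 cleaned cases, the same grouping yields the per-buyer distributions of Figure 7 and the share of orders delivered within one month.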
Because of its mathematical foundations, process mining uses Petri Nets in most applications, which allows the implementation of analysis techniques [START_REF] Stahl | Modeling Business Process: A Petri Net-Oriented Approach[END_REF]. Some case studies make use of Colored Petri Nets (CPN) because of their simulation capabilities in packages such as CPN tools [START_REF] Rozinat | Discovering simulation models[END_REF]. In the current case, the alpha algorithm was used due to the low complexity of the process model and paths. For more complex processes, genetic algorithms or the heuristic mining algorithm are recommended. Figure 6 shows the studied procurement process modeled in a Petri Net. Fig. 6. Procurement process Process performance analysis The event log was assessed through descriptive statistics in order to analyze 3 IBM SPSS® was used for the data mining analysis. some key process indicators such as cycle time, cycle time per buyer and buyer productivity, among others. Figure 7 shows a box plot of cycle times per buyer, which exhibits a high variability in mean time cycles between buyers and, in some cases, high variability within buyers. This analysis suggests an inuence of the buyer in time cycles. This inuence is going to be analyzed in depth in the data mining analysis section. Fig. 7. Procurement cycle times per buyer Table 4 presents some key ndings of the process performance analysis. The main bottleneck of the process is the purchase requisition approval subprocess Mean cycle time is 50 days, with a standard deviation of 28 days. Only 32% of orders are delivered within 1 month. Imports require thrice as much more time than local purchases. The minimum time required for an imported good is 40 days. The mean cycle time per buyer is highly variable (Fig 7). Table 4. Key ndings of the process performance analysis. Data mining analysis For a more detailed diagnosis of the purchase requisition approval subprocess, a decision tree analysis was made to discover the roles of the organization that delay the process. The database was split in three parts: training (40% of the records), validation (40% of the records) and test (20% of the records). The decision tree was growth and pruned using the classication and regression tree algorithm (CART) and Gini impurity. The CART procedure minimizes classication error given a tree size [START_REF] Shmueli | Data mining for business intelligence[END_REF] Figure 8 shows the tree results in test data. Figure 8 shows that if the order must be approved by the roles in node 2, the probability that the requisition arrives before 30 days is 1%. When the approvers are those in node 1, the odds of receiving the request within 30 days rise to 50%. Fig. 8. Decision tree for purchase requisition approval Table 5 presents the key ndings of the purchase requisition approval decision tree analysis. The person that approves the purchase request has a signicant impact on the probability of receiving the request within 30 days. Table 5. Key ndings of the purchase requisition approval decision tree analysis. Root cause analysis The Cause and Eect analysis was used to determine the cause of the problem. Through this tool, the roles involved in the execution of the process identied the major causes of delay in purchase requisition approval. Figure 4.1 shows these causes as classied by categories. Fig. 9. Root cause analysis Table 6 presents the key ndings of this analysis. 
One of the main approval delay causes is that the physical documents that are handled in the process are not managed in a central repository. Given that there is no business rule determining a time limit for approvals, approvers do not give the required priority to this process. Table 6. Key ndings of this analysis. C. Phase III: Redesigning the Business Process (to-be) In the redesign stage, the dierent process improvement options are simulated and evaluated. Simulation model Based on the process discovered in phase 1, and on processing (P) and waiting times (W) calculated through the statistical analysis of the event log data, a simulation model was generated. Figure 10 shows the simulation model, which was obtained in the Process Modeler application. Process improvement alternatives The process improvement alternatives were dened to overcome the issues found in the as-is analysis phase. Said alternatives are shown in table 8. Process improvement alternative 1. Removing the purchase order approval process. Out of the 8,987 analyzed cases, no purchase order was rejected, so this control can be eliminated, the responsibility lying on the purchase requisition approval process. 2. Establishing an approval time limit business rule. Said business rule would dene that the approvers have a maximum of 5 days for purchase requisition approval. Lessons learned One of the key success factors for the implementation of the proposed methodology is the involvement of the people (users) playing a role in the actual execution of the business process. User knowledge is crucial for the interpretation of the cases, activities and variables of the process' event log, especially when it comes to preventing data misinterpretation and organizing a log that represents the actual execution of the process. Data extraction from, and cleansing of the event log is a crucial step that must be carried out in close connection with the users because they are the ones who know the real facts about outlier values and missing or wrong data. The sequence proposed in this methodology does not necessarily have to be executed in that same order. Tools like process performance analysis, data mining and root cause analysis can be used in any order and may be complemented with other tools like Statistical Process Control from Six Sigma or Value Stream Mapping from Lean. These tools are complementary and might be useful in complex business processes where data mining and root cause analysis are not enough for a complete as-is process analysis. Although the currently available process mining packages have been evolving in functionality, they still need to be more user-friendly, especially regarding data display techniques. Working with state-of-the-art algorithms, ProM is particularly useful for process discovery, but the resulting petri nets are not easy to interpret by the business user. Disco from Fluxicom is emerging as a user friendly package that provides more understandable visualization and animation tools. Providing adequate functionalities for nding missing data and outliers, SPSS and SAS are helpful and robust statistical packages when it comes to event log cleaning. Data mining analyses such as cluster and decision trees can be used with these applications. 
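The what-if analysis described above can also be prototyped outside a commercial package. The SimPy fragment below is a deliberately simplified illustration in which the activity durations are invented placeholders rather than the processing and waiting times actually mined from the log, and the two flags mimic improvement alternatives 1 and 2.

```python
import random
import simpy

APPROVAL_LIMIT_DAYS = 5      # scenario 2: business rule capping approval time
REMOVE_PO_APPROVAL = True    # scenario 1: drop the purchase order approval step

def purchase_order(env, results):
    start = env.now
    yield env.timeout(random.expovariate(1 / 2.0))            # requisition entry
    yield env.timeout(min(random.expovariate(1 / 12.0),       # requisition approval,
                          APPROVAL_LIMIT_DAYS))               # capped by the rule
    if not REMOVE_PO_APPROVAL:
        yield env.timeout(random.expovariate(1 / 8.0))        # purchase order approval
    yield env.timeout(random.expovariate(1 / 20.0))           # sourcing and delivery
    results.append(env.now - start)

def run_scenario(n_orders=4000, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    results = []
    for _ in range(n_orders):
        env.process(purchase_order(env, results))
    env.run()
    return sum(results) / len(results)

print("average cycle time (days):", run_scenario())
```

A more faithful model would additionally share buyers and approvers as simpy.Resource capacities, so that queueing effects, and not only activity durations, are reproduced.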
Conclusions and further research The present paper presents a methodological approach to process redesign that combines simulation techniques, data mining and process mining tools, as well as the tools of the understanding phase of the BPtrends methodology [START_REF] Harmon | Business Process Change[END_REF]. These tools and techniques are complementary to one another, and their integration contributes to achieving the goals set for each phase of the methodology. BPtrends tools are useful for the understanding phase of the project, in which the scope of the process is established, the gap analysis between as-is and to-be is performed, and the stakeholders agree on the expected performance of the business process. On the other hand, specific process mining techniques such as the alpha, heuristic or genetic algorithms allow both discovering the actual process model and checking for compliance with business rules and procedures. This model is used to construct the as-is simulation model. Data mining techniques such as decision trees and cluster analysis are useful for determining the variables that influence process cycle times and for determining the odds of executing a process within a certain time limit. Simulation benefits greatly from process mining, since the latter provides the parameters needed to construct the simulation model based on the real process model. Simulation makes it possible to test different process redesign alternatives before implementing them, thus becoming a valuable decision-making tool. A process redesign project requires more than a single tool to achieve the expected results. Although process mining provides tools for process diagnosis and analysis, it must be complemented with other methodologies and techniques, such as simulation and other process improvement tools, that allow understanding and planning the process redesign effort. The methodological approach proposed in this paper needs to be validated in other case studies reaching the implementation phase, in order to assess whether it meets the expected results. Further research is needed to determine how the event log source (ERP, WFMS, CRM) determines the necessary log extraction, transformation and cleansing activities. Table 3 (description of the phases and activities of the proposed methodology) summarizes each step. The aim of the problem description is to understand and gain consensus on the reasons why the process needs to be improved (customer complaints, compliance requirements, costs, and so on). In the process scope phase, the source, input, output and customer of the process are identified. The process stakeholders are established and interviewed to gain a better understanding of the performance desired for the process. The gap analysis identifies the actual process performance indicators (as-is) and establishes the desired process performance indicators (to-be) based on process vision, benchmarking and stakeholder expectations. The event log of the actual execution of the business process must be extracted from the information system (ERP, CRM, BPMS); the event log is then cleaned of missing data and transformed, so that it can be analyzed with data mining and process mining packages. By means of process mining algorithms such as alpha mining[START_REF] Van Der Aalst | Workflow mining: discovering process models from event logs[END_REF], heuristic mining[START_REF] Weijters | Flexible heuristics miner[END_REF] or genetic mining[START_REF] Medeiros | Genetic process mining: An experimental evaluation[END_REF], it is possible to automatically discover the actual process model from the event log using ProM or Disco software functionalities.
This model can be represented in a Petri net or in BPMN notation. The real process model allows visualizing bottlenecks, loops or lack of compliance. Data mining techniques are used to extract knowledge from process execution: techniques such as decision trees are used to discover the variables that have the greatest incidence on process delays, while social network analysis is useful for analyzing role interactions between the people executing the process, with the aim of finding either functional loops or key roles within the process. Root cause analysis is useful to examine the causes of the main problems discovered in the previous steps; this analysis is a simple way to organize and classify the list of possible causes, and requires the knowledge of the people participating in the execution of the process. Once the problems and causes are clear, the process improvement alternatives for the to-be process must be established to overcome the issues found in the as-is analysis phase (Phase II), together with a cost-benefit analysis. Fig. 4. Process scope analysis. Fig. 10. Simulation model of the procurement process. Table 1. Process mining components and their applications. Table 2. Contributions of simulation and process mining to the business process lifecycle — for the (re)design phase, the real process model discovered by process mining techniques is an important input for process design or redesign, while simulation makes it possible to perform what-if analyses of the different design or redesign options. Table 3. Description of the phases and activities of the proposed methodology (columns: Phase/Activity, Description, Tools; starting with Phase I: Project Understanding). Table 7. Average process cycle time, baseline scenario: 4,224 requests (total exits), average time in system 50.72 days against an expected average time of 30 days. Table 8. Process improvement alternatives. Simulation and what-if analysis The different improvement alternatives were simulated in corresponding scenarios. Scenario 1: removal of the purchase order approval process. Scenario 2: establishment of a business rule stating that approvers have a maximum of 5 days for purchase requisition approval. Scenario 3: Scenario 1 + Scenario 2. Table 9 shows the key findings of the simulation analysis. Scenario 1: if the purchase order approval process were removed, the cycle time could be reduced to 43 days (4,224 requests, average time in system 42.81 days). Scenario 2: if the university established a business rule specifying that approvers have a maximum of 5 days to approve purchase requisitions, the cycle time could be reduced to 40 days (average time in system 40.23 days). Scenario 3: through the simultaneous implementation of scenarios 1 and 2, the cycle time could be reduced to 35 days (average time in system 34.76 days). Table 9. Key findings of the simulation analysis. D. Phase IV: Implementation At this stage, the selected alternatives are implemented to improve the business process.
For this case study, scenario 3 presented in Table 9 allows decreasing the cycle time to 35 days, thus constituting the alternative that is going to be recommended to the University for implementation.
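As a rough companion to the Process Modeler experiments, the sketch below shows how the three scenarios could be prototyped with a general-purpose discrete-event simulation library. It is a simplified illustration only: the exponential activity durations (means of 12, 7 and 25 days) are placeholders rather than the distributions fitted from the event log, resource contention between buyers is not modeled, and representing the 5-day rule by capping the approval delay is an assumption.

```python
import random
import simpy

NUM_ORDERS = 4224            # cases in the analyzed event log
APPROVAL_LIMIT = 5           # scenario 2: at most 5 days for requisition approval
SKIP_PO_APPROVAL = True      # scenario 1: remove the purchase order approval step
random.seed(42)

def purchase(env, cycle_times):
    """One purchase requisition flowing through the redesigned process."""
    start = env.now
    approval = random.expovariate(1 / 12.0)            # requisition approval (placeholder mean)
    yield env.timeout(min(approval, APPROVAL_LIMIT))    # business rule caps the wait
    if not SKIP_PO_APPROVAL:
        yield env.timeout(random.expovariate(1 / 7.0))  # purchase order approval (placeholder)
    yield env.timeout(random.expovariate(1 / 25.0))     # ordering and delivery (placeholder)
    cycle_times.append(env.now - start)

env = simpy.Environment()
cycle_times = []
for _ in range(NUM_ORDERS):
    env.process(purchase(env, cycle_times))
env.run()

print("simulated mean cycle time (days):", round(sum(cycle_times) / len(cycle_times), 1))
```

Toggling SKIP_PO_APPROVAL and APPROVAL_LIMIT reproduces the scenario 1/2/3 comparison in spirit, although the absolute numbers depend entirely on the input distributions actually fitted from the log.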
34,829
[ "1002430", "1002431", "1002432" ]
[ "51187", "51187", "51187" ]
01474689
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01474689/file/978-3-642-40919-6_3_Chapter.pdf
J C A M Buijs M La H A Reijers email: [email protected] B F Van Dongen email: [email protected] W M P Van Der Aalst Improving Business Process Models using Observed Behavior Process-aware information systems (PAISs) can be configured using a reference process model, which is typically obtained via expert interviews. Over time, however, contextual factors and system requirements may cause the operational process to start deviating from this reference model. While a reference model should ideally be updated to remain aligned with such changes, this is a costly and often neglected activity. We present a new process mining technique that automatically improves the reference model on the basis of the observed behavior as recorded in the event logs of a PAIS. We discuss how to balance the four basic quality dimensions for process mining (fitness, precision, simplicity and generalization) and a new dimension, namely the structural similarity between the reference model and the discovered model. We demonstrate the applicability of this technique using a real-life scenario from a Dutch municipality. Introduction Within the area of process mining several algorithms are available to automatically discover process models. By only considering an organization's records of its operational processes, models can be derived that accurately describe the operational business processes. Organizations often use a reference process model, obtained via expert interviews, to initially configure a process. During execution however the operational process typically starts deviating from this reference model, for example, due to new regulations that have not been incorporated into the reference model yet, or simply because the reference model is not accurate enough. Process mining techniques can identify where reality deviates from the original reference model and especially how the latter can be adapted to better fit reality. Not updating the reference model to reflect new or changed behavior has several disadvantages. First of all, such a practice will overtime drastically diminish the reference model's value in providing a factual, recognizable view on how work is accomplished within an organization. Second, a misaligned reference model cannot be used to provide operational support in the form of, e.g., predictions or recommendations during the execution of a business process. A straightforward approach to fix the misalignment between a reference model and reality is to simply discover a new process model from scratch, using automated process discovery techniques from process mining [START_REF] Van Der Aalst | Process Mining: Discovery, Conformance and Enhancement of Business Processes[END_REF]. The resulting model may reflect reality better but may also be very different from the initial reference model. Business analysts, process owners and other process stakeholders, may heavily rely on the initial reference model to understand how a particular process functions. Confronting them with an entirely new model may make it difficult for them to recognize its original, familiar ingredients and understand the changes in the actual situation. As a result, a freshly discovered process model may actually be useless in practice. In this paper, we propose to use process mining to discover a process model that accurately describes an existing process yet is very similar to the initial reference process model. 
To explain our approach, it is useful to reflect on the four basic quality dimensions of the process model with respect to the observed behavior [START_REF] Van Der Aalst | Process Mining: Discovery, Conformance and Enhancement of Business Processes[END_REF][START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF] (cf. Figure 1a). The replay fitness dimension quantifies the extent to which the discovered model can accurately replay the cases recorded in the log. The precision dimension measures whether the discovered model prohibits behavior which is not seen in the event log. The generalization dimension assesses the extent to which the resulting model will be able to reproduce possible future, yet unseen, behavior of the process. The complexity of the discovery process model is captured by the simplicity dimension, which operationalizes Occam's Razor. Following up on the idea to use process mining for aligning reference process models to observed behaviors, we propose to add a fifth quality dimension to this spectrum: similarity to a given process model. By incorporating this dimension, we can present a discovered model that maximizes the four dimensions while remaining aligned, as far as possible, with the intuitions and familiar notions modeled in a reference model. "able to replay event log" "Occam's razor" "not overfitting the log" "not underfitting the log" Figure 1b illustrates the effects of introducing this additional dimension. By setting a similarity boundary, the search for a model that balances the initial four quality dimensions is restrained. In this way, a new version of the reference model can be found that is similar to the initial reference model yet is improved with respect to its fit with actual behavior. Clearly, if the similarity boundary is being relaxed sufficiently (i.e. the discovered model is allowed to deviate strongly from the reference model), it is possible to discover the optimal process model. Such an optimal model, as explained, may not be desirable to use for process analysts and end users as a reference point, since they may find it difficult to recognize the original process set-up within it. The remainder of the paper is structured as follows. In Section 2 we present related work in the area of process model improvement and process model repair. In Section 3 we present our approach using a genetic algorithm to balance the different quality dimensions while in Section 4 we show how to incorporate the similarity dimension in our approach. In Section 5 we show the results of applying our technique to a small example. In Section 6 the technique is applied to a real life case. Finally, Section 7 concludes the paper. Related Work Automatically improving or correcting process models using different sources of information is an active research area. Li et. al. [START_REF] Li | The minadept clustering approach for discovering reference process models out of process variants[END_REF] discuss how a reference process model can be discovered from a collection of process model variants. In their heuristic approach they consider the structural distance of the discovered reference model to the original reference model as well as the structural distance to the process variants. By balancing these two forces they make certain changes to the original reference model to make it more similar to the collection of process model variants. 
Compared to our approach, here the starting point is a collection of process variants, rather than a log. An approach aimed to automatically correct errors in an unsound process model (a process model affected by behavioral anomalies) is presented by Gambini et. al. [START_REF] Gambini | Automated Error Correction of Business Process Models[END_REF]. Their approach considers three dimensions: the structural distance, behavioral distance and 'badness' of a solution w.r.t. the unsound process model, whereby 'badness' indicates the ability of a solution to produce traces that lead to unsound behavior. The approach uses simulated annealing to simultaneously minimize all three dimensions. The edits applied to the process model are aimed to correct the model rather than to balance the five different forces. Detecting deviations of a process model from the observed behavior has been researched, among others, by Adriansyah et. al. [START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF][START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF]. Given a process model and an event log, deviations are expressed in the form of skipped activities (activities that should be performed according to the model, but do not occur in the log) and inserted activities (activities that are not supposed to happen according to the model, but that occur in the log). A cost is attributed to these operations based on the particular activity being skipped/inserted. Based on this information an alignment can be computed between the process model and the log, which indicates how well the process model can describe the recorded behavior. While this approach provides an effective measure for the replay fitness quality dimension of Figure 1a, the approach per se does not suggest any corrections to rectify the process model's behavior. The work of Fahland et. al. [START_REF] Fahland | Repairing Process Models to Reflect Reality[END_REF] provides a first attempt at repairing process models based on observed behavior. In their notion, a process model needs repair if the observed behavior cannot be replayed by the process model. This is detected using the alignment between the process model and the observed behavior of [START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF][START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF]. The detected deviations are then repaired by extending the process model with sub-processes nested in a loop block. These fixes are applied repeatedly until a process model is obtained that can perfectly replay the observed behavior. This approach extends the original process model's behavior by adding new fragments that enable the model to replay the observed behavior (no existing fragments are removed). The main disadvantage of this approach is that only one aspect of deviation, namely that of not being able to replay the observed behavior, is considered. Moreover, since repairs add transitions to the model, by definition, the model can only become more complex and less precise. It is unclear how to balance all five quality dimensions by extending the work in [START_REF] Fahland | Repairing Process Models to Reflect Reality[END_REF]. 
Our Mining Technique In this section we briefly introduce our flexible evolutionary algorithm first presented in [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF]. This algorithm can seamlessly balance four process model quality dimensions during process discovery. Process Trees Our approach internally uses a tree structure to represent process models. Because of this, we only consider sound process models. This drastically reduces the search space thus improving the performance of the algorithm. Moreover, we can apply standard tree change operations on the process trees to evolve them further, such as adding, removing and updating nodes. Figure 2 shows the possible operators of a process tree and their translation to a Petri net. A process tree contains operator nodes and leaf nodes. An operator node specifies the relation between its children. Possible operators are sequence (→), parallel execution (∧), exclusive choice (×), non-exclusive choice (∨) and loop execution ( ). The order of the children matters for the sequence and loop operators. The order of the children of a sequence operator specifies the order in which the children are executed (from left to right). For a loop, the left child is the 'do' part of the loop. After the execution of this part the right child, the 'redo' part, might be executed. After this execution the 'do' part is again enabled. The loop in Figure 2 for instance is able to produce the traces A , A, B, A , A, B, A, B, A and so on. Existing process models can be translated to the process tree notation, possibly by duplicating activities. Quality Dimensions To measure the quality of a process tree, we consider one metric for each of the four quality dimensions, as we proposed in [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF]. We base these metrics on existing work in each of the four areas [START_REF] Van Der Aalst | Replaying History on Process Models for Conformance Checking and Performance Analysis[END_REF][START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] and we adapt them for process trees, as discussed below. For the formalization of these metrics on process trees we refer to [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF]. Replay fitness quantifies the extent to which the model can reproduce the traces recorded in the log. We use an alignment-based fitness computation defined in [START_REF] Adriansyah | Conformance Checking using Cost-Based Fitness Analysis[END_REF] to compute the fitness of a process tree. Basically, this technique aligns as many events as possible from the trace with activities in an execution of the model (this results in a so-called alignment). If necessary, events are skipped, or activities are inserted without a corresponding event present in the log. Penalties are given for skipping and inserting activities. The total costs for the penalties are then normalized, using information on the maximum possible costs for this event log and process model combination, to obtain a value between 1 (perfect) and 0 (bad). Precision compares the state space of the tree execution while replaying the log. Our metric is inspired by [START_REF] Adriansyah | Alignment Based Precision Checking[END_REF] and counts so-called escaping edges, i.e. decisions that are possible in the model, but never made in the log. 
If there are no escaping edges, the precision is perfect. We obtain the part of the state space used from information provided by the replay fitness, where we ignore events that are in the log but do not correspond to an activity in the model according to the alignment. Generalization considers the frequency with which each node in the tree needs to be visited if the model is to produce the given log. For this we use the alignment provided by the replay fitness. If a node is visited more often, then we are more certain that its behavior is (in)correct. If some parts of the tree are visited very infrequently, generalization is bad. Simplicity quantifies the complexity of the model. Simplicity is measured by comparing the size of the tree with the number of activities in the log. This is based on the finding that the size of a process model is the main factor for perceived complexity and introduction of errors in process models [START_REF] Mendling | Detection and Prediction of Errors in EPCs of the SAP Reference Model[END_REF]. Fig. 2: Relation between process trees and block-structured Petri nets. Furthermore, since we internally use binary trees, the number of leaves of the process tree has a direct influence on the number of operator nodes. Thus, the tree in which each activity is represented exactly once is considered to be as simple as possible. The four metrics above are computed on a scale from 0 to 1, where 1 is optimal. Replay fitness, simplicity and precision can reach 1 as optimal value. Generalization can only reach 1 in the limit, i.e., the more frequently the nodes are visited, the closer the value gets to 1. The flexibility required to find a process model that optimizes a weighted sum over the four metrics can efficiently be implemented using a genetic algorithm. The ETM Algorithm In order to be able to seamlessly balance the different quality dimensions we implemented the ETM algorithm (which stands for Evolutionary Tree Miner). In general, this genetic algorithm follows the process shown in Figure 3. The input of the algorithm is an event log describing the observed behavior and, optionally, one or more reference process models. First, the different quality dimensions for each candidate currently in the population are calculated and, using the weight given to each dimension, the overall fitness of the process tree is computed. In the next step certain stop criteria are tested, such as finding a tree with the desired overall fitness or exceeding a time limit. If none of the stop criteria are satisfied, the candidates in the population are changed and the fitness is calculated again. This continues until at least one stop criterion is satisfied, and the best candidate (highest overall fitness) is then returned. The genetic algorithm has been implemented as a plug-in for the ProM framework [START_REF] Verbeek | XES, XESame, and ProM 6[END_REF]. We used this implementation for all experiments presented in this paper. The algorithm stops after 1,000 generations, or sooner if a candidate with perfect overall fitness is found before that. In [START_REF] Joos | A Genetic Algorithm for Discovering Process Trees[END_REF] we empirically showed that 1,000 generations are typically enough to find the optimal solution, especially for processes with few activities. All other settings were selected according to the optimal values presented in [START_REF] Joos | A Genetic Algorithm for Discovering Process Trees[END_REF].
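The selection step of this genetic algorithm boils down to ranking candidate trees by a weighted combination of the four scores. The sketch below illustrates that idea in Python; it is not the ProM plug-in, and whether the weighted sum is normalized by the total weight is an assumption here. The two example score vectors are taken from Table 2 (the trees found with similarity weights 100 and 10).

```python
from dataclasses import dataclass

@dataclass
class QualityScores:
    replay_fitness: float   # 1.0 = the log can be replayed perfectly
    precision: float        # 1.0 = no escaping edges
    generalization: float   # approaches 1.0 when nodes are visited frequently
    simplicity: float       # 1.0 = every activity appears exactly once in the tree

def overall_fitness(q, w_f=10.0, w_p=1.0, w_g=1.0, w_s=1.0):
    """Weighted combination of the four dimensions (weights as used in the experiments)."""
    total = w_f + w_p + w_g + w_s
    return (w_f * q.replay_fitness + w_p * q.precision
            + w_g * q.generalization + w_s * q.simplicity) / total

# Candidate trees from the running example (scores as reported in Table 2).
candidates = {
    "similarity weight 100": QualityScores(0.880, 1.000, 0.668, 0.737),
    "similarity weight 10":  QualityScores(1.000, 0.885, 0.851, 0.737),
}
best = max(candidates, key=lambda name: overall_fitness(candidates[name]))
print(best, round(overall_fitness(candidates[best]), 3))
```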
Similarity as the 5th Dimension In order to extend our ETM algorithm for process model improvement we need to add a metric to measure the similarity of the candidate process model to the reference process model. Similarity of business process models is an active area of research [START_REF] Van Der Aalst | Process Equivalence: Comparing Two Process Models Based on Observed Behavior[END_REF][START_REF] Dijkman | Similarity of Business Process Models: Metrics and Evaluation[END_REF][START_REF] Van Dongen | Measuring Similarity between Business Process Models[END_REF][START_REF] Jin | Efficient retrieval of similar business process models based on structure[END_REF][START_REF] Kunze | Behavioral Similarity -A Proper Metric[END_REF][START_REF] Rosa | Business Process Model Merging: An Approach to Business Process Consolidation[END_REF][START_REF] Li | The minadept clustering approach for discovering reference process models out of process variants[END_REF][START_REF] Zha | A Workflow Net Similarity Measure based on Transition Adjacency Relations[END_REF]. We distinguish two types of similarity: i) behavioral similarity and ii) structural similarity. Approaches focusing on behavioral similarity, e.g. [START_REF] Van Der Aalst | Process Equivalence: Comparing Two Process Models Based on Observed Behavior[END_REF][START_REF] Dijkman | Similarity of Business Process Models: Metrics and Evaluation[END_REF][START_REF] Van Dongen | Measuring Similarity between Business Process Models[END_REF][START_REF] Kunze | Behavioral Similarity -A Proper Metric[END_REF][START_REF] Zha | A Workflow Net Similarity Measure based on Transition Adjacency Relations[END_REF], encode the behavior described in the two process models to compare using different relations. Examples are causal footprints [START_REF] Van Dongen | Measuring Similarity between Business Process Models[END_REF], transition adjacency relations [START_REF] Zha | A Workflow Net Similarity Measure based on Transition Adjacency Relations[END_REF], or behavioral profiles [START_REF] Kunze | Behavioral Similarity -A Proper Metric[END_REF]. By comparing two process models using such relations, it is possible to quantify behavioral similarity in different ways. Approaches focusing on structural similarity only consider the graph structure of models and abstract from the actual behavior, e.g., heuristic approaches like [START_REF] Li | The minadept clustering approach for discovering reference process models out of process variants[END_REF], only focus on the number of common activities ignoring the connecting arcs, or vice versa, ignore the actual activities to only consider the arcs. Most approaches [START_REF] Dijkman | Similarity of Business Process Models: Metrics and Evaluation[END_REF][START_REF] Jin | Efficient retrieval of similar business process models based on structure[END_REF][START_REF] Rosa | Business Process Model Merging: An Approach to Business Process Consolidation[END_REF] provide a similarity metric based on the minimal number of edit operations required to transform one model into the other model, where an edit is either a node or an arc insertion/removal. Both behavioral and structural similarity approaches first require a suitable mapping of nodes between the two models. This mapping can be best achieved by combining techniques for syntactic similarity (e.g. using string-edit distance) with techniques for linguistic similarity (e.g. 
using synonyms) [START_REF] Dijkman | Similarity of Business Process Models: Metrics and Evaluation[END_REF]. Our algorithm only needs to consider the structural similarity, since the event log already captures the behavior that the process model should describe. Recall that the behavior of the reference model w.r.t. the logs is already measured by means of the four mining dimensions discussed above. Hence, we use structural similarity to quantify the fifth dimension. Tree Edit Distance as a Metric for Similarity Since we use process trees as our internal representation, similarity between two process trees can be expressed by the tree edit distance for ordered trees. The tree edit distance indicates how many simple edit operations (add, remove and change) need to be made to nodes in one tree in order to obtain the other tree. Since the other four quality metrics are normalized to values between 0 and 1, we need to do the same for the edit distance. This is easily done by making the number of edits relative to the sum of the sizes of both trees. The similarity score, finally, is calculated as 1 minus the edit distance ratio. Hence, a similarity score of 1.000 means that the process model is the same as the reference model. Figure 4 shows examples of each of the three edit operations. The reference tree is shown in Figure 4a. Figure 4b shows the result after deleting activity B from the tree. Our trees are binary trees, meaning that each non-leaf node has exactly 2 children. Therefore, the × operator node is also removed. The removal of B from the tree results in an edit distance of 2. The similarity is 1 − 2/(5+3) = 0.75. The process tree shown in Figure 4c has activity D added in parallel to activity A. This also results in 2 edits, since a new ∧ operator node needs to be added, including a leaf for activity D. Since the resulting tree has grown, the relative edit distance is less than when part of the tree is deleted. Finally, changing a node as shown in Figure 4d, where the root → operator is changed into an ∧ operator, only requires 1 edit operation. We use the Robust Tree Edit Distance (RTED) algorithm [START_REF] Pawlik | RTED: A Robust Algorithm for the Tree Edit Distance[END_REF] to calculate the edit distance between two ordered trees. The RTED approach first computes the optimal strategy to use for calculating the edit distance. It then calculates the edit distance using that strategy. Since the overhead of determining the optimal strategy is minimal, this ensures the best performance and memory consumption, especially for larger trees. However, it is important to realize that our approach is not limited to the RTED algorithm. Furthermore, although in this paper we only consider a single distance metric, it is possible to incorporate multiple metrics (for example, looking at both structural and behavioral similarity). Table 1. The event log (trace, frequency): ⟨A,B,C,D,E,G⟩ 6; ⟨A,B,C,D,F,G⟩ 38; ⟨A,B,D,C,E,G⟩ 12; ⟨A,B,D,C,F,G⟩ 26; ⟨A,B,C,F,G⟩ 8; ⟨A,C,B,E,G⟩ 1; ⟨A,D,B,C,F,G⟩ 1; ⟨A,D,B,C,E,G⟩ 1; ⟨A,D,C,B,F,G⟩ 4; ⟨A,C,D,B,F,G⟩ 2; ⟨A,C,B,F,G⟩ 1. Experimental Evaluation Throughout this section we use a small example to explain the application and use of our approach. Figure 5 describes a simple loan application process of a financial institute which provides small consumer credits through a webpage. The figure shows the process as it is known within the company. When a potential customer fills in a form and submits the request from the website, the process starts by executing activity A, which notifies the customer of the receipt of the request.
Next, according to the process model, there are two ways to proceed. The first option is to start with checking the credit (activity B), followed by calculating the capacity (activity C), checking the system (activity D) and rejecting the application by executing activity F. The other option is to start with calculating the capacity (activity C), after which another choice is possible. If the credit is checked (activity B), then finally the application is rejected (activity F). The remaining option is the only one resulting in executing E, concerned with accepting the application: here activity D follows activity C, after which activity B is executed, and finally activity E follows. In all three cases the process ends with activity G, which notifies the customer of the decision made. However, the observed behavior, as recorded in the event log shown in Table 1, deviates from this process model. The event log contains 11 different traces whereas the original process model only allows for 3 traces, i.e., modeled and observed behavior differ markedly. To demonstrate the effects of incorporating the similarity between process trees, we run the extended ETM algorithm on the example data of Table 1. In [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF] we showed that, on this data set, the optimal weights are 10 for replay fitness and 1 for precision, generalization and simplicity. In the first experiment (Section 5.1), we only change the similarity weight to vary the amount of change we allow. In the second experiment (Section 5.2) we fix the weight for similarity and ignore each of the other four quality dimensions, one at a time. The experiment settings and their results are shown in Table 2. Varying the similarity weight Figure 6a shows the process tree that is discovered when giving the similarity a weight of 100. The similarity ratio is 1.000, indicating that no change has taken place. Apparently no change in the tree would improve the other dimensions enough to be beneficial. If we reduce the similarity weight to 10, the process tree shown in Figure 6b is discovered. Three edits have been applied: in the bottom-right part of the tree two → and an × operator have been changed to ∧ and ∨. This allows for more behavior, as is indicated by the increase in replay fitness of 0.220. Also, generalization increased by 0.183, at the cost of a decrease in precision of 0.115. If we lower the weight of the similarity to 1, we get the process tree shown in Figure 6c. This process tree requires 12 edits starting from the original tree and is very different from the process tree we started with. However, compared to the previous process tree, the other 4 quality dimensions have improved overall. Replay fitness has now reached a value of 1.000 since this process tree allows skipping activity D. Also, simplicity reached 1.000 since no activities are duplicated or missing. Finally, reducing the similarity weight to 0.1 provides us with the process tree shown in Figure 6d, which is also the process tree that would be found when no initial process tree has been provided, i.e., pure discovery. The only improvement w.r.t. the previous tree is the slight increase in precision. However, the tree looks significantly different. The resemblance to the original tree is small, as indicated by a similarity of 0.693, caused by the 13 edits required to the original model.
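The similarity score used in these experiments can be written down in a couple of lines: one minus the number of tree edits divided by the combined size of the two trees. The sketch below only wraps that formula; the edit distance itself would come from an ordered-tree edit distance routine such as an RTED implementation, which is not reproduced here.

```python
def similarity(edits, size_reference, size_candidate):
    """Similarity = 1 - edit distance / (size of reference tree + size of candidate tree)."""
    return 1.0 - edits / (size_reference + size_candidate)

# Example from Figure 4b: deleting activity B (and its x operator) from the
# 5-node reference tree leaves a 3-node tree, i.e. 2 edits.
print(similarity(edits=2, size_reference=5, size_candidate=3))   # -> 0.75
```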
Ignoring One Quality Dimension In [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF] we showed that ignoring one of the four quality dimensions in general does not produce meaningful process models. However, many of these undesirable and extreme models are avoided by considering similarity. To demonstrate this we set the similarity weight to 10. The other weights are the same as in the previous experiment: 10 for fitness, 1 for the rest. We then ignore one dimension in each experiment. The results are shown in Figure 7. Ignoring the fitness dimension results in the process tree shown in Figure 7a. No changes were made, demonstrating that no improvement could be made on the other three dimensions that was worth the edit. If precision is ignored, the result is the process tree shown in Figure 7b. Replay fitness and generalization improved by applying 3 edits. For the tree of Figure 6b, where we used the same similarity weight but included precision, only 1 edit was allowed. By removing the restriction on precision, it is worth applying more edits to improve replay fitness and generalization. We do not see this effect as strongly when we ignore generalization or simplicity. The resulting process trees, shown in Figure 7c and Figure 7d, are very similar to the original one, with only 1 edit. This experiment shows that considering similarity to a reference process model avoids the extreme cases encountered in [START_REF] Buijs | On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery[END_REF]. Application in Practice Within the context of the CoSeLoG project, we are collaborating with ten Dutch municipalities that are facing the problem addressed in this paper. 1 The municipalities have implemented case management support, using a particular reference model. Now they are deriving new, possibly shared, reference models because they want to align each model with their own real process and the real processes in the municipalities they are collaborating with. One of the municipalities participating in the CoSeLoG project recently started looking at one of their permit processes. The reference model used in the implementation was very detailed, with many checks that the employees in practice did not always do (usually with good reasons). Therefore, they were interested in a model that looks similar to the original reference model, but still shows most of the behavior actually observed. For this we applied our technique to discover different variants of the process model, focusing on different quality combinations, while maintaining the desired similarity to the reference model. Fig. 8(a): similarity weight x1000 — sim 1.000, 0 edits, f 0.744, p 0.785, s 0.755, g 0.528. Fig. 8(b): similarity weight x100 — sim 0.990, 1 edit, f 0.858, p 0.799, s 0.792, g 0.566. Fig. 8(c): similarity weight x10 — sim 0.942, 6 edits, f 0.960, p 0.770, s 0.815, g 0.685. For this application we only used the first part of the process, which contains a total of 27 different activity labels, which we anonymized using letters from A to Z plus AA.
The experiment settings and their results are shown in Table 3. We experimented with fixed weights for the original four quality dimensions, and only changed the weight for the similarity to the reference model. The results confirm the intuition that reducing the weight of the similarity dimension allows more edits to be made (cf. the quality part of the table). In general, we also see that by allowing more edits, most of the quality dimensions improve. Of course, there is a trade-off between the different dimensions. Since we weight the replay fitness dimension 10 times more than the others, we see that this dimension always improves, sometimes at the cost of the other dimensions. Figure 8a shows the process tree that is discovered using a similarity weight of 1000. This is the same process tree as the one created from the process model provided by the municipality. All four quality dimensions are relatively low and many improvements are possible. However, none of these improvements were applied, since a single change would need to drastically improve the process tree to be worth it. If we set the similarity weight to 100 we obtain the process tree of Figure 8b. Here one edit has been made, namely the left-most activity leaf node has been changed from τ to L. This single edit causes all four quality dimensions to improve, especially replay fitness. The original process model used a significantly different activity name than the one present in the event log, which was translated to a τ in the process tree. If we further relax the importance of similarity by using a weight of 10, we obtain the process tree of Figure 8c. Here 6 edits have been made from the original model. The root node has changed to an ∨ to allow more behavior. The left branch of the root node also changed to allow more behavior, better suiting the recorded behavior. Also the operator node of activities I and J changed, as well as the operator of their grandparent node. It appears that the event log contains a lot of short traces, only containing activities from the first part of the process. Some traces even contain activity L only. All the dimensions have improved after these changes, except precision, which has slightly decreased. Further relaxing the similarity we obtain the process trees of Figure 8d (weight of 1, 42 changes) and Figure 8e (weight of 0.1, 83 changes). Both these models have little to do with the original reference model. At the same time, the quality of these two process trees with respect to the log did not improve much, while their appearance changed drastically. Therefore, for this experiment, we propose the process tree of Figure 8c as the improved version of the reference model. By applying only 6 edits the process model has improved significantly, mainly on replay fitness (from 0.744 to 0.960), while still showing a great resemblance to the original reference model. Conclusion In this paper, we proposed a novel process mining algorithm that improves a given reference process model using observed behavior, as extracted from the event logs of an information system.
A distinguishing feature of the algorithm is that it takes into account the structural similarity between the discovered process model and the initial reference process model. The proposed algorithm is able to improve the model with respect to the four basic quality aspects (fitness, precision, generalization and simplicity) while remaining as similar as possible to the original reference model (the 5th dimension). The relative weights of all five dimensions can be configured by the user, thus guiding the discovery/modification procedure. We demonstrated the feasibility of the algorithm through various experiments and illustrated its practical use within the CoSeLoG project. A limitation of this paper is that it assumes that the deviations discovered from the logs are always legitimate. Indeed, some process deviations do reflect an evolving business process due to new acceptable practices or regulations, and as such should be accommodated into the reference model. However, some other deviations may be the result of non-compliance or be caused by an inefficient execution. These undesirable deviations should be isolated and discarded in order to prevent bad practices from becoming a part of the reference model. In future work, we plan to implement a more fine-grained control on the different costs for edit actions on different parts of the process model. For example, edits on operators may have lower costs than edits on labels. In this way we can, for instance, restrict our changes to extensions of the original reference model, and prevent existing parts of the model from being changed. Also, pre-defined domain-specific constraints which the process model should adhere to can be fixed in this way. However, while these techniques may help produce better results, the identified deviations still need to be validated by a domain expert before making their way into the reference model. Only in this way can we ensure that false positives are properly identified. Finally, we plan to conduct an empirical evaluation of the understandability of the process models discovered using our algorithm, as perceived by domain experts, and compare the results with those obtained with other process mining algorithms, which ignore the similarity dimension. Fig. 1: Adding similarity as a process model quality dimension. Fig. 3: The different phases of the genetic algorithm. Fig. 4: Examples of possible edits on a tree (a) and respective similarities. Fig. 5: Petri net of a loan application process (A = send e-mail, B = check credit, C = calculate capacity, D = check system, E = accept, F = reject, G = send e-mail). Fig. 6: Varying similarity weight. Fig. 7: Ignoring one dimension. Fig. 8: Process trees for the municipality process. Fig. 8: Process trees for the municipality process (cont'd). Table 1: The event log. Table 2: Different weight combinations and the resulting fitness values for the simple example; weights (Sim, f, p, g, s) followed by quality (Sim, edits, f, p, g, s): 100/10/1/1/1 -> 1.000, 0, 0.880, 1.000, 0.668, 0.737; 10/10/1/1/1 -> 0.935, 3, 1.000, 0.885, 0.851, 0.737; 1/10/1/1/1 -> 0.667, 12, 1.000, 0.912, 0.889, 1.000; 0.1/10/1/1/1 -> 0.639, 13, 1.000, 0.923, 0.889, 1.000; 10/0/1/1/1 -> 1.000, 0, 0.880, 1.000, 0.668, 0.737; 10/10/0/1/1 -> 0.935, 3, 1.000, 0.849, 0.851, 0.737; 10/10/1/0/1 -> 0.978, 1, 0.951, 0.992, 0.632, 0.737; 10/10/1/1/0 -> 0.935, 3, 1.000, 0.885, 0.851, 0.737.
Table 3: Different weight combinations and the resulting fitness values for the practice application; weights (Sim, f, p, g, s) followed by quality (Sim, edits, f, p, g, s): 1000/10/1/1/1 -> 1.000, 0, 0.744, 0.785, 0.528, 0.755; 100/10/1/1/1 -> 0.990, 1, 0.858, 0.799, 0.566, 0.792; 10/10/1/1/1 -> 0.942, 6, 0.960, 0.770, 0.685, 0.815; 1/10/1/1/1 -> 0.650, 42, 0.974, 0.933, 0.747, 0.613; 0.1/10/1/1/1 -> 0.447, 83, 0.977, 0.862, 0.721, 0.519. 1 See http://www.win.tue.nl/coselog
38,623
[ "1002433", "1002434", "995083", "871257" ]
[ "4629", "93528", "93528", "486526", "4629", "4629", "4629" ]
01474690
en
[ "info" ]
2024/03/04 23:41:46
2012
https://inria.hal.science/hal-01474690/file/978-3-642-40919-6_4_Chapter.pdf
Sjoerd Van Der Spoel email: [email protected] Maurice Van Keulen email: [email protected] Chintan Amrit email: [email protected] Process Prediction in Noisy Data Sets: A Case Study in a Dutch Hospital Keywords: process prediction, process mining, classification, cash flow prediction, data noise, case study Predicting the amount of money that can be claimed is critical to the effective running of an Hospital. In this paper we describe a case study of a Dutch Hospital where we use process mining to predict the cash flow of the Hospital. In order to predict the cost of a treatment, we use different data mining techniques to predict the sequence of treatments administered, the duration and the final "care product" or diagnosis of the patient. While performing the data analysis we encountered three specific kinds of noise that we call sequence noise, human noise and duration noise. Studies in the past have discussed ways to reduce the noise in process data. However, it is not very clear what effect the noise has to different kinds of process analysis. In this paper we describe the combined effect of sequence noise, human noise and duration noise on the analysis of process data, by comparing the performance of several mining techniques on the data. Introduction In the Netherlands, insurance companies play an important role in settling the finances of medical care. Hospitals and other care providers claim their costs for treating a patient with the patient's insurance company, who then bill their customer if there are costs not covered by the insurance. Starting January 1st, 2012, Dutch hospitals are required to use a new system, called "DOT", for claiming their costs for treating patients. The central principle of DOT is that the amount of the claim is based on the actual care provided, which is only known after the treatment has finished. Previously, the amount of the claim was based on the diagnosis for which average costs would be determined by negotiation between the government, hospitals and insurance companies. This change confronted the hospitals with a finance management problem. Whereas they previously could claim costs already after diagnosis, they now have to wait until treatment has finished. Since it is unknown ahead of time when treatments will finish, it also unknown when the hospital can expect cash flow for their treatment processes and how large those cash flows will be. Cash flow prediction for treatment processes seems like a problem that a combination of data and process mining could well help to solve. A treatment process of one patient, called a care path, can be seen as a sequence of activities which determines the so-called care product with an associated cost. Based on a data set with all details about completed care paths and associated care products, one could develop a predictor that, given only the start of such a path, could predict the rest of the path and its associated duration and cost. This paper describes our experiences and results with this case study in data and process mining. The paper specifically addresses a problem with three kinds of noise in our data that significantly affected our results. It is of course common knowledge that noise in the underlying data may affect the result of any kind of data mining including process mining. We encountered, however, three different kinds of noise that were rather specific to our process mining and that were so prevalent that known solutions from literature did not apply. 
We call these kinds of noise (a) sequence noise meaning errors in or uncertainty about the order of events in an event trace, in our case study the order of activities in a care path, (b) duration noise meaning noise arising from missing or wrong time stamps for activities and variable duration between activities. (c) human noise meaning noise from human errors such as activities in a care path which were the result of a wrong or faulty diagnosis or from a faulty execution of a treatment or procedure. The reason why sequence uncertainty was so prevalent in our case study is as follows. During the day, a medical specialist typically sees many patients. During a consult or treatment, however, it is often too disruptive for the specialist to update the patient's electronic dossier. It is common practice that (s)he updates the dossiers at the end of day or even later. As a consequence, the modification time of a patient's dossier (which is what is recorded) does not reflect the actual moment of the activity, and it often does not even reflect the order in which the activities took place during that day. Furthermore, it also common that patients undergo several activities and see several specialists on the same day. For example, a hospitalized patient may receive a visit from a specialist doing his/her rounds, receive medication, undergo surgery, results from a blood analysis may be finished, all on the same day. In the case study data sets, we have averages of about 2.5 to 10 activities per day. This inherent noisy timestamp problem and its magnitude causes major sequence noise in the underlying data. Furthermore, the noisy timestamp issue also causes the duration noise, as it makes the calculation of the duration between activities very noisy and error prone. Also, much of the duration noise comes from the fact that in our case study data two activities are considered by the client to be consecutive even if they are separated by many days or weeks, as long as they are part of the same treatment and no other activities have occurred in between them. On the other hand, human noise arises from human errors which include (i) Noise due to a wrong or faulty diagnosis -this leads to a faulty extra series of process steps at the beginning of the treatment process, (ii) Noise due to faulty execution of the treatment and/or erroneous procedures that need to be repeated. So, our data contains large amounts of sequence noise + human noise+ duration noise and when combined, these make the prospect of noise removal very complicated and difficult to perform. Hence, in our paper we proceed by analysing the noisy data to demonstrate what kind of results one can expect with different data mining analysis techniques. Contribution The contributions of this paper are: -A case study of applying data and process mining for cash flow prediction in a Dutch hospital. -Accuracy results for different prediction tasks: given a diagnosis and start of a care path, predict the rest of the path, the duration of the path, the final care product, and the associated cash flow. -Experimental comparison of the performance of several mining techniques on these tasks. -Analysis and discussion on the effects of sequence and human noise, both are kinds of data noise specific for process mining which have received little attention in literature. Case study Starting January 1st, 2012, Dutch hospitals will use a new system for claiming the costs they make for treating patients. 
Hospitals claim these costs at patient's insurance companies, who then bill their customers. DOT registration system The new registration and claiming system addresses the problem that actual costs are unequal to what is charged, in an effort to increase the transparency of cost calculation for provided care. The system is referred to as DOT, which stands for "DBCs towards transparency". A DBC is a combination of diagnosis (D) and treatment (Dutch: behandeling, B). The central principle of DOT is to decide the care product based on the care provided. Whereas, in the previous registration system the care product was based solely on the diagnosis. A treatment path in the DOT system consists of the following: -Care product and associated care product code: This is the name and code given to the treatment performed, which is made up one or more sub paths (Care types). -Sub path: Is a sequence of activities performed (with one or more Activity types) in a treatment sequence -Care type and associated care type code: Is the specific name and code associated with the sub path -Activity type and associated activity type code: Are the activities performed for a particular care type in a sub path The hospital data is structured into several tables, of which two are important in this case: Activities and Sub paths. An activity represents an item of work, like surgery or physical therapy, but also, days spent in hospital are considered activities. Groups of activities that are performed to treat the individual patient's diagnosis are called sub paths. They are called sub paths because multiple moreor-less independent sub paths make up the total care path for a patient: the path from the patient registering some complaint to being fully treated. Care paths do not have care products, but sub paths do. Table 1 shows the structure of the activity data, table 2 shows the same for the sub path data. A care product has an associated cost, which is claimed at a patient's insurance company. To derive which care product a patient has received, the DOT system uses a system called the grouper. This grouper consists of rules that specify how care products are derived from performed activities. In the DOT methodology, it is the activities performed that decide the care product. This does not mean that every activity influences the care product, in fact, laboratory work and medicines have no influence on the grouper result. DOTs are processed by the grouper after they have been closed. When a DOT registration is closed depends on the amount of time that has passed since the last activity. If more than the specified number of days has passed, the DOT is marked closed. Different types of activities have different durations after which the DOT is to be marked as closed. The grouper is maintained and operated by the independent DBC Maintenance authority. The rules that make up the grouper are the results of negotiations between the academic Dutch hospitals and insurance companies. The DOT system should lead to a better matching between actual provided care and associated care product. In turn, this leads to what the insurance companies (and therefore patients) pay. While a patient is still undergoing treatment, the DOT system poses two problems: -The hospital does not know how much they will receive for the care they have provided, as they don't know what care product will be associated with the open DOTs. 
-The hospital does not know when a DOT is likely to be closed, because a DOT closes only some time after the final treatment. If the patient turns out to require another treatment before the closing date of the DOT (based on the previous treatment), the closing date moves further into the future. Because the hospital does not know when a DOT closes, they also don't know when they can claim the cost of the associated care product. These problems are in essence process prediction problems. The first involves predicting the process steps, because these process steps dictate the care product. The second problem involves predicting the duration of a process. To see how well different approaches work for this practical case, we test predicting care product, product cost and care duration. For this purpose we use anonymized patient data from a Dutch hospital. The data we have available is based on heart and lung specialties. Dataset construction To produce data sets for our experiment, the activity and sub path tables where joined to get a new table with the sub path id, care product, activity code and registration date. A separate table was made for every diagnosis, as the diagnosis is not part of prediction, but is known at the start of a sub path. The produced tables where converted to a Sub path id -Care product -Activity 1 -. . . -Activity n format. The data format is explained in detail in section 5. Because activities are recorded every day and not at the moment they are performed, we do not know the sequence of activities within a day. This leads to sequence noise and duration noise in the data sets, as the inferred sequence is not necessarily the right one, as we approximate the sequencing by ordering activity codes alphabetically. The sequence noise, duration noise and the human noise increases the complexity of the task of finding the right set of future activities, and then determining its duration. Note that we did not artificially add noise: the noise is the result of the lack of explicit time-based sequence of activities in the original data, so we had to recover the exact sequence where it was implicit. The approaches we present in this paper for predicting "care products" will have to deal with this fact, as it is a consequence of the way data is stored at the hospital. Problem formalization We start with providing notation for and defining the most important concepts in our case study in order to be able to more precisely define the prediction tasks that we distinguish. Basic concepts An activity Act is defined as a label taken from the set of possible DBC codes. A care path P is a sequence of activities P = Act 1 , . . . , Act n . We focus our prediction only on subpaths which have a unique care product, so a care path should be interpreted as a subpath. We sometimes denote a care path with P to emphasize that it is closed, i.e., it belongs to a finished treatment or to a DOT that was closed by the grouper for some reason. The last activity in a closed care path is denoted with Act end ; the last activity of an 'incomplete' care path is often denoted with Act cur to emphasize its role as 'current' activity. P 1 ⊕ P 2 denotes the concatenation of P 1 and P 2 . d = d P denotes the duration of care path P in terms of time (measured in days in our case study). The grouper Grp is a function that determines the care product C = Grp( P ) for a given path P . The associated cost of a care product C is denoted by cost(C ). 
From a subset of our data, we construct a directed weighted process graph G = (N, E), where the nodes N represent activities and the edges E represent the possibility that one activity can follow another. The weight w(e) of an edge e is defined as the number of times its two nodes follow each other in that order. We added a node "Start" to each G to obtain a single starting point for all care paths. We furthermore added the care products as separate terminal nodes, such that for each P̄, Grp(P̄) = C = Act_end. We assume that the available underlying data is in the form of a set of complete care paths. We chose to work with separate sets of care paths, each belonging to one specialty and diagnosis code.

Prediction tasks
With the notation above, we can precisely define the prediction tasks we consider in this paper. Let P = Act_2, ..., Act_cur (we start with activity 2 because activity 1 is always "Start"), P′ = Act_{cur+1}, ..., Act_end, and P̄ = P ⊕ P′. The prediction tasks are:
1a) Given an incomplete care path P, predict the care product C directly (i.e., without considering or predicting P′). We view this prediction task as a classification task. We use a subset of the closed care paths belonging to one diagnosis, with their associated care products, as training data for supervised learning of the classifier.
1b) A predicted care product determines its cost cost(C).
2a) Given a process graph G constructed from a subset of the care paths belonging to one diagnosis, an activity Act_cur, and a care product C = Act_end, predict the path P′ in between. We view this prediction task as a process mining task. Note that we attempt our prediction given only the current activity Act_cur and not the path P leading to this activity. The latter is left to future research.
2b) A predicted path determines its duration, i.e., d_P′, from which a prediction can be derived for the full duration d_P̄ given G, Act_cur, and C.

In the end the financial department of the hospital is interested in an amount of money (i.e., cost(Grp(P̄))) and a moment in time (which can be derived from d_P̄). It may happen, however, that an entirely wrong care product is predicted that has a similar cost, or that an entirely wrong path P′ is predicted with a similar length. In those cases, the prediction of what we are ultimately interested in is close, but rather unjustifiably so. Therefore, we target not only the b-tasks, but also the a-tasks. Moreover, examining the results of predicting the rest of the path also provides more insight into the effects of sequence and human noise.

4 Literature Review

Classifier Algorithms
Prediction tasks 1a and 1b are about assigning a class (the care product) to a combination of independent variables (the activities). That is why we consider this a classification problem. The field of classifier algorithms divides into four categories: decision tree, clustering, Bayesian and neural network classifiers [START_REF] Han | Data mining: concepts and techniques[END_REF]. Besides these classifiers, there are classifier aggregation techniques that serve to augment the accuracy of individual classifiers.

Decision tree classifiers are based on Hunt's algorithm for growing a tree by selecting attributes from a training set. The attributes are converted into rules to split the data.
Although this can be an accurate technique, the decision tree family of classifiers is sensitive to overfitting or overtraining: training the classifier on a data set with noisy data will include bad (too training-set-specific) rules in the tree, reducing accuracy. Algorithms like C4.5 [START_REF] Quinlan | Improved use of continuous attributes[END_REF] use pruning to prevent overtraining, but the fact remains that decision trees are not effective on very noisy data.

Clustering algorithms for classification use distance (or equivalently: proximity) between instances to group them into clusters or classes. New instances are classified based on their proximity to existing clusters [START_REF] Alpaydın | Introduction to Machine Learning[END_REF]. Examples are nearest-neighbour classifiers and support vector machines.

Bayesian classifiers use Bayes' rule to select the class that has the highest probability given a set of attribute values. Bayes' rule assumes independence of the attributes in a data set; if this is not the case, classifier accuracy can suffer.

Neural networks are composed of multiple layers of elements that mimic biological neurons, called perceptrons. The perceptrons are trained to give a certain output signal based on some input signal, which is propagated to the next layer of the neural network. The network consists of one input layer, one output layer and multiple hidden layers in between. Neural networks are computationally intensive but accurate classifiers. However, neural networks are sensitive to overtraining [START_REF] Alpaydın | Introduction to Machine Learning[END_REF].

Besides techniques consisting of a single (trained) classifier instance, there are techniques that aggregate the results of multiple classifiers. Examples are bootstrap aggregating [START_REF] Breiman | Bagging predictors[END_REF], boosting [START_REF] Freund | Experiments with a new boosting algorithm[END_REF] and the Random Forests algorithm [START_REF] Breiman | Random forests[END_REF]. These techniques share the same paradigm: let every individual classifier vote on the class of the data instance, and select the class with the most votes. Breiman found that this principle is especially accurate when the underlying classifier has mixed performance on the dataset [START_REF] Breiman | Bagging predictors[END_REF], which occurs most often with the less complex algorithms. That is also the case for the Random Forest classifier, which is comprised of multiple randomly grown decision trees. All these aggregation techniques generally display better accuracy than the individual classifiers.

Curram et al. [START_REF] Curram | Neural networks, decision tree induction and discriminant analysis: an empirical comparison[END_REF] show that decision tree classifiers (such as CART and C4.5 [START_REF] Quinlan | Improved use of continuous attributes[END_REF]) are outperformed by neural networks. In turn, neural networks are less accurate than the boosting type of classifier aggregation, as shown by Alfaro et al. [START_REF] Alfaro | Bankruptcy forecasting: An empirical comparison of adaboost and neural networks[END_REF]. Furthermore, the Random Forest classifier [START_REF] Breiman | Random forests[END_REF] has been shown to have good accuracy compared to other classifiers [START_REF] Banfield | A comparison of decision tree ensemble creation techniques[END_REF] [START_REF] Gislason | Random forests for land cover classification[END_REF].
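As an indication of how ensemble classifiers of this kind can be applied to the care product classification task (task 1a), here is a hedged scikit-learn sketch. The per-position one-hot encoding of activity codes and the simple hold-out split are illustrative assumptions, not a description of the exact experimental pipeline used later in the paper.

from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_classifiers(X, y):
    """X: equal-length activity-code prefixes (lists of strings),
    y: the corresponding care products. Returns hold-out accuracies."""
    # Treat each prefix position as a categorical feature.
    enc = OneHotEncoder(handle_unknown="ignore")
    X_enc = enc.fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X_enc, y, test_size=0.3, random_state=0)

    results = {}
    for name, clf in [
        ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("adaboost", AdaBoostClassifier(random_state=0)),
    ]:
        clf.fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, clf.predict(X_te))
    return results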
Therefore, we use boosting (specifically, Freund and Schapire's AdaBoost algorithm [START_REF] Freund | Experiments with a new boosting algorithm[END_REF]) and Random Forests for care product classification. For comparison, we also consider simpler baseline predictors (see section 6).

Process prediction algorithms
To predict the care activities (the path through the process graph), we need a complete and accurate graph. A possible approach is to take the event log and create a graph that contains every edge in every trace of the log. Consider the set of traces in figure 1a, converted to the graph in figure 1b. Every combination of two subsequent activities is an edge in the graph, and every activity is a node. There are no duplicate edges or nodes, but the number of times two nodes occur subsequently is recorded as the weight of the edge. The algorithm is naive, as it also includes possibly noisy edges that occur only a few times in the data set. In figure 1b, the C-C edge could be noise: it occurs only once in the data. This makes the naive graph elicitation algorithm inaccurate.

To remedy this flaw inherent to the naive approach, the weights of the edges can be considered as the likelihood of an edge not being noise: the more often an edge occurs, the less likely it is to be noise. Not only individual edges may be noisy; larger sets of edges might also be noise, which would result in several nodes connected by low-weight edges. This points to a possible solution: use a path algorithm to determine the paths that have the highest weight, or to exclude the paths with the lowest weight. This type of algorithm is known as a shortest-path algorithm: find a path through the graph that has the lowest possible weight [START_REF] Baase | Computer algorithms: introduction to design and analysis[END_REF]. The most well-known shortest-path algorithm is Dijkstra's algorithm, which calculates the single shortest path through a series of nodes. This algorithm can be used to determine the distances from a start node to every other node, where a distance is the compound weight of the edges towards a node. A modification of Dijkstra's algorithm determines the path that has the highest weight for all its edges combined. To remove (infinite) loops, duplicate activities within a trace are relabelled so that they are unique. In figure 1b, this means that the repeated C is replaced by C′ and C″, and the C-C edge is replaced by C′-C″.

There are some problems when using Dijkstra's algorithm. First, the algorithm has to be run for every node in the graph to give the shortest or longest path from the start node to some other node. Second, Dijkstra's algorithm cannot give the shortest path between two specific nodes; it only gives the node that has the shortest/highest distance from "Start". A different shortest-path algorithm, known as Floyd-Warshall, does have this capability: it gives every shortest path between two nodes. This makes it possible to find the most likely or most unlikely path from one node to another.

Besides shortest-path approaches that determine path likelihood by taking the sum of edge weights, another approach is to take the product of the probabilities of the edges. The probability of an edge e from node A to B is determined by the number of times it is traversed divided by the total outgoing weight of A's edges. Given the edge probabilities, the most likely path (based on Bayes' rule, assuming independence of the next edge from previously traversed edges) is the path with the highest product of edge probabilities. This is similar to a Markov chain.
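The naive graph construction, the relabelling of repeated activities, and the "product of edge probabilities" idea described above can be illustrated with the following sketch. This is a simplified, illustrative implementation (a Floyd-Warshall-style dynamic program over edge probabilities), not the exact code used for the experiments.

from collections import Counter

def relabel_duplicates(trace):
    """Make repeated activities within a trace unique (e.g. C, C -> C, C'1)
    so that a single trace cannot introduce a loop into the graph."""
    seen, out = Counter(), []
    for act in trace:
        seen[act] += 1
        out.append(act if seen[act] == 1 else f"{act}'{seen[act] - 1}")
    return out

def build_graph(traces, care_products):
    """Naive weighted graph: every pair of consecutive activities is an edge,
    with 'Start' prepended and the care product appended as terminal node."""
    edges = Counter()
    for trace, product in zip(traces, care_products):
        nodes = ["Start"] + relabel_duplicates(trace) + [product]
        for a, b in zip(nodes, nodes[1:]):
            edges[(a, b)] += 1
    return edges  # {(from, to): weight}

def most_likely_paths(edges):
    """All-pairs most likely path: maximise the product of edge probabilities,
    where an edge's probability is its weight divided by the total outgoing
    weight of its source node (Floyd-Warshall-style update)."""
    nodes = {n for edge in edges for n in edge}
    out_weight = Counter()
    for (a, _), w in edges.items():
        out_weight[a] += w
    prob = {(a, b): w / out_weight[a] for (a, b), w in edges.items()}
    nxt = {(a, b): b for (a, b) in edges}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                via = prob.get((i, k), 0.0) * prob.get((k, j), 0.0)
                if via > prob.get((i, j), 0.0):
                    prob[(i, j)] = via
                    nxt[(i, j)] = nxt[(i, k)]
    return prob, nxt

def reconstruct(nxt, start, end):
    """Follow the next-hop table from start to end to obtain the predicted path."""
    if (start, end) not in nxt:
        return None
    path, node = [start], start
    while node != end:
        node = nxt[(node, end)]
        path.append(node)
    return path

# Example on two tiny traces in the style of figure 1a:
edges = build_graph([["A", "B", "C"], ["A", "C", "C"]], ["prod1", "prod1"])
prob, nxt = most_likely_paths(edges)
print(reconstruct(nxt, "A", "prod1"))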
Floyd-Warshall and Dijkstra's algorithm produce the path (set of edges) that has the lowest sum of edge weights. Since both produce the shortest path, both algorithms can be used, but we choose Floyd-Warshall, as it is easier to implement. The adaptation we made to Floyd-Warshall's algorithm does not take the sum of edge weights as the measure of length, but the product of edge weights. We then look for the path (set of edges) that has the maximum product of edge weights: the most likely path.

Finally, we have considered techniques from the process mining field, such as the alpha-algorithm proposed by Van der Aalst et al., the Little Thumb algorithm, InWoLve and the suite of algorithms provided by the ProM framework [START_REF] Van Der Aalst | Workflow mining: a survey of issues and approaches[END_REF][START_REF] Van Der Aalst | Workflow mining: Discovering process models from event logs[END_REF][START_REF] Van Der Aalst | Finding structure in unstructured processes: The case for process mining[END_REF][START_REF] Van Der Aalst | Business process mining: An industrial application[END_REF][START_REF] Van Der Aalst | Process mining: a two-step approach to balance between underfitting and overfitting[END_REF][START_REF] Van Der Aalst | Process mining[END_REF][START_REF] Mans | Application of process mining in healthcare-a case study in a dutch hospital[END_REF][START_REF] Gunther | Using process mining to learn from process changes in evolutionary systems[END_REF][START_REF] Weijters | Rediscovering workflow models from eventbased data using little thumb[END_REF][START_REF] Herbst | Workflow mining with InWoLve[END_REF][START_REF] Van Dongen | The prom framework: A new era in process mining tool support[END_REF]. We have found these algorithms to be unsuitable for our goals, as they do not deal with cycles in the data [START_REF] Kiepuszewski | Fundamentals of control flow in workflows[END_REF]. Since cycles are a central aspect of the noise in the available data, we do not consider the process mining family of techniques/algorithms useful for our purposes.

Noise in Process Mining
Datta (1998) [START_REF] Datta | Automating the discovery of as-is business process models: Probabilistic and algorithmic approaches[END_REF] defines noise in a business process as an unrelated activity that is not part of the business process; an example of such an activity would be a phone call or a lunch break [START_REF] Datta | Automating the discovery of as-is business process models: Probabilistic and algorithmic approaches[END_REF]. Datta suggests three techniques for the removal of such noise: a stochastic strategy and two algorithmic strategies based on a finite state machine [START_REF] Datta | Automating the discovery of as-is business process models: Probabilistic and algorithmic approaches[END_REF]. Weijters and van der Aalst [START_REF] Weijters | Rediscovering workflow models from eventbased data using little thumb[END_REF] regard noise as (i) a missing activity and/or (ii) randomly swapped activities in the workflow log.
To extract the workflow process from the log, Weijters and van der Aalst [START_REF] Weijters | Rediscovering workflow models from eventbased data using little thumb[END_REF] construct a dependency-frequency (D/F) table (which holds the frequency and order of task co-occurrences or dependencies), from which they construct a D/F graph (the graph of the task dependencies) and finally a workflow net (or WF net, which represents the D/F graph along with the splits and joins) using heuristics [START_REF] Weijters | Rediscovering workflow models from eventbased data using little thumb[END_REF]. Both Agrawal et al. [START_REF] Agrawal | Mining process models from workflow logs[END_REF] and Hwang and Yang [START_REF] Hwang | On the discovery of process models from their instances[END_REF] define noise as instances in which executed activities were not collocated, timestamps of the activities were mistakenly recorded, or exceptions deviate from the normal processing order [START_REF] Hwang | On the discovery of process models from their instances[END_REF]. They furthermore use similar probabilistic reasoning to estimate how much error their process discovery algorithm would give due to the noise [START_REF] Hwang | On the discovery of process models from their instances[END_REF].

As we have explained earlier, the noise that we describe in this paper is of a somewhat different kind than the noise described in the process mining literature. Hence, instead of cleaning the noise, we take a different approach and show what results one can expect when analysing very noisy data with different data mining techniques.

5 Design of Case Analysis
Table 4 shows the main attributes of the data in the three data sets we use. Each set consists of rows of activities, each represented by a code. Each specific activity, such as "perform surgery X", has its own unique code in the DOT method. Only when creating a graph are these activities relabelled to remove loops, so table 4 does not include relabelled activities. An idea of the extent of noise in the data is given by the average number of activities per day and the average path length in table 4: large parts of the sequence of activities are uncertain and may well be wrong, although we have no way of knowing.

Hence, in order to reduce the sequence noise, a subset of the data was used for the classification tasks. To create the subset, we removed the activity codes for lab work and medications from the raw data, because these are known to have no effect on the grouper result of a sub path. The characteristics of the data set for the classification task are shown in table 5. The effect of removing these activities is best shown by the shorter average path length and the lower average number of activities per day.

We created three data sets from the hospital's cardiology and lung care department data. Each set contained the data from patients with a specific diagnosis. The three diagnoses were pericarditis (an inflammation of the heart), represented by code 320.701; angina pectoris (chest pain), diagnosis code 320.202; and malignant lung cancer, diagnosis code 302.66. We chose these data sets as they are clearly quite different, not only in size, but also in the amount of sequence noise, path length and duration. This makes the sets well suited to test the prediction approaches presented in this paper on a broad sample of actual process data. Also, only data from the heart and lung specialties was available to us for this research.
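The characteristics reported in tables 4 and 5 can be computed directly from the per-diagnosis tables. The sketch below assumes each row has the format [sub_path_id, care_product, activity_1, ..., activity_n], which is the representation produced in the dataset construction; the set of excluded codes is supplied by the user.

def characteristics(rows):
    """Summary statistics of the kind reported in tables 4 and 5.
    rows: [sub_path_id, care_product, activity_1, ..., activity_n] per sub path."""
    paths = [row[2:] for row in rows]
    return {
        "number of sub paths": len(rows),
        "unique activity codes": len({act for path in paths for act in path}),
        "unique care products": len({row[1] for row in rows}),
        "largest path length": max(len(path) for path in paths),
        "average path length": sum(len(path) for path in paths) / len(paths),
    }

def classification_subset(rows, excluded_codes):
    """Drop lab-work and medication activity codes, which have no influence on
    the grouper result; this is how the table 5 data set is derived from table 4."""
    return [row[:2] + [act for act in row[2:] if act not in excluded_codes]
            for row in rows]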
Table 3 shows the columns that make up each dataset, as well as some example content. The number of columns varies between 286 + 2 for dataset 302.66 and 703 + 2 for dataset 320.701.

Table 1: The performed activity table
-Activity id: the unique id of the performed activity.
-Care path id: the unique identifier of the care path containing the sub path the activity is part of.
-Sub path id: the identifier of the sub path the activity is part of.
-Care activity code: a (non-unique) code describing the activity.
-Date: the date on which the activity was registered.
-Performing specialty: the specialty performing the activity.
-Amount: the quantity of the activity, for example used for medication.

Using these data sets we performed the prediction tasks defined in section 3.2. The setup per task is as follows.

Task 1a) Predict the care product C from P. We wanted to know the prediction accuracy for different lengths of the incomplete care path P. So, for path lengths 1 ... 30, we took a sample of the closed care paths of at least that length for training; the remainder was the test set. We took the first n activities per path P in the training and test sets. The training set of paths of length n, in combination with the care product C, was then used to train a classifier. The test set of paths was used to assess the performance of the classifier.

Task 1b) Predict the care product cost cost(Grp(P̄)). This task builds on task 1a: the same procedure was used to determine a care product C, but now we looked up its cost cost(C). The predicted cost for a partial path P was then compared to the cost of the actual care product Grp(P̄). We measured the difference between the total predicted and total actual cost, E_total, as well as the average absolute error per path, E_average:

E_total = |Σ cost(C) - Σ cost(Grp(P̄))| / Σ cost(Grp(P̄))
E_average = Σ |cost(C) - cost(Grp(P̄))| / N

where the sums run over the paths in the test set and N is the number of test paths.

Task 2a) Predict P′. Here, we determined the precision and recall of predicting the path P′ between Act_cur and Act_end. We created the process graph G from a training sample of the complete paths P̄; the remainder of the data set was used for testing. Next, for each complete path in the test set and for each activity Act_n with n ∈ {1 ... 15} taken as the current activity, we predicted P′. We compared the predicted P′ to the actual remaining sub path (from Act_n up to Act_end in P̄) and determined the precision and recall of the prediction.

In order to make sense of the results of both algorithms, we introduce two performance metrics: precision and recall. These metrics are based on the number of true positives, false positives and false negatives. True positives are the activities that occur in both the predicted P′ and the actual P′. False positives are the activities that are predicted but do not occur in the actual path. False negatives are the activities that are not in the predicted P′ but are in the actual P′. Given these three numbers, precision and recall are calculated as:

Precision = tp / (tp + fp)
Recall = tp / (tp + fn)

Task 2b) Predict d_P̄. Given the predicted path P′, we created a predicted complete path P̄ = P ⊕ P′. We looked up the duration for each edge e in this path, where the duration of an edge is the average time difference between its two activities over all occurrences of e in the training set. These durations were combined into a duration prediction d_P̄, which was compared to the actual time difference between Act_1 and Act_end of P̄.

Results

Task 1a: Predicting Grp(C)
Table 6 shows the results for the care product prediction task. As expected from the literature review, we found the Random Forest algorithm to be the best performing classifier in terms of accuracy.
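To put the accuracies reported below in perspective, the comparison uses two naive baselines (uniform random guessing and always predicting the most occurring care product). As a hedged illustration, these baselines can be expressed as, for example:

from collections import Counter

def random_guess_accuracy(care_products):
    """Expected accuracy of guessing uniformly among the observed care products."""
    return 1.0 / len(set(care_products))

def majority_baseline_accuracy(train_products, test_products):
    """Accuracy of always predicting the most occurring care product."""
    majority, _ = Counter(train_products).most_common(1)[0]
    return sum(product == majority for product in test_products) / len(test_products)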
This table shows the accuracy of the care product prediction for path lengths 1 to 30. Table 7 shows the Random Forest results when taking only paths that have at least length 10. Figures 3 and 4 show the accompanying Random Forest accuracy plots for these two tables. The plot in figure 2 shows the number of paths of length n for each of the data sets, which also explains the cut-off point of 10 activities: the number of complete paths P̄ with more than 10 activities is low.

To put the Random Forest accuracy in perspective, we compare the results to a more naive approach: taking a random guess of the care product. This would result in a chance of predicting the care product of at most 16 percent, given that the data set with the fewest care products has six products. The Random Forest classifier outperforms this method. The comparison with a somewhat less naive approach, taking the most occurring care product, is shown in figures 3 and 4; the dashed lines in both plots represent the expected accuracy when always selecting the most occurring care product. This shows that, for two of the three datasets, the Random Forest classifier is always better than this baseline.

The accuracy of the classifier does not improve with a larger path length |P|. This is the result of noise: as the number of activities in the sequence increases, the likelihood of that sequence containing noise also increases, and this noise has a negative effect on the classifier's performance.

Task 1b: Predicting cost(Grp(C))
The goal of predicting a care product C is predicting its cost, so that the hospital knows what it will receive for the provided care. Table 8 shows the results of predicting cost(C) in terms of the total error fraction E_total. Figure 5 shows the average absolute error E_average in predicting the cost. These errors cancel each other out in some cases, which explains the lower best-performance results shown in table 8 compared to figure 5. The test again shows that Random Forests provide decent predictions of the care product: even though the classifier averages around 40 percent correctly predicted care products, the 60 percent it predicts wrongly apparently has a similar cost.

Path search algorithms are the tools we have used to predict the path P′. The algorithms we use are variations on the Floyd-Warshall shortest-path algorithm, as discussed above. We have experimented with two algorithms that both take an activity and a care product and attempt to find the activities in between. The first variation, called Longest Path, returns the P′ for which the sum of the edge weights is maximized; here, the edge weight is the number of times that the start and end node of that edge occurred consecutively. The second variation, called Most Likely Path, returns the P′ for which the product of the edge weights is maximized; in this scenario, the weight of an edge represents the likelihood of that edge being traversed.

Figures 6, 7 and 8 show the precision and recall results for the three data sets. In all these figures, the lines marked "o" are the precision results and the lines marked "x" are the recall results. Darker lines are the Most Likely Path results, lighter lines are the Longest Path results. The plots show that the precision of the Longest Path algorithm is almost always lower than that of Most Likely Path, but on the other hand its recall is higher. This means that Longest Path predicts more of the actual activities, but also predicts activities that do not actually occur.
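The precision and recall computation just discussed, and the duration prediction d_P̄ used in the results that follow, translate directly into code. The sketch below treats paths as lists of activity codes and edge durations as averages (in days) over the training set; the (activity_code, datetime.date) pair representation is an illustrative assumption.

def precision_recall(predicted, actual):
    """Precision and recall based on the true/false positives and false
    negatives between the predicted and the actual remaining activities."""
    pred, act = set(predicted), set(actual)
    tp, fp, fn = len(pred & act), len(pred - act), len(act - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_edge_durations(training_paths):
    """training_paths: per closed sub path a list of (activity_code, datetime.date)
    pairs. Returns the average number of days between consecutive activities per edge."""
    totals, counts = {}, {}
    for path in training_paths:
        for (a, da), (b, db) in zip(path, path[1:]):
            totals[(a, b)] = totals.get((a, b), 0) + (db - da).days
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return {edge: totals[edge] / counts[edge] for edge in totals}

def predict_duration(full_path, edge_durations):
    """Duration prediction for a (partly predicted) complete path: the sum of
    the average edge durations; edges unseen in training contribute 0 days."""
    return sum(edge_durations.get((a, b), 0.0)
               for a, b in zip(full_path, full_path[1:]))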
Tables 9a and 9b show the effect of the path predictions on the duration prediction for Most Likely Path and Longest Path, respectively. Both algorithms predict durations that are lower than the actual ones. Again, Longest Path shows the effect of having greater recall: the predicted path lengths are longer and closer to the actual lengths. The average duration prediction error in table 9 seems reasonable at up to around two weeks. This changes when the predicted durations themselves are taken into account: they are close to zero for Most Likely Path. This problem is of course related to the lack of recall of MLP. Longest Path performs a bit better, but still gives short predicted durations. If the predicted path is much shorter than the actual path, the predicted duration will also be much lower: the duration is calculated as the sum of the average edge durations in the predicted path.

Discussion
We can analyse and discuss the results obtained above in the light of the three different kinds of noise in the data: sequence noise, duration noise and human noise. Let us consider the results for each of the goals of this paper.

Beginning with (1a), predicting Grp(C). The accuracy of the prediction does not go up when we consider sequences containing a greater number of activities (figure 3). The reason for this is that we assumed a sequence for the activities conducted every day: sequence noise. Hence, when we have a greater number of activities on a certain day, we would expect larger sequence noise, and as a result the accuracy of predicting the care product decreases. For this measurement we used a subset of the actual dataset, the main characteristics of which can be seen in table 5. Comparing the average number of activities in table 5 with those in table 4, we can clearly see that there has been a large decrease in sequence noise, especially for data sets 320.202 and 320.701. Hence, we can say that for this task the noise in the data sets is nearly of the same magnitude. This is the case because the duration noise and the human noise (which do not influence the sequence noise) do not affect the classification task. So it comes as no surprise that the classification results (taking the mean Random Forest values) shown in table 6 are nearly the same for the three sets of data. In addition, using the best-performing Random Forest classifier helps to absorb some of the error in the data. Furthermore, if we consider only sub paths whose path lengths are at least 10, we see from figure 4 that the accuracy of predicting the care product has the same trend for all three data sets. However, we do notice from figure 3 (and even from figure 4) that the accuracy for data set 302.66 drops after about 15 activities and gets even worse than randomly selecting the care product. One could attribute this to a certain amount of human noise present in data set 302.66, which could be greater than the human noise present in the other two data sets.

For task (1b), predicting cost(Grp(C)), we again see the Random Forest algorithm performing best. Though the mean values of E_total are similar for 320.202 and 320.701, the value is higher for data set 302.66. This may be partly explained by the greater amount of human noise present in data set 302.66, which also explains the trend in figure 5, where, similarly to figure 3, the average absolute error of 302.66 increases compared to the other two data sets.

Task (2a), predicting P′.
For tasks (2a) and (2b) we need to keep in mind that we use the actual raw data set (table 4) and not a subset (table 5) as in the case of tasks (1a) and (1b), the reason being that the laboratory tests do affect the results of tasks (2a) and (2b). From figures 6, 7 and 8 we see that recall is always lower than precision. The reason is that the methods we used to predict the path returned paths that were shorter than the actual paths in the data, and hence the recall is low. The precision, on the other hand, is higher, as the predicted activities mostly do occur in the actual path. We also see a consistently high trend in precision for the Most Likely Path algorithm for all three data sets which, as expected, decreases for 302.66. This could be due to the higher presence of human noise (as we noted earlier). However, when we observe the trend of the recall values (in figures 6, 7 and 8) we see that the performance is in the order Recall(302.66) > Recall(320.202) > Recall(320.701). We think this is an indicator of the sequence noise present in the data sets, which, going by the average number of activities per day in table 4, increases in the reverse order. We think that sequence noise affects the recall more than the precision: the greater the sequence noise, the greater the number of actual activities that are not in the predicted path, i.e. the higher the number of false negatives.

Task (2b), predicting d_P̄. From tables 9a and 9b, we see that Longest Path gives a smaller error than the Most Likely Path algorithm. This could be, as we explained earlier, a consequence of the Longest Path algorithm having a higher recall (figures 6, 7 and 8) and hence having fewer false negatives and being closer to the original path. We also notice that the prediction error is higher for 320.202 than for 302.66 and 320.701. This result seemingly contradicts the trend of sequence noise between the data sets 320.202 and 302.66, as discussed in the earlier paragraph. We can however explain this result by arguing that sequence noise does not affect the duration prediction as much as duration noise does. For example, in our hospital data two identical sub paths (with the same activities in the same order) can have entirely different durations, because of the different timestamps and the different intervals between activities (which can vary from a couple of minutes to weeks). A practical example of such a case is when a young and a relatively old patient are admitted to a hospital for a similar fracture or injury. Both individuals would get a similar treatment (care product) and would have the same sub paths; however, the older patient could take a much longer time to recover. Hence, we can explain the data in tables 9a and 9b by concluding that data set 320.202 has greater duration noise than data sets 302.66 and 320.701.

An issue prevalent in the data sets, and one which could be a major cause of the duration noise and human noise, is the fact that all patients, irrespective of their particular demographic background, are subjected to the same treatment procedures. This causes a lot of human noise, in addition to the duration noise explained in the earlier paragraph, as we cannot expect all patients to respond to the same treatments in an identical manner. Hence, the likelihood of faulty treatment procedures (human noise) increases.
Conclusions
In our case study we have shown how data and process mining techniques can be combined to forecast cash flow in Dutch hospitals. We think the techniques and processes we have demonstrated can, with a few modifications, be used to analyse the cash flow of other hospitals. For patients still undergoing treatment, we managed, given a diagnosis and past activities, to predict the care product with an accuracy of a little less than 50% (for all three sets of data), which was still, on average, better than randomly predicting the care product. For predicting the associated cost of the care, we obtained an error of less than 10% for two data sets and 17% for one. These results were obtained with a Random Forest classifier, which was shown to perform best. This shows that our prediction method is a viable solution to the cash flow problem we raised in the introduction.

We furthermore managed, given a diagnosis, the most recent activity, and the (predicted) care product, to predict the remaining activities using two algorithms (Longest Path and Most Likely Path). We achieved a precision of about 80% for Most Likely Path and about 60-70% for the Longest Path algorithm, and a recall of 35-40% (for the three data sets). The algorithms predicted the associated duration with an error of about 30% for data set 320.202 and around 11% for data sets 320.701 and 302.66.

The case study is of particular interest because of the causes of the modest to weak prediction results for the process mining tasks. In our opinion, three process-mining-specific types of noise significantly affected our results: sequence noise, human noise, and duration noise. In this paper we thoroughly discuss the causes of these types of noise and their effects.

From the predicted cost and duration, it is possible to derive a prediction (albeit with some error) of the future cash flow for all patients currently undergoing treatment. We think this is a major contribution to research and practice: given the noise in the data, we were still able to make reasonable predictions, which could be cheaper and more practical than asking a few expert practitioners. Future work can include attempts at removing or decreasing the different kinds of noise in the data and then carrying out the prediction analysis again to compare the results.

Fig. 1: Converting traces to a graph
Fig. 2: Number of sub paths of length |P| ≥ 1 . . . n
Fig. 3: Random Forest accuracy for path length |P| = 1 . . . n. Dashed lines represent the expected accuracy when randomly selecting care product
Fig. 5: Random Forest cost error for path length |P| = 1 . . . n
Fig. 6: Precision and recall for dataset 320.202
Fig. 7: Precision and recall for dataset 320.701

Table 2: The sub path table
-Sub path id: the unique identifier of the sub path.
-Care path id: the care path the sub path is part of.
-Start date: the date on which the sub path was opened.
-End date: the date on which the sub path was closed.
-Medical specialty: the specialty responsible for this sub path.
-Care type: the care type of the sub path.
-Diagnosis code: a code for the diagnosis that led to this sub path.
-Care product: the care product for the closed sub path.
-Closure reason code: the reason the care path was closed, for example, enough time had expired.
Table 3: The columns in each dataset and example dataset contents
Columns:          Sub path   Care product   Activity 0   Activity 1   Activity 2   . . .    Activity n
Example contents: 1700750    979001104      33229        299999       33231        190205   33229
                  40176052   99499015       33285        10320

Table 4: Main characteristics of the data sets
                                        320.202   320.701   302.66
Number of sub paths                     9950      274       1985
Number of unique activity codes         315       190       239
Number of unique care products          22        6         11
Largest path length |P|                 559       703       286
Average path length |P|                 15.99     41.36     15.07
Average number of activities per day    6.79      10.13     2.49
Average duration (in days)              35.81     31.23     126.22

Table 5: Main characteristics of the classification data set. Lab-activities are excluded from this set.
                                        320.202   320.701   302.66
Number of sub paths                     9950      274       1985
Number of unique activity codes         160       70        149
Number of unique care products          22        6         11
Largest path length |P|                 152       174       104
Average path length |P|                 6.91      11.35     11.39
Average number of activities per day    3.18      3.03      2.15

Table 6: Accuracy (fraction correct of total) of prediction of care product for path lengths |P| = 1 . . . n
Table 7: Prediction of care product from path length |P| = 1 . . . 10 (Min / Max / Mean / Median)