Columns: input (string, length 29 to 3.27k), created_at (string, length 29), __index_level_0__ (int64, 0 to 16k). Each record below lists its input text, followed by its created_at timestamp and its __index_level_0__ value.
The software ecosystem is a trust-rich part of the world. Collectively, software engineers trust major hubs in the ecosystem, such as package managers, repository services, and programming language ecosystems. This trust, however, is often broken by vulnerabilities, ransomware, and abuse from malicious actors. But what is trust? In this paper we explore, through twelve in-depth interviews with software engineers, how they perceive trust in their daily work. From the interviews we draw three conclusions. First, software engineers make a distinction between an adoption factor and a trust factor when selecting a package. Second, while the literature mostly considers technical factors to be the main trust factors, the software engineers in this study conclude that organizational factors are more important. Finally, we find that different kinds of software engineers require different views on trust, and that it is impossible to create one unified perception of trust. Keywords: software ecosystem trust, empirical software engineering, TrustSECO, external software adoption, cross-sectional exploratory interview analysis, trust perception.
2021-01-14 17:31:04.000000000
1,379
In this work, we propose a novel perspective on the problem of patch correctness assessment: a correct patch implements changes that "answer" the problem posed by buggy behaviour. Concretely, we turn patch correctness assessment into a question answering problem. To tackle this problem, our intuition is that natural language processing can provide the necessary representations and models for assessing the semantic correlation between a bug (question) and a patch (answer). Specifically, we consider as inputs the bug reports as well as the natural language descriptions of the generated patches. Our approach, Quatrain, first uses state-of-the-art commit message generation models to produce the relevant inputs associated with each generated patch. We then leverage a neural network architecture to learn the semantic correlation between bug reports and commit messages. Experiments on a large dataset of 9,135 patches generated for three bug datasets (Defects4J, Bugs.jar and Bears) show that Quatrain achieves an AUC of 0.886 on predicting patch correctness, recalling 93% of correct patches while filtering out 62% of incorrect patches. Our experimental results further demonstrate the influence of input quality on prediction performance. We also perform experiments to show that the model indeed learns the relationship between bug reports and code change descriptions for the prediction. Finally, we compare against prior work and discuss the benefits of our approach.
2022-08-05 23:37:04.000000000
6,436
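As a rough illustration of the question-answering framing described in the Quatrain abstract above, the sketch below scores the semantic correlation between a bug report and a patch description with a shared text representation and a binary classifier. This is not Quatrain's actual neural architecture or data; the toy pairs and the TF-IDF plus logistic regression setup are assumptions made purely for illustration.

# Minimal sketch (not Quatrain's architecture): score the correlation between a
# bug report (question) and a patch/commit description (answer).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

pairs = [
    ("NullPointerException when list is empty", "add null check before iterating list", 1),
    ("NullPointerException when list is empty", "rename internal helper method", 0),
    ("wrong rounding in price total", "use BigDecimal with HALF_UP rounding", 1),
    ("wrong rounding in price total", "update copyright header", 0),
]
bugs, descs, labels = zip(*pairs)

# One shared vocabulary so both texts live in the same feature space.
vec = TfidfVectorizer().fit(bugs + descs)
X = hstack([vec.transform(bugs), vec.transform(descs)])

clf = LogisticRegression().fit(X, labels)

query = hstack([vec.transform(["NullPointerException when list is empty"]),
                vec.transform(["add null check before iterating list"])])
print(clf.predict_proba(query)[0, 1])  # probability the patch "answers" the bug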
This paper discusses a pilot study and analysis of current development and measurement practices in Jordanian small software firms. It was conducted because most developers build web applications without using any specific development method and do not know how to integrate suitable measurements into the process to reduce defects, time, and rework across the development life cycle. The objectives of this pilot study are, first, to determine the real characteristics of small software firms in Jordan; second, to investigate the current development and measurement practices; and third, to examine the need for a new development methodology for building web applications in small software firms. Consequently, a pilot survey was conducted in Jordanian small software firms. Descriptive statistics were used to rank the development and measurement methods according to their importance. This paper presents the data, analysis, and findings based on the pilot survey. The findings of this survey will contribute to building a new methodology for developing web applications in small software firms, taking into account how to integrate a suitable measurement program into the whole development process, and will also provide useful information to those doing research in the same area.
2012-01-08 06:45:05.000000000
14,054
While the concept of a digital twin to support maritime operations is gaining attention for predictive maintenance, real-time monitoring, control, and overall process optimization, clarity on its implementation is missing in the literature. Therefore, in this review we show how different authors implemented their digital twins, discuss our findings, and finally give insights on future research directions.
2023-01-21 01:58:34.000000000
11,675
The development and analysis of mobile applications in terms of security has been an active research area for many years, as many apps are vulnerable to different attacks. In particular, the concept of hybrid applications has emerged in the last three years, where applications are developed in both native and web languages; the use of web languages raises certain security risks in hybrid mobile applications, as it creates possible channels through which malicious code can be injected into the application. WebView is an important component in hybrid mobile applications that implements a sandbox mechanism to protect the local resources of smartphone devices from unauthorized JavaScript access. However, the WebView application programming interfaces (APIs) also have security issues. For example, an attacker can attack a hybrid application via JavaScript code by bypassing the sandbox security and accessing the public methods of the application. Cross-site scripting (XSS) is one of the most popular malicious code injection techniques for accessing the public methods of an application through JavaScript. This research proposes a framework for the detection and prevention of XSS attacks in hybrid applications using state-of-the-art machine learning (ML) algorithms. The attacks are detected by exploiting the registered Java object features. The dataset and the sample hybrid applications were developed using Android Studio. The widely used toolkit RapidMiner was then used for empirical analysis. The results reveal that the ensemble-based Random Forest algorithm outperforms the other algorithms, achieving both accuracy and F-measure as high as 99%.
2020-06-04 21:12:06.000000000
3,204
Bug reports are a popular target for natural language processing (NLP). However, bug reports often contain artifacts such as code snippets, log outputs and stack traces. These artifacts not only inflate the bug reports with noise, but often constitute a real problem for the NLP approach at hand and have to be removed. In this paper, we present a machine learning-based approach, implemented in Python, to classify content into natural language and artifacts at line level. We show how data from GitHub issue trackers can be used for automated training set generation, and present a custom preprocessing approach for bug reports. Our model scores 0.95 ROC-AUC and 0.93 F1 against our manually annotated validation set, and classifies 10k lines in 0.72 seconds. We cross-evaluated our model against a foreign dataset and a foreign R model for the same task. The Python implementation of our model and our datasets are made publicly available under an open source license.
2021-10-02 14:31:12.000000000
6,610
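To make the line-level classification task from the abstract above concrete, here is a hedged sketch that labels bug-report lines as natural language or artifact from a few simple lexical features. The features, thresholds, and training lines are illustrative assumptions, not the paper's actual model or preprocessing.

# Hedged sketch (not the paper's implementation): natural language vs. artifact
# classification of bug-report lines using simple lexical features.
import re
from sklearn.ensemble import RandomForestClassifier

def line_features(line: str):
    stripped = line.strip()
    non_alpha = sum(not (c.isalpha() or c.isspace()) for c in stripped)
    return [
        len(stripped),
        non_alpha / max(len(stripped), 1),                     # symbol density
        int(bool(re.search(r"[{}();=<>]", stripped))),         # code-ish punctuation
        int(stripped.startswith("at ")),                       # Java stack-trace frame
        int(bool(re.match(r"\d{4}-\d{2}-\d{2}", stripped))),   # log timestamp
    ]

train_lines = [
    ("The app crashes when I rotate the screen.", 0),
    ("Steps to reproduce: open settings and tap twice.", 0),
    ("at com.example.MainActivity.onCreate(MainActivity.java:42)", 1),
    ("int result = parser.parse(input);", 1),
    ("2021-10-02 14:31:12 ERROR Failed to bind service", 1),
    ("Thanks for looking into this!", 0),
]
X = [line_features(t) for t, _ in train_lines]
y = [label for _, label in train_lines]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([line_features("java.lang.NullPointerException: null")]))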
We describe a new SMT bit-blasting API for floating-points and evaluate it using different off-the-shelf SMT solvers during the verification of several C programs. The new floating-point API is part of the SMT backend in ESBMC, a state-of-the-art bounded model checker for C and C++. For the evaluation, we compared our floating-point API against the native floating-point APIs in Z3 and MathSAT. We show that Boolector, when using our floating-point API, outperforms the solvers with native support for floating-points, correctly verifying more programs in less time. Experimental results also show that our floating-point API implemented in ESBMC is on par with other state-of-the-art software verifiers. Furthermore, when verifying programs with floating-point arithmetic, our new floating-point API produced no wrong answers.
2020-04-24 15:19:28.000000000
9,346
Background: OSS projects face various challenges. One major challenge is to onboard and integrate newcomers to the project. Aim: We aim to understand and discuss the challenges newcomers face when joining an OSS project and present evidence on how hackathons can mitigate those challenges. Method: We conducted two searches on digital libraries to (1) explore challenges faced by newcomers joining OSS projects, and (2) collect evidence on how hackathons were used to address them. We defined evidence categories (positive, inconclusive, and no evidence) to classify how hackathons address these challenges. In addition, we investigated whether a hackathon event was related to an OSS project or not. Results: We identified a range of newcomer challenges that were successfully addressed using hackathons. However, not all of the solutions we identified were applied in the context of OSS. Conclusion: There seems to be potential in using hackathons to overcome newcomers' challenges in OSS projects and allow them to integrate faster into the project.
2023-05-15 16:42:51.000000000
7,804
As innovation in deep learning continues, many engineers seek to adopt Pre-Trained Models (PTMs) as components in computer systems. Researchers publish PTMs, which engineers adapt for quality or performance prior to deployment. PTM authors should choose appropriate names for their PTMs, which would facilitate model discovery and reuse. However, prior research has reported that model names are not always well chosen - and are sometimes erroneous. The naming of PTM packages has not been systematically studied. In this paper, we frame and conduct the first empirical investigation of PTM naming practices in the Hugging Face PTM registry. We initiated our study with a survey of 108 Hugging Face users to understand the practices in PTM naming. From our survey analysis, we highlight discrepancies from traditional software package naming, and present findings on naming practices. Our findings indicate a significant mismatch between engineers' preferences and the actual practices of PTM naming. We also present practices for detecting naming anomalies and introduce a novel automated DNN ARchitecture Assessment technique (DARA), capable of detecting PTM naming anomalies. We envision future work on leveraging meta-features of PTMs to improve model reuse and trustworthiness.
2023-10-02 07:45:56.000000000
1,650
Due to the risks associated with vulnerabilities in smart contracts, their security has gained significant attention in recent years. However, there is a lack of open datasets on smart contract vulnerabilities and their fixes that allows for data-driven research. Towards this end, we propose an automated method for mining and classifying Ethereum's smart contract vulnerabilities and their corresponding fixes from GitHub and from the Common Vulnerabilities and Exposures (CVE) records in the National Vulnerability Database. We implemented the proposed method in a fully automated framework, which we call AutoMESC. AutoMESC uses seven of the most well-known smart contract security tools to classify and label the collected vulnerabilities based on vulnerability types. Furthermore, it collects metadata that can be used in data-intensive smart contract security research (e.g., vulnerability detection, vulnerability classification, severity prediction, and automated repair). We used AutoMESC to construct a sample dataset and made it publicly available. Currently, the dataset contains 6.7K smart contracts' vulnerability-fix pairs written in Solidity. We assess the quality of the constructed dataset in terms of accuracy, provenance, and relevance, and compare it with existing datasets. AutoMESC is designed to collect data continuously and keep the corresponding dataset up-to-date with newly discovered smart contract vulnerabilities and their fixes from GitHub and CVE records.
2022-12-19 17:16:30.000000000
15,537
Maintaining legacy enterprise information systems is a known necessity in companies. To date, it remains an expensive and time-consuming process, requiring high effort and cost to get small changes implemented. MITRAS seeks to reduce the maintenance cost by providing an automatic maintenance system model based on graph transformations. This article presents Parthenos, an approach that differs from MITRAS in that its goal is to guarantee the correctness of introduced modifications at the syntax and type-semantics level of the source code. In addition, it proposes an extensible architecture, which allows the most varied types of systems to undergo software maintenance. Parthenos was evaluated through functional tests to assess its effectiveness, using measures of precision, recall, and F-measure.
2021-04-24 11:38:05.000000000
1,771
This is a short tutorial of the Case Management Model and Notation (CMMN) version 1.0. It is targeted to readers with knowledge of basic process or workflow modeling, and it covers the complete CMMN notation. A simple complaints process is used to demonstrate the notation. At the end of the tutorial the reader will be able to understand and create CMMN models. An appendix summarizing the notation is included for reference purposes.
2016-08-12 15:33:54.000000000
5,194
In a recent study, Reinforcement Learning (RL), used in combination with many-objective search, has been shown to outperform alternative techniques (random search and many-objective search) for online testing of Deep Neural Network-enabled systems. The empirical evaluation of these techniques was conducted on a state-of-the-art Autonomous Driving System (ADS). This work is a replication and extension of that empirical study. Our replication shows that RL does not outperform pure random test generation in a comparison conducted under the same settings as the original study, but with no confounding factor coming from the way collisions are measured. Our extension aims at eliminating some of the possible reasons for the poor performance of RL observed in our replication: (1) the presence of reward components providing contrasting or useless feedback to the RL agent; (2) the usage of an RL algorithm (Q-learning) which requires discretization of an intrinsically continuous state space. Results show that our new RL agent is able to converge to an effective policy that outperforms random testing. Results also highlight other possible improvements, which open up further investigations on how to best leverage RL for online ADS testing.
2024-03-19 13:54:14.000000000
2,046
In this paper, we explore the feasibility of finding algorithm implementations from code. Successfully matching code and algorithms can help understand unknown code, provide reference implementations, and automatically collect data for learning-based program synthesis. To achieve this goal, we designed a new language named p-language to specify the algorithms and a static analyzer for the p-language to automatically extract control flow, math, and natural language information from the algorithm descriptions. We embedded the output of p-language (p-code) and source code in a common vector space using self-supervised machine learning methods to match algorithms with code without any manual annotation. We developed a tool named Beryllium. It takes pseudo code as a query and returns a list of ranked code snippets that likely match the algorithm query. Our evaluation on the Stony Brook Algorithm Repository and popular GitHub projects shows that Beryllium significantly outperformed state-of-the-art code search tools in both C and Java. Specifically, for 98.5%, 93.8%, and 66.2% of queries, we found the algorithm implementations in the top 25, top 10, and top 1 ranked results, respectively. Given 87 algorithm queries, we found implementations for 74 algorithms in GitHub projects where we did not know the algorithms beforehand.
2023-05-24 01:19:05.000000000
15,651
Data science projects often involve various machine learning (ML) methods that depend on data, code, and models. One of the key activities in these projects is the selection of a model or algorithm that is appropriate for the data analysis at hand. ML model selection depends on several factors, which include data-related attributes such as sample size, functional requirements such as the prediction algorithm type, and non-functional requirements such as performance and bias. However, the factors that influence such selection are often not well understood and explicitly represented. This paper describes ongoing work on extending an adaptive variability-aware model selection method with bias detection in ML projects. The method involves: (i) modeling the variability of the factors that affect model selection using feature models based on heuristics proposed in the literature; (ii) instantiating our variability model with added features related to bias (e.g., bias-related metrics); and (iii) conducting experiments in a specific case study, based on a heart failure prediction project, to illustrate our approach. The proposed approach aims to advance the state of the art by making explicit the factors that influence model selection, particularly those related to bias, as well as their interactions. The provided representations can transform model selection in ML projects into a non-ad-hoc, adaptive, and explainable process.
2023-11-22 13:20:25.000000000
13,892
As a new programming paradigm, deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their applications in safety-critical domains. While there are techniques for verifying DNNs with formal guarantees, they are limited in scalability and accuracy. In this paper, we present a novel abstraction-refinement approach for scalable and exact DNN verification. Specifically, we propose a novel abstraction to break down the size of DNNs by over-approximation. The result of verifying the abstract DNN is always conclusive if no spurious counterexample is reported. To eliminate spurious counterexamples introduced by abstraction, we propose a novel counterexample-guided refinement that refines the abstract DNN to exclude a given spurious counterexample while still over-approximating the original one. Our approach is orthogonal to and can be integrated with many existing verification techniques. For demonstration, we implement our approach using two promising and exact tools Marabou and Planet as the underlying verification engines, and evaluate on widely-used benchmarks ACAS Xu, MNIST and CIFAR-10. The results show that our approach can boost their performance by solving more problems and reducing up to 86.3% and 78.0% verification time, respectively. Compared to the most relevant abstraction-refinement approach, our approach is 11.6-26.6 times faster.
2022-06-30 15:00:03.000000000
15,708
A hyper-heuristic is a methodology for the adaptive hybridization of meta-heuristic algorithms to derive a general algorithm for solving optimization problems. This work focuses on the selection type of hyper-heuristic, called the Exponential Monte Carlo with Counter (EMCQ). Current implementations rely on memory-less selection, which can be counterproductive, as the selected search operator may not (historically) be the best-performing operator for the current search instance. Addressing this issue, we propose to integrate memory into EMCQ for combinatorial t-wise test suite generation using reinforcement learning based on the Q-learning mechanism, called Q-EMCQ. The limited application of combinatorial test generation to industrial programs can impact the use of techniques such as Q-EMCQ. Thus, there is a need to evaluate this kind of approach against relevant industrial software, with the purpose of showing the degree of interaction required to cover the code as well as finding faults. We applied Q-EMCQ to 37 real-world industrial programs written in the Function Block Diagram (FBD) language, which is used for developing a train control management system at Bombardier Transportation Sweden AB. The results of this study show that Q-EMCQ is an efficient technique for test case generation. Additionally, unlike t-wise test suite generation, which deals with a minimization problem, we have also subjected Q-EMCQ to a maximization problem involving general module clustering to demonstrate the effectiveness of our approach.
2020-02-15 15:06:06.000000000
1,725
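The Q-EMCQ abstract above hinges on remembering how well each search operator has performed and biasing selection toward historically good operators. The sketch below shows that Q-learning idea in its simplest (single-state, bandit-like) form; the operator names and the reward signal are placeholders, not the paper's actual t-wise test generation setup.

# Illustrative sketch of Q-learning-based operator selection (not Q-EMCQ itself).
import random

operators = ["crossover", "mutation", "local_search"]
q = {op: 0.0 for op in operators}          # one state, one Q-value per operator
alpha, epsilon = 0.1, 0.2                  # learning rate, exploration rate

def apply_operator(op):
    # Placeholder: in Q-EMCQ this would apply the operator to the current test
    # suite and return the fitness improvement it achieved.
    return random.uniform(0, 1) * (1.5 if op == "local_search" else 1.0)

for step in range(1000):
    if random.random() < epsilon:
        op = random.choice(operators)      # explore
    else:
        op = max(q, key=q.get)             # exploit the best-known operator
    reward = apply_operator(op)
    q[op] += alpha * (reward - q[op])      # incremental Q-update

print(q)  # local_search should accumulate the highest estimated value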
Large Language Models (LLMs) have revolutionized natural language processing by generating human-like text and images from textual input. However, their potential to generate complex 2D/3D visualizations has been largely unexplored. We report initial experiments showing that LLMs can generate 2D/3D visualizations that may be used for legal visualization. Further research is needed for complex 2D visualizations and 3D scenes. LLMs can become a powerful tool for many industries and applications, generating complex visualizations with minimal training.
2023-05-04 14:49:44.000000000
10,757
The Essence Framework (EF) aims at addressing the core problems of software engineering (SE) and its practices.
2018-12-03 06:16:26.000000000
7,820
Several technologies are emerging that provide new ways to capture, store, present and use knowledge. This book is the first to provide a comprehensive introduction to five of the most important of these technologies: Knowledge Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and Semantic Webs. For each of these, answers are given to a number of key questions (What is it? How does it operate? How is a system developed? What can it be used for? What tools are available? What are the main issues?). The book is aimed at students, researchers and practitioners interested in Knowledge Management, Artificial Intelligence, Design Engineering and Web Technologies. During the 1990s, Nick worked at the University of Nottingham on the application of AI techniques to knowledge management and on various knowledge acquisition projects to develop expert systems for military applications. In 1999, he joined Epistemics where he worked on numerous knowledge projects and helped establish knowledge management programmes at large organisations in the engineering, technology and legal sectors. He is author of the book "Knowledge Acquisition in Practice", which describes a step-by-step procedure for acquiring and implementing expertise. He maintains strong links with leading research organisations working on knowledge technologies, such as knowledge-based engineering, ontologies and semantic technologies.
2008-02-26 11:26:09.000000000
13,121
The widespread use of GitHub among software developers as a communal platform for coordinating software development has led to an abundant supply of publicly accessible data. Ever since the inception of Bitcoin, blockchain teams have incorporated the concept of open source code as a fundamental principle, thus making the majority of blockchain-based projects' code and version control data available for analysis. We define health in open source software projects to be a combination of the concepts of sustainability, robustness, and niche occupation. Sustainability is further divided into interest and engagement. This work uses exploratory factor analysis to identify latent constructs that are representative of general public interest or popularity in software, and software robustness within open source blockchain projects. We find that interest is a combination of stars, forks, and text mentions in the GitHub repository, while a second factor for robustness is composed of a criticality score, time since last updated, numerical rank, and geographic distribution. Cross validation of the dataset is carried out with good support for the model. A structural model of software health is proposed such that general interest positively influences developer engagement, which, in turn, positively predicts software robustness. The implications of structural equation modelling in the context of software engineering and next steps are discussed.
2023-10-30 10:14:27.000000000
8,430
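The preceding abstract on open-source blockchain project health uses exploratory factor analysis to recover latent "interest" and "robustness" constructs from repository metrics. Below is a hedged sketch of that analysis step on synthetic data; the metric names, loadings, and two-factor setup are assumptions for illustration, not the study's dataset or results.

# Hedged sketch of exploratory factor analysis on synthetic repository metrics.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
interest = rng.normal(size=n)               # latent "interest" construct
robustness = rng.normal(size=n)             # latent "robustness" construct
data = np.column_stack([
    3.0 * interest + rng.normal(scale=0.5, size=n),     # stars
    2.5 * interest + rng.normal(scale=0.5, size=n),     # forks
    2.0 * interest + rng.normal(scale=0.5, size=n),     # text mentions
    2.8 * robustness + rng.normal(scale=0.5, size=n),   # criticality score
    2.2 * robustness + rng.normal(scale=0.5, size=n),   # recency of updates
])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(StandardScaler().fit_transform(data))
print(np.round(fa.components_, 2))  # loadings: rows = factors, columns = metrics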
This research assesses the aspects of software organizations' DevOps environments and identifies the factors contributing to these environments' success. DevOps is a recent concept, and many organizations are moving from old-style software development methods to agile approaches such as DevOps. However, there is no comprehensive information on what factors impact the success of the DevOps environment once organizations adopt it. This research focused on addressing this gap through a systematic literature review. The systematic review consisted of 33 articles from five selected search systems and databases from 2015 to 2021. Based on the included articles, 15 factors were identified and grouped into four categories: Collaborative Culture, Organizational Aspects, Tooling and Technology, and Continuous Practices. In addition, this research proposes a DevOps environment success factors model to potentially contribute to DevOps research and practice. Recommendations are made for additional research on the effectiveness of the proposed model and its success factors.
2022-11-04 05:25:54.000000000
6,499
This paper presents a data-driven framework to improve the trustworthiness of US tax preparation software systems. Given the legal implications of bugs in such software for its users, ensuring the compliance and trustworthiness of tax preparation software is of paramount importance. The key barriers to developing debugging aids for tax preparation systems are the unavailability of explicit specifications and the difficulty of obtaining oracles. We posit that, since US tax law adheres to the legal doctrine of precedent, the specifications about the outcome of tax preparation software for an individual taxpayer must be viewed in comparison with individuals that are deemed similar. Consequently, these specifications are naturally available as properties of the software requiring that similar inputs produce similar outputs. Inspired by the metamorphic testing paradigm, we dub these relations metamorphic relations. In collaboration with legal and tax experts, we explicated metamorphic relations for a set of challenging properties from various US Internal Revenue Service (IRS) publications including Publication 596 (Earned Income Tax Credit), Schedule 8812 (Qualifying Children/Other Dependents), and Form 8863 (Education Credits). We focus on an open-source tax preparation software for our case study and develop a randomized test-case generation strategy to systematically validate the correctness of tax preparation software guided by metamorphic relations. We further aid this test-case generation by visually explaining the behavior of the software on suspicious instances using easy-to-interpret decision-tree models. Our tool uncovered several accountability bugs with varying severity, ranging from non-robust behavior in corner cases (unreliable behavior when tax returns are close to zero) to missing eligibility conditions in updated versions of the software.
2022-05-09 14:56:21.000000000
8,026
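As a minimal illustration of the metamorphic relation idea in the tax-software abstract above ("similar inputs should produce similar outputs"), the sketch below randomly samples pairs of near-identical taxpayers and checks that the computed credit stays close. compute_eitc() is a hypothetical stand-in for the system under test, and its constants and the tolerance are invented for the demo, not taken from the paper or the IRS publications.

# Minimal metamorphic-relation sketch with a placeholder system under test.
import random

def compute_eitc(earned_income, num_children):
    # Placeholder for the tax preparation software under test.
    base = {0: 600, 1: 3995, 2: 6604}.get(min(num_children, 2), 7430)
    return max(0.0, min(base, 0.34 * earned_income))

def test_similar_incomes_yield_similar_credits():
    for _ in range(1000):
        income = random.uniform(0, 25000)
        delta = random.uniform(-1.0, 1.0)           # a "similar" taxpayer
        a = compute_eitc(income, 2)
        b = compute_eitc(income + delta, 2)
        assert abs(a - b) <= 5.0, f"non-robust behaviour near income={income:.2f}"

test_similar_incomes_yield_similar_credits()
print("metamorphic relation held on all sampled pairs")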
Fine-tuning pre-trained language models (LMs) is essential for enhancing their capabilities. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction tuning) or with numerical rewards that gauge the output quality (e.g., RLHF). We explore LMs' potential to learn from textual interactions (LETI) that not only check their correctness with binary labels but also pinpoint and explain errors in their outputs through textual feedback. Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from code execution using a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback. Prepended to this fine-tuning text, a binary reward token is used to differentiate correct and buggy solutions. LETI requires no ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of LMs on a code generation dataset MBPP, but also generalizes to other datasets. Trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps. LETI is equally applicable in natural language tasks when they can be formulated as code generation, which we empirically verified on event argument extraction.
2023-05-16 16:11:04.000000000
3,581
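The LETI abstract above describes fine-tuning on a concatenation of instruction, generated program, and textual feedback from code execution, prefixed by a binary reward token. The sketch below shows how such a training string could be assembled by running a generated program and capturing the interpreter's error output; the token names and formatting are assumptions, not the paper's exact scheme.

# Hedged sketch of assembling LETI-style training text from execution feedback.
import subprocess
import sys

instruction = "Write a function add(a, b) that returns the sum of a and b."
generated_program = "def add(a, b):\n    return a - b\n\nassert add(2, 3) == 5\n"

run = subprocess.run([sys.executable, "-c", generated_program],
                     capture_output=True, text=True, timeout=10)
feedback = run.stderr.strip() or "All assertions passed."
reward_token = "<|good|>" if run.returncode == 0 else "<|bad|>"

training_text = "\n".join([reward_token, instruction, generated_program, feedback])
print(training_text)  # would be fed to the LM objective during fine-tuning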
The ubiquity of smartphones, and their very broad capabilities and usage, make the security of these devices tremendously important. Unfortunately, despite all progress in security and privacy mechanisms, vulnerabilities continue to proliferate. Research has shown that many vulnerabilities are due to insecure programming practices. However, each study has often dealt with a specific issue, making the results less actionable for practitioners. To promote secure programming practices, we have reviewed related research, and identified avoidable vulnerabilities in Android-run devices and the "security code smells" that indicate their presence. In particular, we explain the vulnerabilities, their corresponding smells, and we discuss how they could be eliminated or mitigated during development. Moreover, we develop a lightweight static analysis tool and discuss the extent to which it successfully detects several vulnerabilities in about 46,000 apps hosted by the official Android market.
2020-05-30 22:59:02.000000000
9,047
Kotlin is a relatively new programming language from JetBrains: its development started in 2010 with release 1.0 done in early 2016. The Kotlin compiler, while slowly and steadily becoming more and more mature, still crashes from time to time on the more tricky input programs, not least because of the complexity of its features and their interactions. This makes it a great target for fuzzing, even the basic forms of which can find a significant number of Kotlin compiler crashes. There is a problem with fuzzing, however, closely related to the cause of the crashes: generating a random, non-trivial and semantically valid Kotlin program is hard. In this paper, we talk about type-centric compiler fuzzing in the form of type-centric enumeration, an approach inspired by skeletal program enumeration and based on a combination of generative and mutation-based fuzzing, which solves this problem by focusing on program types. After creating the skeleton program, we fill the typed holes with fragments of suitable type, created via generation and enhanced by semantic-aware mutation. We implemented this approach in our Kotlin compiler fuzzing framework called Backend Bug Finder (BBF) and did an extensive evaluation, not only testing the real-world feasibility of our approach, but also comparing it to other compiler fuzzing techniques. The results show our approach to be significantly better compared to other fuzzing approaches at generating semantically valid Kotlin programs, while creating more interesting crash-inducing inputs at the same time. We managed to find more than 50 previously unknown compiler crashes, of which 18 were considered important after their triage by the compiler team.
2020-12-10 09:30:31.000000000
6,350
Deep learning (DL) defines a data-driven programming paradigm that automatically composes the system decision logic from the training data. In company with the data explosion and hardware acceleration during the past decade, DL achieves tremendous success in many cutting-edge applications. However, even the state-of-the-art DL systems still suffer from quality and reliability issues. It was only until recently that some preliminary progress was made in testing feed-forward DL systems. In contrast to feed-forward DL systems, recurrent neural networks (RNN) follow a very different architectural design, implementing temporal behaviors and memory with loops and internal states. Such stateful nature of RNN contributes to its success in handling sequential inputs such as audio, natural languages and video processing, but also poses new challenges for quality assurance. In this paper, we initiate the very first step towards testing RNN-based stateful DL systems. We model RNN as an abstract state transition system, based on which we define a set of test coverage criteria specialized for stateful DL systems. Moreover, we propose an automated testing framework, DeepCruiser, which systematically generates tests in large scale to uncover defects of stateful DL systems with coverage guidance. Our in-depth evaluation on a state-of-the-art speech-to-text DL system demonstrates the effectiveness of our technique in improving quality and reliability of stateful DL systems.
2018-12-12 11:03:27.000000000
8,148
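To make the "RNN as an abstract state transition system" idea from the DeepCruiser abstract above more concrete, the sketch below discretises hidden-state vectors into abstract states and counts how many states and transitions a set of test inputs visits. The RNN here is a random stand-in and the binning is illustrative; DeepCruiser defines several specialised coverage criteria beyond this.

# Illustrative sketch of abstract-state and transition coverage for an RNN.
import numpy as np

rng = np.random.default_rng(0)

def rnn_hidden_states(sequence):
    # Placeholder: would be the hidden states produced by the real RNN.
    h = np.zeros(8)
    states = []
    for x in sequence:
        h = np.tanh(0.5 * h + 0.5 * x)
        states.append(h.copy())
    return states

def abstract_state(h, bins=3):
    # Map each hidden dimension into one of `bins` intervals over [-1, 1].
    return tuple(np.digitize(h, np.linspace(-1, 1, bins + 1)[1:-1]))

visited_states, visited_transitions = set(), set()
for _ in range(100):                       # 100 test inputs
    seq = rng.normal(size=(20, 8))
    trace = [abstract_state(h) for h in rnn_hidden_states(seq)]
    visited_states.update(trace)
    visited_transitions.update(zip(trace, trace[1:]))

print(len(visited_states), "abstract states covered,",
      len(visited_transitions), "transitions covered")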
Currently, most machine learning models are trained by centralized teams and are rarely updated. In contrast, open-source software development involves the iterative development of a shared artifact through distributed collaboration using a version control system. In the interest of enabling collaborative and continual improvement of machine learning models, we introduce Git-Theta, a version control system for machine learning models. Git-Theta is an extension to Git, the most widely used version control software, that allows fine-grained tracking of changes to model parameters alongside code and other artifacts. Unlike existing version control systems that treat a model checkpoint as a blob of data, Git-Theta leverages the structure of checkpoints to support communication-efficient updates, automatic model merges, and meaningful reporting about the difference between two versions of a model. In addition, Git-Theta includes a plug-in system that enables users to easily add support for new functionality. In this paper, we introduce Git-Theta's design and features and include an example use-case of Git-Theta where a pre-trained model is continually adapted and modified. We publicly release Git-Theta in hopes of kickstarting a new era of collaborative model development.
2023-06-06 09:19:02.000000000
13,271
Process models constitute crucial artifacts in modern information systems and, hence, the proper comprehension of these models is of utmost importance in the utilization of such systems. Generally, process models are considered from two different perspectives: process modelers and readers. Both perspectives share similarities and differences in the comprehension of process models (e.g., diverse experiences when working with process models). The literature proposed many rules and guidelines to ensure a proper comprehension of process models for both perspectives. As a novel contribution in this context, this paper introduces the Process Model Comprehension Framework (PMCF) as a first step towards the measurement and quantification of the perspectives of process modelers and readers as well as the interaction of both regarding the comprehension of process models. Therefore, the PMCF describes an Evaluation Theory Tree based on the Communication Theory as well as the Conceptual Modeling Quality Framework and considers a total of 96 quality metrics in order to quantify process model comprehension. Furthermore, the PMCF was evaluated in a survey with 131 participants and has been implemented as well as applied successfully in a practical case study including 33 participants. To conclude, the PMCF allows for the identification of pitfalls and provides related information about how to assist process modelers as well as readers in order to foster and enable a proper comprehension of process models.
2021-06-22 16:19:02.000000000
15,055
Implementing even a conceptually simple web application requires an inordinate amount of time. FORWARD addresses three problems that reduce developer productivity: (a) Impedance mismatch across the multiple languages used at different tiers of the application architecture. (b) Distributed data access across the multiple data sources of the application (SQL database, user input of the browser page, session data in the application server, etc). (c) Asynchronous, incremental modification of the pages, as performed by Ajax actions. FORWARD belongs to a novel family of web application frameworks that attack impedance mismatch by offering a single unifying language. FORWARD's language is SQL++, a minimally extended SQL. FORWARD's architecture is based on two novel cornerstones: (a) A Unified Application State (UAS), which is a virtual database over the multiple data sources. The UAS is accessed via distributed SQL++ queries, therefore resolving the distributed data access problem. (b) Declarative page specifications, which treat the data displayed by pages as rendered SQL++ page queries. The resulting pages are automatically incrementally modified by FORWARD. User input on the page becomes part of the UAS. We show that SQL++ captures the semi-structured nature of web pages and subsumes the data models of two important data sources of the UAS: SQL databases and JavaScript components. We show that simple markup is sufficient for creating Ajax displays and for modeling user input on the page as UAS data sources. Finally, we discuss the page specification syntax and semantics that are needed in order to avoid race conditions and conflicts between the user input and the automated Ajax page modifications. FORWARD has been used in the development of eight commercial and academic applications. An alpha-release web-based IDE (itself built in FORWARD) enables development in the cloud.
2013-07-31 03:28:40.000000000
6,653
According to Wikipedia, the Mining Software Repositories (MSR) field analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems, issue tracking systems, etc., to uncover interesting and actionable information about software systems, projects and software engineering. The MSR field has received a great deal of attention and now has its own research conference: [LINK]. However, performing MSR studies is still a technical challenge. Indeed, data sources (such as version control systems or bug tracking systems) are highly heterogeneous. Moreover, performing a study on many data sources is very expensive in terms of execution time. Surprisingly, there are not many tools able to help researchers in their MSR quests. This is why we created the Harmony platform, as a means to assist researchers in performing MSR studies.
2013-08-29 10:04:30.000000000
3,352
Large language models (LLMs) trained on datasets of publicly available source code have established a new state of the art in code completion. However, these models are mostly unaware of the code that already exists within a specific project, preventing the models from making good use of existing APIs. Instead, LLMs often invent, or "hallucinate", non-existent APIs or produce variants of already existing code. Although the API information is available to IDEs, the input size limit of LLMs prevents code completion techniques from including all relevant context in the prompt. This paper presents De-Hallucinator, an LLM-based code completion technique that grounds the predictions of a model through a novel combination of retrieving suitable API references and iteratively querying the model with increasingly suitable context information in the prompt. The approach exploits the observation that LLMs often predict code that resembles the desired completion, but that fails to correctly refer to already existing APIs. De-Hallucinator automatically identifies project-specific API references related to the code prefix and to the model's initial predictions and adds these references into the prompt. Our evaluation applies the approach to the task of predicting API usages in open-source Python projects. We show that De-Hallucinator consistently improves the predicted code across four state-of-the-art LLMs compared to querying the model only with the code before the cursor. In particular, the approach improves the edit distance of the predicted code by 23-51% and the recall of correctly predicted API usages by 24-61% relative to the baseline.
2024-01-02 07:22:36.000000000
62
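The De-Hallucinator abstract above describes an iterative retrieve-and-re-prompt loop around a code-completion model. The sketch below illustrates that loop shape only: llm_complete(), the term-overlap retrieval, and the project API index are hypothetical stand-ins, not De-Hallucinator's actual retrieval, prompt format, or models.

# Hedged sketch of a retrieval-augmented, iterative re-prompting loop.
def llm_complete(prompt: str) -> str:
    # Placeholder for a call to a code-completion LLM.
    return "client.fetch_records(limit=10)"

def retrieve_api_references(query_terms, project_api_index):
    # Rank project APIs by naive term overlap with the prefix and the draft.
    scored = [(sum(t in doc for t in query_terms), doc) for doc in project_api_index]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0][:5]

project_api_index = [
    "def fetch_rows(self, limit: int) -> list[Row]  # Database client",
    "def close(self) -> None  # Database client",
    "def render_page(template: str, **ctx) -> str",
]

code_prefix = "rows = client."
prompt = code_prefix
for _ in range(2):  # iteratively enrich the prompt with project-specific APIs
    draft = llm_complete(prompt)
    terms = set(draft.replace("(", " ").replace(".", " ").split()) | set(code_prefix.split())
    references = retrieve_api_references(terms, project_api_index)
    prompt = "\n".join(["# Relevant project APIs:"] +
                       [f"# {r}" for r in references] +
                       [code_prefix])

print(prompt)  # final prompt grounds the model in existing project APIs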
Virtualization is one of the biggest buzzwords of the technology industry right at this moment. The fast growth in storage capacity and processing power in enterprise installations coupled with the need for high availability, requires Storage Area Network (SAN) architecture to provide seamless addition of storage and performance elements without downtime. The usual goal of virtualization is to centralize administrative tasks while improving scalability and work loads. This paper, describing about new proposed method for virtualization, which would be overcome limitations of existed methods for storage virtualization
2013-10-06 14:27:59.000000000
12,361
Machine Vision Components (MVC) are becoming safety-critical. Assuring their quality, including safety, is essential for their successful deployment. Assurance relies on the availability of precisely specified and, ideally, machine-verifiable requirements. MVCs with state-of-the-art performance rely on machine learning (ML) and training data but largely lack such requirements. In this paper, we address the need for defining machine-verifiable reliability requirements for MVCs against transformations that simulate the full range of realistic and safety-critical changes in the environment. Using human performance as a baseline, we define reliability requirements as: 'if the changes in an image do not affect a human's decision, neither should they affect the MVC's.' To this end, we provide: (1) a class of safety-related image transformations; (2) reliability requirement classes to specify correctness-preservation and prediction-preservation for MVCs; (3) a method to instantiate machine-verifiable requirements from these requirements classes using human performance experiment data; (4) human performance experiment data for image recognition involving eight commonly used transformations, from about 2000 human participants; and (5) a method for automatically checking whether an MVC satisfies our requirements. Further, we show that our reliability requirements are feasible and reusable by evaluating our methods on 13 state-of-the-art pre-trained image classification models. Finally, we demonstrate that our approach detects reliability gaps in MVCs that other existing methods are unable to detect.
2022-02-07 06:48:30.000000000
15,480
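The reliability-requirements abstract above can be read as a checkable property: if a safety-related image transformation would not change a human's decision, it should not change the machine vision component's prediction either. The sketch below checks such a prediction-preservation property under a mild brightness change; classify() is a hypothetical stand-in for the MVC, and the brightness range is illustrative rather than the paper's human-experiment-derived bound.

# Illustrative prediction-preservation check under an image transformation.
from PIL import Image, ImageEnhance
import numpy as np

def classify(image: Image.Image) -> str:
    # Placeholder MVC: a trivial mean-brightness rule stands in for a classifier.
    return "bright" if np.asarray(image).mean() > 100 else "dark"

rng = np.random.default_rng(0)
violations = 0
for _ in range(50):
    pixels = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    original = Image.fromarray(pixels)
    factor = rng.uniform(0.9, 1.1)                     # mild brightness change
    transformed = ImageEnhance.Brightness(original).enhance(factor)
    if classify(original) != classify(transformed):
        violations += 1

print(f"prediction-preservation violations: {violations}/50")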
Runtime enforcement is an increasingly popular and effective dynamic validation technique aiming to ensure the correct runtime behavior (w.r.t. a formal specification) of systems using a so-called enforcement monitor. In this paper we introduce runtime enforcement of specifications on component-based systems (CBS) modeled in the BIP (Behavior, Interaction and Priority) framework. BIP is a powerful and expressive component-based framework for formal construction of heterogeneous systems. However, because of BIP expressiveness, it remains difficult to enforce at design-time complex behavioral properties. First we propose a theoretical runtime enforcement framework for CBS where we delineate a hierarchy of sets of enforceable properties (i.e., properties that can be enforced) according to the number of observational steps a system is allowed to deviate from the property (i.e., the notion of k-step enforceability). To ensure the observational equivalence between the correct executions of the initial system and the monitored system, we show that i) only stutter-invariant properties should be enforced on CBS with our monitors, ii) safety properties are 1-step enforceable. Given an abstract enforcement monitor (as a finite-state machine) for some 1-step enforceable specification, we formally instrument (at relevant locations) a given BIP system to integrate the monitor. At runtime, the monitor observes and automatically avoids any error in the behavior of the system w.r.t. the specification. Our approach is fully implemented in an available tool that we used to i) avoid deadlock occurrences on a dining philosophers benchmark, and ii) ensure the correct placement of robots on a map.
2014-06-18 12:16:43.000000000
8,994
Context: DevOps and microservices are acknowledged to be important new paradigms to tackle contemporary software demands and provide capabilities for rapid and reliable software development. Industrial reports show that they are quickly adopted together in massive software companies. However, because of the technical and organizational requirements, many difficulties against efficient implementation of the both emerge in real software teams. Objectives: This study aims to discover the organization, benefits and issues of software teams using DevOps & microservices from an immersive perspective. Method: An ethnographic study was carried out in three companies with different business, size, products, customers, and degree of globalization. All the three companies claimed their adoption of DevOps and microservices. Seven months (cumulative) of participant observations and nine interviews with practitioners were conducted to collect the data of software teams related to DevOps and microservices. A cross-company empirical investigation using grounded theory was done by analyzing the archive data. Results: The adoption of DevOps and microservices brings benefits to rapid delivery, ability improvements and burden reduction, whilst the high cost and lack of practical guidance were emerged. Moreover, our observations and interviews reflect that in software teams, the relationship between DevOps and microservices is not significant, which differs from the relationship described in the previous studies. Four lessons for practitioners and four implications for researchers were discussed based on our findings. Conclusion: Our findings contribute to the understanding of the organization, benefits and issues of adopting DevOps and microservices from an immersive perspective of software teams.
2022-04-26 08:55:10.000000000
14,413
Architectural reconstruction is a reverse engineering activity aiming at recovering the missing decisions on a system. It can help identify the components, within a legacy software application, according to the application's architectural pattern. It is useful to identify architectural technical debt. We are interested in identifying layers within a layered application since the layered pattern is one of the most used patterns to structure large systems. Earlier component reconstruction work focusing on that pattern relied on generic component identification criteria, such as cohesion and coupling. Recent work has identified architectural-pattern specific criteria to identify components within that pattern. However, the architectural-pattern specific criteria that the layered pattern embodies are loosely defined. In this paper, we present a first systematic literature review (SLR) of the literature aiming at inventorying such criteria for layers within legacy applications and grouping them under four principles that embody the fundamental design principles underlying the architectural pattern. We identify six such criteria in the form of design rules. We also perform a second systematic literature review to synthesize the literature on software architecture reconstruction in the light of these criteria. We report those principles, the rules they encompass, their representation, and their usage in software architecture reconstruction.
2021-12-02 15:19:25.000000000
11,863
Livestock producers often need help in standardising (i.e., converting and validating) their livestock event data. This article introduces a novel solution, LEI2JSON (Livestock Event Information To JSON). The tool is an add-on for Google Sheets, adhering to the livestock event information (LEI) schema. The core objective of LEI2JSON is to provide livestock producers with an efficient mechanism to standardise their data, leading to substantial savings in time and resources. This is achieved by building the spreadsheet template with the appropriate column headers, notes, and validation rules, converting the spreadsheet data into JSON format, and validating the output against the schema. LEI2JSON facilitates the seamless storage of livestock event information locally or on Google Drive in JSON. Additionally, we have conducted an extensive experimental evaluation to assess the effectiveness of the tool.
2023-10-25 08:44:52.000000000
12,037
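The LEI2JSON abstract above centres on converting tabular livestock event rows into JSON and validating them against a schema. The sketch below shows that convert-and-validate pattern in Python with the jsonschema library; the schema, field names, and sample rows are invented for illustration and are not the actual LEI schema or the Google Sheets add-on.

# Hedged sketch of converting tabular rows to JSON and validating against a schema.
import csv
import io
import json
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "required": ["animalId", "eventType", "eventDateTime"],
    "properties": {
        "animalId": {"type": "string"},
        "eventType": {"type": "string", "enum": ["weighing", "treatment", "movement"]},
        "eventDateTime": {"type": "string", "format": "date-time"},
        "weightKg": {"type": "number", "minimum": 0},
    },
}

spreadsheet = io.StringIO(
    "animalId,eventType,eventDateTime,weightKg\n"
    "AU123,weighing,2023-10-25T08:44:52Z,412.5\n"
    "AU124,vaccination,2023-10-25T09:10:00Z,\n"
)

records = []
for row in csv.DictReader(spreadsheet):
    record = {k: v for k, v in row.items() if v != ""}
    if "weightKg" in record:
        record["weightKg"] = float(record["weightKg"])
    try:
        validate(instance=record, schema=schema)
        records.append(record)
    except ValidationError as err:
        print(f"rejected {record.get('animalId')}: {err.message}")

print(json.dumps(records, indent=2))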
In the literature, there is a rather clear segregation between manually written tests by developers and automatically generated ones. In this paper, we explore a third solution: to automatically improve existing test cases written by developers. We present the concept, design, and implementation of a system called DSpot, that takes developer-written test cases as input (JUnit tests in Java) and synthesizes improved versions of them as output. Those test improvements are given back to developers as patches or pull requests, that can be directly integrated in the main branch of the test code base. We have evaluated DSpot in a deep, systematic manner over 40 real-world unit test classes from 10 notable and open-source software projects. We have amplified all test methods from those 40 unit test classes. In 26/40 cases, DSpot is able to automatically improve the test under study, by triggering new behaviors and adding new valuable assertions. Next, for ten projects under consideration, we have proposed a test improvement automatically synthesized by DSpot to the lead developers. In total, 13/19 proposed test improvements were accepted by the developers and merged into the main code base. This shows that DSpot is capable of automatically improving unit-tests in real-world, large-scale Java software.
2018-11-07 09:52:13.000000000
3,008
This paper presents a comprehensive classification of identity management approaches. The classification makes use of three axes: topology, type of user, and type of environment. The analysis of existing approaches using the resulting identity management cube (IMC) highlights the trade-off between user control and trust in attributes. A comparative analysis of IMC and established models identifies missing links between the approaches. The IMC is extended by a morphology of identity management, describing characteristics of cooperation. The morphology is then mapped to the life cycle of users and identity management in a further step. These classifications are practically underlined with current approaches. Both methods combined provide a comprehensive characterization of identity management approaches. The methods help to choose suited approaches and implement needed tools.
2022-12-25 14:49:37.000000000
3,265
In recent decades, there has been a major shift towards improved digital access to scholarly works. However, even now that these works are available in digital form, they remain document-based, making it difficult to communicate the knowledge they contain. The next logical step is to extend these works with more flexible, fine-grained, semantic, and context-sensitive representations of scholarly knowledge. The Open Research Knowledge Graph (ORKG) is a platform that structures and interlinks scholarly knowledge, relying on crowdsourced contributions from researchers (as a crowd) to acquire, curate, publish, and process this knowledge. In this experience report, we consider the ORKG in the context of Crowd-based Requirements Engineering (CrowdRE) from two perspectives: (1) As CrowdRE researchers, we investigate how the ORKG practically applies CrowdRE techniques to involve scholars in its development to make it align better with their academic work. We determined that the ORKG readily provides social and financial incentives, feedback elicitation channels, and support for context and usage monitoring, but that there is improvement potential regarding automated user feedback analyses and a holistic CrowdRE approach. (2) As crowd members, we explore how the ORKG can be used to communicate scholarly knowledge about CrowdRE research. For this purpose, we curated qualitative and quantitative scholarly knowledge in the ORKG based on papers contained in two previously published systematic literature reviews (SLRs) on CrowdRE. This knowledge can be explored and compared interactively, and with more data than what the SLRs originally contained. Therefore, the ORKG improves access and communication of the scholarly knowledge about CrowdRE research. For both perspectives, we found the ORKG to be a useful multi-tool for CrowdRE research.
2021-08-10 10:52:49.000000000
1,247
Third-party libraries (TPLs) are frequently reused in software to reduce development cost and the time to market. However, external library dependencies may introduce vulnerabilities into host applications. The issue of library dependency has received considerable critical attention. Many package managers, such as Maven, Pip, and NPM, have been proposed to manage TPLs. Moreover, a significant amount of effort has been put into studying dependencies in language ecosystems like Java, Python, and JavaScript, but not C/C++. Due to the lack of a unified package manager for C/C++, existing research offers only a limited understanding of TPL dependencies in the C/C++ ecosystem, especially at large scale. Towards understanding TPL dependencies in the C/C++ ecosystem, we collect existing TPL databases, package management tools, and dependency detection tools, summarize the dependency patterns of C/C++ projects, and construct a comprehensive and precise C/C++ dependency detector. Using our detector, we extract dependencies from a large-scale database containing 24K C/C++ repositories from GitHub. Based on the extracted dependencies, we provide the results and findings of an empirical study, which aims at understanding the characteristics of the TPL dependencies. We further discuss the implications of managing dependencies for C/C++ and the future research directions for software engineering researchers and developers in the fields of library development, software composition analysis, and C/C++ package managers.
2022-09-04 08:35:16.000000000
6,140
Business analysts and domain experts are often sketching the behaviors of a software system using high-level models that are technology- and platform-independent. The developers will refine and enrich these high-level models with technical details. As a consequence, the refined models can deviate from the original models over time, especially when the two kinds of models evolve independently. In this context, we focus on behavior models; that is, we aim to ensure that the refined, low-level behavior models conform to the corresponding high-level behavior models. Based on existing formal verification techniques, we propose containment checking as a means to assess whether the system's behaviors described by the low-level models satisfy what has been specified in the high-level counterparts. One of the major obstacles is how to lessen the burden of creating formal specifications of the behavior models as well as consistency constraints, which is a tedious and error-prone task when done manually. Our approach presented in this paper aims at alleviating the aforementioned challenges by considering the behavior models as verification inputs and devising automated mappings of behavior models onto formal properties and descriptions that can be directly used by model checkers. We discuss various challenges in our approach and show the applicability of our approach in illustrative scenarios.
2014-04-03 10:44:09.000000000
3,629
Chatbots are software agents that are able to interact with humans in natural language. Their intuitive interaction paradigm is expected to significantly reshape the software landscape of tomorrow, while already today chatbots are invading a multitude of scenarios and contexts. This article takes a developer's perspective, identifies a set of architectural patterns that capture different chatbot integration scenarios, and reviews state-of-the-art development aids.
2020-09-05 03:12:16.000000000
2,507
Behavioral testing offers a crucial means of diagnosing linguistic errors and assessing capabilities of NLP models. However, applying behavioral testing to machine translation (MT) systems is challenging as it generally requires human efforts to craft references for evaluating the translation quality of such systems on newly generated test cases. Existing works in behavioral testing of MT systems circumvent this by evaluating translation quality without references, but this restricts diagnosis to specific types of errors, such as incorrect translation of single numeric or currency words. In order to diagnose general errors, this paper proposes a new Bilingual Translation Pair Generation based Behavior Testing (BTPGBT) framework for conducting behavioral testing of MT systems. The core idea of BTPGBT is to employ a novel bilingual translation pair generation (BTPG) approach that automates the construction of high-quality test cases and their pseudoreferences. Experimental results on various MT systems demonstrate that BTPGBT could provide comprehensive and accurate behavioral testing results for general error diagnosis, which further leads to several insightful findings. Our code and data are available at https://github.com/wujunjie1998/BTPGBT.
2023-10-18 20:12:44.000000000
7,876
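The following sketch illustrates the reference-based behavioral check that the abstract above relies on: each generated (source, pseudo-reference) pair is translated by the system under test and flagged when translation quality drops. The system_translate callable and the BLEU threshold are placeholders, and sacrebleu is used only as one possible quality metric; this is not the BTPGBT implementation.

```python
# pip install sacrebleu
import sacrebleu

def behavior_test(system_translate, test_pairs, threshold=30.0):
    """Flag test cases where the system's output diverges from the
    pseudo-reference (a simplified stand-in for the evaluation step).

    system_translate: callable mapping a source sentence to a translation
    test_pairs: list of (source_sentence, pseudo_reference) tuples
    """
    failures = []
    for source, pseudo_ref in test_pairs:
        hypothesis = system_translate(source)
        # Sentence-level BLEU against the single pseudo-reference.
        score = sacrebleu.sentence_bleu(hypothesis, [pseudo_ref]).score
        if score < threshold:
            failures.append((source, hypothesis, pseudo_ref, score))
    return failures
```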
We present a term rewrite system that formally models the Message Authenticator Algorithm (MAA), which was one of the first cryptographic functions for computing a Message Authentication Code and was adopted, between 1987 and 2001, in international standards (ISO 8730 and ISO 8731-2) to ensure the authenticity and integrity of banking transactions. Our term rewrite system is large (13 sorts, 18 constructors, 644 non-constructors, and 684 rewrite rules), confluent, and terminating. Implementations in thirteen different languages have been automatically derived from this model and used to validate 200 official test vectors for the MAA.
2017-03-18 13:55:50.000000000
4,869
Design time uncertainty poses an important challenge when developing a self-adaptive system. As an example, defining how the system should adapt when facing a new environment state requires understanding the precise effect of an adaptation, which may not be known at design time. Online reinforcement learning, i.e., employing reinforcement learning (RL) at runtime, is an emerging approach to realizing self-adaptive systems in the presence of design time uncertainty. By using Online RL, the self-adaptive system can learn from actual operational data and leverage feedback only available at runtime. Recently, Deep RL is gaining interest. Deep RL represents learned knowledge as a neural network whereby it can generalize over unseen inputs, as well as handle continuous environment states and adaptation actions. A fundamental problem of Deep RL is that learned knowledge is not explicitly represented. For a human, it is practically impossible to relate the parametrization of the neural network to concrete RL decisions and thus Deep RL essentially appears as a black box. Yet, understanding the decisions made by Deep RL is key to (1) increasing trust, and (2) facilitating debugging. Such debugging is especially relevant for self-adaptive systems, because the reward function, which quantifies the feedback to the RL algorithm, must be explicitly defined by developers, thus introducing a potential for human error. To explain Deep RL for self-adaptive systems, we enhance and combine two existing explainable RL techniques from the machine learning literature. The combined technique, XRL-DINE, overcomes the respective limitations of the individual techniques. We present a proof-of-concept implementation of XRL-DINE, as well as qualitative and quantitative results of applying XRL-DINE to a self-adaptive system exemplar.
2022-10-11 02:39:29.000000000
12,498
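To make the role of the developer-defined reward function concrete, the sketch below shows a hypothetical decomposed reward for a self-adaptive web application. Splitting the reward into named components is a common prerequisite for reward-decomposition-style explanations; the state fields, action names, and weights are invented for illustration and are not part of XRL-DINE.

```python
def decomposed_reward(state, action):
    """Hypothetical reward for a self-adaptive web application that can
    add or remove servers. Returning named components (instead of only a
    single scalar) lets an explanation layer report which concern drove a
    given adaptation decision."""
    components = {
        # Penalize violations of the response-time goal.
        "performance": -1.0 if state["avg_response_ms"] > 500 else 0.5,
        # Penalize the running cost of provisioned servers.
        "cost": -0.1 * state["num_servers"],
        # Penalize the adaptation action itself (discourage oscillation).
        "stability": -0.2 if action != "no_op" else 0.0,
    }
    return sum(components.values()), components

# The RL agent learns from the scalar sum; an explanation component
# (e.g., per-component value estimates) works with the returned dict.
```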
Self-adaptation approaches usually rely on closed-loop controllers that exclude human intervention from the adaptation process. While such fully automated approaches have proven successful in many application domains, there are situations where human involvement in the adaptation process is beneficial or even necessary. For such "human-in-the-loop" adaptive systems, two major challenges, namely transparency and controllability, have to be addressed to include the human in the self-adaptation loop. Transparency means that relevant information about the adaptive system and its context is represented based on a digital twin, giving the human an immersive and realistic view. Concerning controllability, the decision-making and adaptation operations should be managed in a natural and interactive way. As existing human-in-the-loop adaptation approaches do not fully cover these aspects, we investigate alternative human-in-the-loop strategies by using a combination of digital twins and virtual reality (VR) interfaces. Based on the concept of the digital twin, we represent a self-adaptive system and its respective context in a virtual environment. With the help of a VR interface, we support an immersive and realistic human involvement in the self-adaptation loop by mirroring the physical entities of the real world to the VR interface. For integrating the human in the decision-making and adaptation process, we have implemented and analyzed two different human-in-the-loop strategies in VR: a procedural control where the human can control the decision-making process and adaptations through VR interactions (human-controlled) and a declarative control where the human specifies the goal state and the configuration is delegated to an AI planner (mixed-initiative). We illustrate and evaluate our approach based on an autonomic robot system that is accessible and controlled through a VR interface.
2021-03-19 04:16:39.000000000
14,727
UML state machines are widely used to specify the behaviour of dynamic systems. However, their semantics is described informally, thus preventing the application of model checking techniques that could guarantee the system safety. In a former work, we proposed a formalisation of non-concurrent UML state machines using coloured Petri nets, so as to allow for formal verification. In this paper, we report our experience in implementing this translation in an automated manner using the model-to-text transformation tool Acceleo. Whereas Acceleo provides interesting features that facilitated our translation process, it also suffers from limitations that are not easy to overcome.
2014-05-04 05:45:59.000000000
11,340
In order to maintain a system, it is beneficial to know its software architecture. In the common case that this architecture is unavailable, architecture recovery provides a way to recover an architectural view of the system. Many different methods and tools exist to provide such a view. While there have been taxonomies of different recovery methods and surveys of their results, along with measurements of how these results conform to experts' opinions on the systems, there has not been a survey that goes beyond an automatic comparison and instead seeks to answer questions about the viability of individual methods in given situations, the quality of their results, and whether these results can be used to indicate and measure the quality and quantity of architectural changes. For our case study, we look at the results of recoveries of versions of Android and Apache Hadoop obtained by running PKG, ACDC and ARC.
2019-01-17 22:46:47.000000000
11,413
In this thesis, a comprehensive verification framework is proposed to contend with some important issues in composability verification, and a verification process is suggested to verify the composability of different kinds of system models, such as reactive, real-time and probabilistic systems. Under the assumption that all these systems are concurrent in nature, with different composed components interacting with each other simultaneously, the requirements for extensive techniques for structural and behavioral analysis become increasingly challenging. The proposed verification framework provides methods, techniques and tool support for verifying composability at its different levels. These levels are defined as foundations of consistent model composability. Each level is discussed in detail and an approach is presented to verify composability at that level. In particular, we focus on the Dynamic-Semantic Composability level due to its significance in the overall composability correctness and also due to the level of difficulty it poses in the process. In order to verify composability at this level we investigate the application of three different approaches, namely (i) Petri Nets based Algebraic Analysis, (ii) Colored Petri Nets (CPN) based State-space Analysis, and (iii) Communicating Sequential Processes based Model Checking. All three approaches attack the problem of verifying dynamic-semantic composability in different ways; however, they all share the same aim, i.e., to confirm the correctness of a composed model with respect to its requirement specifications.
2023-01-04 12:35:29.000000000
15,909
In this article, we examine how security applies to Service Oriented Architecture (SOA). Before we discuss security for SOA, let's take a step back and examine what SOA is. SOA is an architectural approach which involves applications being exposed as "services". Originally, services in SOA were associated with a stack of technologies which included SOAP, WSDL, and UDDI. This article addresses the defects of traditional enterprise application integration by combining service-oriented architecture and web service technology. Application integration is then simplified to the development and integration of services, tackling the connectivity of heterogeneous enterprise applications, security, loose coupling between systems, and process refactoring and optimization.
2011-08-02 02:27:50.000000000
8,949
Context. Companies commonly invest effort to remove technical issues believed to impact software qualities, such as removing anti-patterns or coding style violations. Objective. Our aim is to analyze the diffuseness of Technical Debt (TD) items in software systems and to assess their impact on code changes and fault-proneness, considering also the type of TD items and their severity. Method. We conducted a case study among 33 Java projects from the Apache Software Foundation (ASF) repository. We analyzed 726 commits containing 27K faults and 12M changes. The projects violated 173 SonarQube rules generating more than 95K TD items in more than 200K classes. Results. Clean classes (classes not affected by TD items) are less change-prone than dirty ones, but the difference between the groups is small. Clean classes are slightly more change-prone than classes affected by TD items of type Code Smell or Security Vulnerability. As for fault-proneness, there is no difference between clean and dirty classes. Moreover, we found a lot of incongruities in the type and severity level assigned by SonarQube. Conclusions. Our results can be useful for practitioners to understand which TD items they should refactor and for researchers to bridge the remaining gaps. They can also support companies and tool vendors in identifying TD items as accurately as possible.
2019-08-28 14:12:57.000000000
4,707
In recent years, Jupyter notebooks have grown in popularity in several domains of software engineering, such as data science, machine learning, and computer science education. Their popularity has to do with their rich features for presenting and visualizing data; however, recent studies show that notebooks also share a lot of drawbacks: a high number of code clones, low reproducibility, etc. In this work, we carry out a comparison between Python code written in Jupyter Notebooks and in traditional Python scripts. We compare the code from two perspectives: structural and stylistic. In the first part of the analysis, we report the difference in the number of lines, the usage of functions, as well as various complexity metrics. In the second part, we show the difference in the number of stylistic issues and provide an extensive overview of the 15 most frequent stylistic issues in the studied mediums. Overall, we demonstrate that notebooks are characterized by lower code complexity; however, their code could be perceived as more entangled than in scripts. As for the style, notebooks tend to have 1.4 times more stylistic issues, but at the same time, some of them are caused by specific coding practices in notebooks and should be considered as false positives. With this research, we want to pave the way to studying specific problems of notebooks that should be addressed by the development of notebook-specific tools, and provide various insights that can be useful in this regard.
2022-03-30 01:36:18.000000000
1,804
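A minimal sketch of the structural side of such a comparison, assuming notebooks are read with nbformat and plain scripts with the standard library; the metrics here (lines of code, function counts, top-level statements) are simpler stand-ins for the complexity and style metrics used in the study.

```python
# pip install nbformat
import ast
import nbformat

def code_metrics(source: str) -> dict:
    """Count non-blank lines, function definitions, and top-level
    statements in a piece of Python source code."""
    tree = ast.parse(source)
    return {
        "loc": len([l for l in source.splitlines() if l.strip()]),
        "functions": sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                         for n in ast.walk(tree)),
        "top_level_stmts": len(tree.body),
    }

def notebook_metrics(path: str) -> dict:
    """Aggregate the same metrics over all code cells of a Jupyter notebook."""
    nb = nbformat.read(path, as_version=4)
    code = "\n".join(
        line for c in nb.cells if c.cell_type == "code"
        for line in c.source.splitlines()
        if not line.lstrip().startswith(("%", "!")))  # drop IPython magics
    return code_metrics(code)

def script_metrics(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return code_metrics(f.read())
```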
There is increasing interest in applying verification tools to programs that have bitvector operations (e.g., binaries). SMT solvers, which serve as a foundation for these tools, have thus increased support for bitvector reasoning through bit-blasting and linear arithmetic approximations. In this paper, we show that similar linear arithmetic approximation of bitvector operations can be done at the source level through transformations. Specifically, we introduce new paths that over-approximate bitvector operations with linear conditions/constraints, increasing branching but allowing us to better exploit the well-developed integer reasoning and interpolation of verification tools. We show that, for reachability of bitvector programs, increased branching incurs negligible overhead yet, when combined with integer interpolation optimizations, enables more programs to be verified. We further show this exploitation of integer interpolation in the common case also enables competitive termination verification of bitvector programs and leads to the first effective technique for LTL verification of bitvector programs. Finally, we provide an in-depth case study of decompiled ("lifted") binary programs, which emulate X86 execution through frequent use of bitvector operations. We present a new tool DarkSea, the first tool capable of verifying reachability, termination, and LTL of lifted binaries.
2021-05-10 14:22:09.000000000
7,474
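The abstract above describes over-approximating bitvector operations with linear conditions via extra branches. The toy sketch below shows the idea for y = x & 0xFF: one branch where the operation is expressible exactly with a linear constraint, and a fallback branch that keeps only a linear bound. It illustrates the transformation idea only and is not the paper's source-level encoding.

```python
import random

def approx_bitand_255(x: int) -> int:
    """Linear over-approximation of y = x & 0xFF via a case split:
    instead of the bit-precise result, return any value satisfying linear
    constraints that every concrete result also satisfies (the
    nondeterminism models the extra behaviours a verifier would explore)."""
    if 0 <= x <= 255:
        # In this range the operation is exactly expressible linearly:
        # x & 0xFF == x.
        return x
    # Otherwise keep only the linear bound 0 <= y <= 255.
    return random.randint(0, 255)

# Any property proved for all behaviours of approx_bitand_255 also holds
# for the exact bitwise operation, since its results are a subset of the
# over-approximation's possible results.
```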
Enterprise content management is an urgent issue of current scientific and practical activities in software design and implementation. However, the papers published so far give insufficient coverage of the theoretical background of the software in question. This paper presents an attempt at building a state-based model of content management. In accordance with the theoretical principles outlined, a content management information system (CMIS) has been implemented in a large international oil-and-gas group of companies.
2006-07-02 17:16:37.000000000
1,024
Convex polyhedral abstractions of logic programs have been found very useful in deriving numeric relationships between program arguments in order to prove program properties and in other areas such as termination and complexity analysis. We present a tool for constructing polyhedral analyses of (constraint) logic programs. The aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, "widening up-to", and enhanced automatic selection of widening points. The tool is accessible on the web, permits user programs to be uploaded and analysed, and is integrated with related program transformations such as size abstractions and query-answer transformation. We then report some experiments using the tool, showing how it can be conveniently used to analyse transition systems arising from models of embedded systems, and an emulator for a PIC microcontroller which is used for example in wearable computing systems. We discuss issues including scalability, tradeoffs of precision and computation time, and other program transformations that can enhance the results of analysis.
2007-12-17 15:11:36.000000000
7,947
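Polyhedral analysis with widening can be hard to picture; the sketch below shows the same widening idea on the much simpler interval domain, where bounds that keep growing are pushed to infinity so the fixpoint iteration terminates. It is a didactic stand-in, not the convex-polyhedra implementation of the tool described above.

```python
def widen(old: tuple, new: tuple) -> tuple:
    """Classic interval widening: a bound that grew since the previous
    iteration jumps to infinity, forcing termination of the fixpoint
    computation (the same principle underlies polyhedral widening)."""
    lo = old[0] if old[0] <= new[0] else float("-inf")
    hi = old[1] if old[1] >= new[1] else float("inf")
    return (lo, hi)

# Analysing `i = 0; while i < n: i += 1` with intervals:
iv = (0, 0)
for _ in range(3):                  # a few abstract iterations
    next_iv = (iv[0], iv[1] + 1)    # effect of `i += 1` on the interval
    iv = widen(iv, next_iv)         # (0, 0) -> (0, inf) after one widening
print(iv)                           # (0, inf): a sound post-fixpoint for i
```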
Machine Learning (ML) has become a fast-growing, trending approach in solution development in practice. Deep Learning (DL), which is a subset of ML, learns using deep neural networks to simulate the human brain. It trains machines to learn techniques and processes individually using computer algorithms, which is also considered to be a role of Artificial Intelligence (AI). In this paper, we study current technical issues related to software development and delivery in organizations that work on ML projects. Therefore, the importance of the Machine Learning Operations (MLOps) concept, which can deliver appropriate solutions for such concerns, is discussed. We investigate commercially available MLOps tool support in software development. The comparison between MLOps tools analyzes the performance of each system and its use cases. Moreover, we examine the features and usability of MLOps tools to identify the most appropriate tool support for given scenarios. Finally, we observe a shortage of fully functional MLOps platforms that automate processes while reducing human intervention.
2022-02-16 10:26:14.000000000
7,194
ReTest is a novel testing tool for Java applications with a graphical user interface (GUI), combining monkey testing and difference testing. Since this combination sidesteps the oracle problem, it enables the generation of GUI-based regression tests. ReTest makes use of evolutionary computing (EC), particularly a genetic algorithm (GA), to optimize these tests towards code coverage. While this is indeed a desirable goal in terms of software testing and potentially finds many bugs, it lacks one major ingredient: human behavior. Consequently, human testers often find the results less reasonable and difficult to interpret. This thesis proposes a new approach to improve the initial population of the GA with the aid of machine learning (ML), forming an ML-technique enhanced-EC (MLEC) algorithm. In order to do so, existing tests are exploited to extract information on how human testers use the given GUI. The obtained data is then utilized to train an artificial neural network (ANN), which ranks the available GUI actions respectively their underlying GUI components at runtime---reducing the gap between manually created and automatically generated regression tests. Although the approach is implemented on top of ReTest, it can be easily used to guide any form of monkey testing. The results show that with only little training data, the ANN is able to reach an accuracy of 82% and the resulting tests represent an improvement without reducing the overall code coverage and performance significantly.
2018-02-09 19:36:19.000000000
3,165
Legacy software documents are hard to understand and visualize. The tag cloud technique helps software developers to visualize the contents of software documents. A tag cloud is a well-known and simple visualization technique. This paper proposes a new method to visualize software documents, using a tag cloud. In this paper, tags are visualized in the cloud based on their frequency and arranged in alphabetical order. The most important tags are displayed with a larger font size. The originality of this method is that it visualizes the contents of JavaDoc as a tag cloud. To validate the JavaDocCloud method, it was applied to the NanoXML case study; the results of these experiments display the most common and uncommon tags used in the software documents.
2021-09-28 23:27:57.000000000
15,048
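A minimal sketch of the font-size computation such a tag cloud needs: term frequencies are log-scaled into a pixel range and tags are emitted in alphabetical order. The tokenization, size range, and the example input file name are assumptions, not details taken from JavaDocCloud.

```python
from collections import Counter
import math
import re

def tag_cloud(text: str, min_px: int = 10, max_px: int = 40) -> list:
    """Return (tag, font_size) pairs: frequency drives the font size,
    output is sorted alphabetically, mirroring the layout described above."""
    words = re.findall(r"[A-Za-z]{3,}", text.lower())
    freq = Counter(words)
    top = math.log(max(freq.values()))          # log of the highest frequency
    sizes = []
    for tag in sorted(freq):                    # alphabetical order
        scale = (math.log(freq[tag]) / top) if top > 0 else 0.0
        sizes.append((tag, round(min_px + scale * (max_px - min_px))))
    return sizes

# Example (hypothetical input file):
# print(tag_cloud(open("NanoXML-javadoc.txt").read())[:10])
```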
Static analysis tools, or linters, detect violation of source code conventions to maintain project readability. Those tools automatically fix specific violations while developers edit the source code. However, existing tools are designed for the general conventions of programming languages. These tools do not check the project/API-specific conventions. We propose a novel static analysis tool DevReplay that generates code change patterns by mining the code change history, and we recommend changes using the matched patterns. Using DevReplay, developers can automatically detect and fix project/API-specific problems in the code editor and code review. Also, we evaluate the accuracy of DevReplay using automatic program repair tool benchmarks and real software. We found that DevReplay resolves more bugs than state-of-the-art APR tools. Finally, we submitted patches to the most popular open-source projects that are implemented by different languages, and project reviewers accepted 80% (8 of 10) patches. DevReplay is available on [LINK].
2020-05-20 22:48:25.000000000
1,993
Web applications are structured as multi-tier stacks of components. Each component may be written in a different language and interoperate using a variety of protocols. Such interoperation increases developer effort, can introduce security vulnerabilities, may reduce performance and require additional resources. A range of approaches have been explored to minimise web stack interoperation. This paper explores a pragmatic approach to reducing web stack interoperation, namely eliminating a tier/component. That is, we explore the implications of eliminating the Apache web server in a JAPyL web stack: Jupyter Notebook, Apache, Python, Linux, and replacing it with PHP libraries. We conduct a systematic study to investigate the implications for web stack performance, resource consumption, security, and programming effort.
2022-07-14 01:07:17.000000000
4,020
The rising use of information and communication technology in smart grids likewise increases the risk of failures that endanger the security of power supply, e.g., due to errors in the communication configuration, faulty control algorithms, or cyber-attacks. Co-simulations can be used to investigate such effects, but require precise modeling of the energy, communication, and information domain within an integrated smart grid infrastructure model. Given the complexity and lack of detailed publicly available communication network models for smart grid scenarios, there is a need for an automated and systematic approach to creating such coupled models. In this paper, we present an approach to automatically generate smart grid infrastructure models based on an arbitrary electrical distribution grid model using a generic architectural template. We demonstrate the applicability and unique features of our approach alongside examples concerning network planning, co-simulation setup, and specification of domain-specific intrusion detection systems.
2020-08-29 21:59:11.000000000
10,498
Comprehending the behavior of an object-oriented system solely from its source code is troublesome, owing to its dynamism. To aid comprehension, visualizing program behavior through reverse-engineered sequence diagrams from execution traces is a promising approach. However, because of the massiveness of traces, recovered diagrams tend to become very large, causing scalability issues. To address the issues, we propose an object grouping technique that horizontally summarizes a reverse-engineered sequence diagram. Our technique constructs object groups based on Pree's meta patterns, in which each group corresponds to a concept in the domain of a subject system. Visualizing interactions only among important groups, we generate a summarized sequence diagram depicting a behavioral overview of the system. Our experiment showed that our technique outperformed the state-of-the-art trace summarization technique in terms of reducing the horizontal size of reverse-engineered sequence diagrams. Regarding the quality of object grouping, our technique achieved an F-score of 0.670 and a Recall of 0.793 on average under the condition of #lifelines (i.e., the horizontal size of a sequence diagram) < 30, whereas those of the state-of-the-art technique were 0.421 and 0.670, respectively. The runtime overhead imposed by our technique was 129.2% on average, which is relatively small compared with overheads reported in the literature.
2020-03-03 22:54:32.000000000
13,124
In this paper, we first present a state of the art of existing approaches to scientific workflows (including neuroscience workflows) in order to highlight business users' needs in terms of Web Service combination. Then we discuss intentional process modeling for scientific workflows, especially for searching for Web Services. Next, we present our approach, SATIS, which provides reasoning and traceability capabilities on Web Service business combination know-how, in order to bridge the gap between workflow providers and users.
2015-02-17 21:39:23.000000000
182
Data-driven defect prediction has become increasingly important in the software engineering process. Since it is not uncommon that data from a software project is insufficient for training a reliable defect prediction model, transfer learning that borrows data/knowledge from other projects to facilitate the model building at the current project, namely cross-project defect prediction (CPDP), is naturally plausible. Most CPDP techniques involve two major steps, i.e., transfer learning and classification, each of which has at least one parameter to be tuned to achieve their optimal performance. This practice fits well with the purpose of automated parameter optimization. However, there is a lack of thorough understanding about the impacts of automated parameter optimization on various CPDP techniques. In this paper, we present the first empirical study that looks into such impacts on 62 CPDP techniques, 13 of which are chosen from the existing CPDP literature while the other 49 have not been explored before. We build defect prediction models over 20 real-world software projects that are of different scales and characteristics. Our findings demonstrate that: (1) Automated parameter optimization substantially improves the defect prediction performance of 77% of the CPDP techniques with a manageable computational cost. Thus more efforts on this aspect are required in future CPDP studies. (2) Transfer learning is of ultimate importance in CPDP. Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) The research on CPDP is far from mature, where it is "not difficult" to find a better alternative by making a combination of existing transfer learning and classification techniques. This finding provides important insights about the future design of CPDP techniques.
2020-02-07 09:00:12.000000000
2,476
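To make the two tunable steps concrete, the sketch below jointly searches a parameter of a simple transfer step (a nearest-neighbour relevancy filter with tunable k) and a classifier parameter (logistic-regression C). For brevity it scores candidates directly on labelled target data and assumes the filtered source set keeps both classes; the study's 62 CPDP techniques and their optimizers are not reproduced here.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors

def nn_filter(src_X, src_y, tgt_X, k):
    """Relevancy filter (a classic CPDP transfer step): keep the source
    instances that are among the k nearest neighbours of target instances."""
    idx = NearestNeighbors(n_neighbors=k).fit(src_X).kneighbors(
        tgt_X, return_distance=False)
    keep = np.unique(idx.ravel())
    return src_X[keep], src_y[keep]

def tune_cpdp(src_X, src_y, tgt_X, tgt_y, ks=(5, 10, 20), Cs=(0.01, 0.1, 1, 10)):
    """Joint grid search over the transfer parameter k and the classifier
    parameter C; returns the best (k, C, AUC) found on the target project."""
    best = (None, None, -1.0)
    for k in ks:
        X, y = nn_filter(src_X, src_y, tgt_X, k)
        for C in Cs:
            clf = LogisticRegression(C=C, max_iter=1000).fit(X, y)
            auc = roc_auc_score(tgt_y, clf.predict_proba(tgt_X)[:, 1])
            if auc > best[2]:
                best = (k, C, auc)
    return best
```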
Mobile Crowdsourcing (MC) is an effective way of engaging large groups of smart devices to perform tasks remotely while exploiting their built-in features. It has drawn great attention in the areas of smart cities and urban computing communities to provide decentralized, fast, and flexible ubiquitous technological services. The vast majority of previous studies focused on non-cooperative MC schemes in Internet of Things (IoT) systems. Advanced collaboration strategies are expected to leverage the capability of MC services and enable the execution of more complicated crowdsourcing tasks. In this context, Collaborative Mobile Crowdsourcing (CMC) enables task requesters to hire groups of IoT devices' users that must communicate with each other and coordinate their operational activities in order to accomplish complex tasks. In this paper, we present and discuss the novel CMC paradigm in IoT. Then, we provide a detailed taxonomy to classify the different components forming CMC systems. Afterwards, we investigate the challenges in designing CMC tasks and discuss different team formation strategies involving the crowdsourcing platform and selected team leaders. We also analyze and compare the performances of certain proposed CMC recruitment algorithms. Finally, we shed the light on open research directions to leverage CMC service design.
2021-04-11 03:12:40.000000000
1,907
Service sharing is a prominent operating model to support business. Many large inter-organizational networks have implemented some form of value-added integrated services in order to reach efficiency and to reduce costs sustainably. Coupling service orientation with the enterprise architecture paradigm is very important for improving organizational performance through business process optimization. Indeed, enterprise architecture management is increasingly discussed because of the information system's role as part of achieving the strategic direction of value creation and contribution to economic growth. Also, system architecture promotes synergy and business efficiency for inter-organizational collaboration. For this purpose, this work proposes a review of service-oriented enterprise architecture. This review enumerates several integrative and collaborative frameworks for integrated service delivery.
2015-05-05 08:54:44.000000000
8,623
Creativity is a critical skill that professional software engineers leverage to tackle difficult problems. In higher education, multiple efforts have been made to spark creative skills of engineering students. However, creativity is a vague concept that is open to interpretation. Furthermore, studies have shown that there is a gap in perception and implementation of creativity between industry and academia. To better understand the role of creativity in software engineering (SE), we interviewed 33 professionals via four focus groups and 10 SE students. Our results reveal 45 underlying topics related to creativity. When comparing the perception of students versus professionals, we discovered fundamental differences, grouped into five themes: the creative environment, application of techniques, creative collaboration, nature vs nurture, and the perceived value of creativity. As our aim is to use these findings to install and further encourage creative problem solving in higher education, we have included a list of implications for educational practice.
2023-12-18 05:15:50.000000000
9,425
Large language models (LLMs) have significantly improved the ability to perform tasks in the field of code generation. However, there is still a gap between LLMs being capable coders and being top-tier software engineers. Based on the observation that top-level software engineers often ask clarifying questions to reduce ambiguity in both requirements and coding solutions, I argue that the same should be applied to LLMs for code generation tasks. By asking probing questions on various topics before generating the final code, the challenges of programming with LLMs, such as unclear intent specification, lack of computational thinking, and undesired code quality, may be alleviated. This, in turn, increases confidence in the generated code. In this work, I explore how to leverage better communication skills to achieve greater confidence in generated code. I propose a communication-centered process that uses an LLM-generated communicator to identify issues with high ambiguity or low confidence in problem descriptions and generated code. I then ask clarifying questions to obtain responses from users for refining the code.
2023-08-25 01:09:31.000000000
2,028
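A purely hypothetical sketch of the communication-centered loop described above: a communicator prompt asks clarifying questions until none remain, and only then is code generated from the refined description. The llm and ask_user callables are placeholders for a model call and a user interaction; no specific API is implied.

```python
def clarify_then_code(problem: str, llm, ask_user, max_rounds: int = 3) -> str:
    """Hypothetical communication-centered loop: before generating code,
    an LLM-based 'communicator' asks clarifying questions about ambiguous
    points, and the refined description is what finally becomes code.

    llm(prompt) -> str and ask_user(question) -> str are placeholders
    supplied by the caller (e.g., any chat model client and a CLI prompt)."""
    description = problem
    for _ in range(max_rounds):
        question = llm(
            "If anything in the following task is ambiguous or underspecified, "
            "ask ONE clarifying question; otherwise reply 'NONE'.\n" + description)
        if question.strip().upper() == "NONE":
            break
        answer = ask_user(question)
        description += f"\nClarification: Q: {question} A: {answer}"
    return llm("Write Python code for the following task:\n" + description)
```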
Lack of resources is a challenge for small and medium enterprises (SMEs) in implementing an IT-based system to facilitate more efficient business decisions and expanding the market. A community system based on service-oriented architecture (SOA) can help SMEs alleviate this problem. This paper explores and analyses the frameworks proposed by previous studies in the context of inter-enterprise SOA for SMEs. Several problems being the background of the system implementation are identified. Afterward, the offered solutions are presented, including the system architecture, technology adoption, specific elements, and collaboration model. The study also discusses the system architecture patterns of the reviewed studies as well as the collaboration organizational structures.
2020-04-17 03:33:38.000000000
14,265
To improve the agility, dynamics, composability, reusability, and development efficiency restricted by monolithic Federation Object Model (FOM), a modular FOM was proposed by High Level Architecture (HLA) Evolved product development group. This paper reviews the state-of-the-art of HLA Evolved modular FOM. In particular, related concepts, the overall impact on HLA standards, extension principles, and merging processes are discussed. Also permitted and restricted combinations, and merging rules are provided, and the influence on HLA interface specification is given. The comparison between modular FOM and Base Object Model (BOM) is performed to illustrate the importance of their combination. The applications of modular FOM are summarized. Finally, the significance to facilitate composable simulation both in academia and practice is presented and future directions are pointed out.
2009-09-08 19:44:49.000000000
10,650
We present a novel approach - CLAA - for API aspect detection in API reviews that utilizes transformer models trained with a supervised contrastive loss objective function. We evaluate CLAA using performance and impact analysis. For performance analysis, we utilized a benchmark dataset on developer discussions collected from Stack Overflow and compared the results to those obtained using state-of-the-art transformer models. Our experiments show that contrastive learning can significantly improve the performance of transformer models in detecting aspects such as Performance, Security, Usability, and Documentation. For impact analysis, we performed an empirical study and a developer study. On a randomly selected and manually labeled set of 200 online reviews, CLAA achieved 92% accuracy while the SOTA baseline achieved 81.5%. According to our developer study involving 10 participants, the use of 'Stack Overflow + CLAA' resulted in increased accuracy and confidence during API selection. Replication package: [LINK]
2023-07-29 16:34:56.000000000
5,293
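A generic formulation of the supervised contrastive objective mentioned above, sketched in PyTorch: review embeddings with the same aspect label are treated as positives for each anchor. Batch size, temperature, and the random embeddings in the example are illustrative; CLAA's exact training setup is not reproduced.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of review embeddings:
    samples sharing an aspect label are pulled together in embedding
    space, all other samples in the batch are pushed apart."""
    z = F.normalize(embeddings, dim=1)                  # (N, d), unit norm
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = labels.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                              # anchors with a positive
    mean_pos = (log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid]
                / pos_counts[valid])
    return -mean_pos.mean()

# Example with random tensors standing in for transformer sentence embeddings:
emb = torch.randn(16, 768)
aspects = torch.randint(0, 4, (16,))                    # e.g. 4 aspect classes
print(supervised_contrastive_loss(emb, aspects))
```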
Stack Overflow (SO) is the largest Q&A website for developers, providing a huge amount of copyable code snippets. Using these snippets raises various maintenance and legal issues. The SO license requires attribution, i.e., referencing the original question or answer, and requires derived work to adopt a compatible license. While there is a heated debate on SO's license model for code snippets and the required attribution, little is known about the extent to which snippets are copied from SO without proper attribution. In this paper, we present the research design and summarized results of an empirical study analyzing attributed and unattributed usages of SO code snippets in GitHub projects. On average, 3.22% of all analyzed repositories and 7.33% of the popular ones contained a reference to SO. Further, we found that developers rather refer to the whole thread on SO than to a specific answer. For Java, at least two thirds of the copied snippets were not attributed.
2017-06-29 14:29:24.000000000
9,727
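A simple sketch of how such attributed usages can be located: scan repository source files for Stack Overflow links and distinguish links to whole threads (questions) from links to specific answers. The URL patterns and file suffixes are heuristics, not the study's full methodology.

```python
import re
from pathlib import Path

SO_LINK = re.compile(
    r"(?:https?://)?(?:www\.)?stackoverflow\.com/(questions|q|a|answers)/\d+",
    re.IGNORECASE)

def find_so_references(repo_dir: str, suffixes=(".java", ".py", ".js")):
    """Scan source files of a repository for links to Stack Overflow and
    report whether each link points to a whole thread or a specific answer."""
    hits = []
    for path in Path(repo_dir).rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for m in SO_LINK.finditer(line):
                kind = "answer" if m.group(1) in ("a", "answers") else "question"
                hits.append((str(path), i, kind, m.group(0)))
    return hits

# Example: print(find_so_references("/path/to/github/project"))
```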
Enterprise IT is currently facing the challenge of coordinating the management of complex, multi-component applications across heterogeneous cloud platforms. Containers and container orchestrators provide a valuable solution to deploy multi-component applications over cloud platforms, by coupling the lifecycle of each application component to that of its hosting container. We hereby propose a solution for going beyond such a coupling, based on the OASIS standard TOSCA and on Docker. We indeed propose a novel approach for deploying multi-component applications on top of existing container orchestrators, which allows each component to be managed independently of the container used to run it. We also present prototype tools implementing our approach, and we show how we effectively exploited them to carry out a concrete case study.
2020-02-02 14:36:21.000000000
3,946
System-level testing of avionics software systems requires compliance with different international safety standards such as DO-178C. An important consideration of the avionics industry is automated test data generation according to the criteria suggested by safety standards. One of the recommended criteria by DO-178C is the modified condition/decision coverage (MC/DC) criterion. The current model-based test data generation approaches use constraints written in Object Constraint Language (OCL), and apply search techniques to generate test data. These approaches either do not support MC/DC criterion or suffer from performance issues while generating test data for large-scale avionics systems. In this paper, we propose an effective way to automate MC/DC test data generation during model-based testing. We develop a strategy that utilizes case-based reasoning (CBR) and range reduction heuristics designed to solve MC/DC-tailored OCL constraints. We performed an empirical study to compare our proposed strategy for MC/DC test data generation using CBR, range reduction, both CBR and range reduction, with an original search algorithm, and random search. We also empirically compared our strategy with existing constraint-solving approaches. The results show that both CBR and range reduction for MC/DC test data generation outperform the baseline approach. Moreover, the combination of both CBR and range reduction for MC/DC test data generation is an effective approach compared to existing constraint solvers.
2024-01-05 12:39:08.000000000
4,421
Following the onset of the COVID-19 pandemic and subsequent lockdowns, software engineers' daily life was disrupted and abruptly forced into remote working from home. This change deeply impacted typical working routines, affecting both well-being and productivity. Moreover, this pandemic will have long-lasting effects in the software industry, with several tech companies allowing their employees to work from home indefinitely if they wish to do so. Therefore, it is crucial to analyze and understand what a typical working day looks like when working from home and how individual activities affect software developers' well-being and productivity. We performed a two-wave longitudinal study involving almost 200 carefully selected software professionals from around the world, relating daily activities to perceived well-being, productivity, and other relevant psychological and social variables. Results suggest that the time software engineers spent doing specific activities from home was similar to when working in the office. However, we also found some significant mean differences. The amount of time developers spent on each activity was unrelated to their well-being, perceived productivity, and other variables. We conclude that working remotely is not per se a challenge for organizations or developers.
2021-01-10 10:02:59.000000000
7,808
The rapid development of IT&T technology has had a big impact on the traditional telecommunications market, transforming it from a monopolistic market to a highly competitive high-tech market where new services are required to be created frequently. This paper aims to describe a design approach that puts the order management process (as part of enterprise application integration) in the service of rapid service creation. In the text, we present a framework for collaborative order handling supporting convergent services. The design splits the order handling processes in convergent environments into three business process groups: order capture, order management, and order fulfillment. The paper establishes an abstract framework for order handling and provides design guidelines for transaction handling implementation based on the checkpoint and inverse command strategy. The proposed design approach is set in a convergent telecommunication environment. The same principles are applicable to solving collaboration problems related to order processing in any given heterogeneous environment.
2012-01-01 15:14:51.000000000
13,840
Good comments help developers understand software faster and provide better maintenance. However, comments are often missing, generally inaccurate, or out of date. Many of these problems can be avoided by automatic comment generation. This paper presents a method to generate informative comments directly from the source code using general-purpose techniques from natural language processing. We generate comments using an existing natural language model that couples words with their individual logical meaning and grammar rules, allowing comment generation to proceed by search from declarative descriptions of program text. We evaluate our algorithm on several classic algorithms implemented in Python.
2018-10-11 07:09:28.000000000
6,651
The development of cyber-physical systems (CPS) is a big challenge because of their complexity and complex requirements. Especially in Requirements Engineering (RE), there exist many redundant and conflicting requirements. Eliminating conflicting requirements and merging redundant/common requirements is a challenging task in the elicitation phase of the requirements engineering process for CPS. Collecting and optimizing requirements through an appropriate process reduces both development time and cost, as every functional requirement gets refined and optimized at the very first stage (the requirements elicitation phase) of the whole development process. Existing research has focused on requirements that have already been collected; however, none has examined how the requirements are collected and refined. This paper provides a requirements model for CPS that gives direction on how requirements are gathered, refined, and clustered in order to develop the CPS independently. The paper also shows a case study on the application of the proposed model to a transport system.
2017-05-04 19:12:09.000000000
3,648
When developing a safety-critical system it is essential to obtain an assessment of different design alternatives. In particular, an early safety assessment of the architectural design of a system is desirable. In spite of the plethora of available formal quantitative analysis methods it is still difficult for software and system architects to integrate these techniques into their every day work. This is mainly due to the lack of methods that can be directly applied to architecture level models, for instance given as UML diagrams. Also, it is necessary that the description methods used do not require a profound knowledge of formal methods. Our approach bridges this gap and improves the integration of quantitative safety analysis methods into the development process. All inputs of the analysis are specified at the level of a UML model. This model is then automatically translated into the analysis model, and the results of the analysis are consequently represented on the level of the UML model. Thus the analysis model and the formal methods used during the analysis are hidden from the user. We illustrate the usefulness of our approach using an industrial strength case study.
2011-07-02 01:35:07.000000000
11,077
A compiler bug arises if the behaviour of a compiled concurrent program, as allowed by its architecture memory model, is not a behaviour permitted by the source program under its source model. One might reasonably think that most compiler bugs have been found in the decade since the introduction of the C/C++ memory model. We observe that processor implementations are increasingly exploiting the behaviour of relaxed architecture models. As such, compiled programs may exhibit bugs not seen on older hardware. To account for this we require model-based compiler testing. While this observation is not surprising, its implications are broad. Compilers and their testing tools will need to be updated to follow hardware relaxations, concurrent test generators will need to be improved, and assumptions of prior work will need revisiting. We explore these ideas using a compiler toolchain bug we reported in LLVM.
2024-01-16 14:19:04.000000000
14,599
Hugging Face (HF) has established itself as a crucial platform for the development and sharing of machine learning (ML) models. This repository mining study, which delves into more than 380,000 models using data gathered via the HF Hub API, aims to explore the community engagement, evolution, and maintenance around models hosted on HF, aspects that have yet to be comprehensively explored in the literature. We first examine the overall growth and popularity of HF, uncovering trends in ML domains, framework usage, authors grouping and the evolution of tags and datasets used. Through text analysis of model card descriptions, we also seek to identify prevalent themes and insights within the developer community. Our investigation further extends to the maintenance aspects of models, where we evaluate the maintenance status of ML models, classify commit messages into various categories (corrective, perfective, and adaptive), analyze the evolution across development stages of commits metrics and introduce a new classification system that estimates the maintenance status of models based on multiple attributes. This study aims to provide valuable insights about ML model maintenance and evolution that could inform future model development strategies on platforms like HF.
2023-11-21 10:21:47.000000000
1,997
Verification activities are necessary to ensure that the requirements are specified in a correct way. However, until now requirements verification research has focused on traditional up-front requirements. Agile or just-in-time requirements are by definition incomplete, not specific and might be ambiguous when initially specified, indicating a different notion of 'correctness'. We analyze how verification of agile requirements quality should be performed, based on literature of traditional and agile requirements. This leads to an agile quality framework, instantiated for the specific requirement types of feature requests in open source projects and user stories in agile projects. We have performed an initial qualitative validation of our framework for feature requests with eight practitioners from the Dutch agile community, receiving overall positive feedback.
2014-06-14 13:33:16.000000000
15,861
The acceptance of autonomous vehicles is dependent on the rigorous assessment of their safety. Furthermore, the commercial viability of AV programs depends on the ability to estimate the time and resources required to achieve desired safety levels. Naive approaches to estimating the reliability and safety levels of autonomous vehicles under development will require infeasible amounts of testing of a static vehicle configuration. To permit both the estimation of current safety and predictions about the reliability of future systems, I propose the use of a standard tool for modelling the reliability of evolving software systems, software reliability growth models (SRGMs). Publicly available data from Californian public-road testing of two autonomous vehicle systems is modelled using two of the best-known SRGMs. The ability of the models to accurately estimate current reliability, as well as for current testing data to predict reliability in the future after additional testing, is evaluated. One of the models, the Musa-Okumoto model, appears to be a good estimator and a reasonable predictor.
2018-12-19 10:48:33.000000000
85
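As an illustration of the modelling step, the sketch below defines the Musa-Okumoto mean value function and fits it to hypothetical cumulative disengagement counts with a crude grid search; the reciprocal of the fitted failure intensity then estimates miles per failure. The data points are invented, and a real analysis would use a proper estimator on the Californian reports.

```python
import numpy as np

def musa_okumoto(t, lam0, theta):
    """Mean cumulative failures of the Musa-Okumoto logarithmic Poisson
    model: mu(t) = ln(lam0 * theta * t + 1) / theta."""
    return np.log(lam0 * theta * t + 1.0) / theta

# Hypothetical cumulative (miles driven, disengagements) observations.
miles = np.array([1e3, 5e3, 1e4, 5e4, 1e5, 2e5])
failures = np.array([4, 12, 18, 35, 44, 50], dtype=float)

# Crude grid-search fit by least squares (a library optimiser would do better).
best = None
for lam0 in np.geomspace(1e-5, 1e-1, 60):
    for theta in np.geomspace(1e-3, 1e0, 60):
        sse = np.sum((musa_okumoto(miles, lam0, theta) - failures) ** 2)
        if best is None or sse < best[0]:
            best = (sse, lam0, theta)

_, lam0, theta = best
# Current failure intensity lambda(t) = lam0 / (lam0 * theta * t + 1); its
# reciprocal is the estimated miles per disengagement at the latest mileage.
print(round(1.0 / (lam0 / (lam0 * theta * miles[-1] + 1.0))))
```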
The responsibility of a method/function is to perform some desired computations and disseminate the results to its caller through various deliverables, including object fields and variables in output instructions. Based on this definition of responsibility, this paper offers a new algorithm to refactor long methods to those with a single responsibility. We propose a backward slicing algorithm to decompose a long method into slightly overlapping slices. The slices are computed for each output instruction, representing the outcome of a responsibility delegated to the method. The slices will be non-overlapping if the slicing criteria address the same output variable. The slices are further extracted as independent methods, invoked by the original method if certain behavioral preservations are made. The proposed method has been evaluated on the GEMS extract method refactoring benchmark and three real-world projects. On average, our experiments demonstrate at least a 29.6% improvement in precision and a 12.1% improvement in the recall of uncovering refactoring opportunities compared to the state-of-the-art approaches. Furthermore, our tool improves method-level cohesion metrics by an average of 20% after refactoring. Experimental results confirm the applicability of the proposed approach in extracting methods with a single responsibility.
2023-05-04 17:43:19.000000000
2,676
Any organization that will develop software is faced with the difficult choice of the right software development method. The software development methods used play a significant role in the overall software development process. Software development methods are needed so that the software development process can be systematic, ensuring that it is not only completed within the right time frame but also delivers good quality. There are various methods of software development in the System Development Life Cycle (SDLC). Each SDLC method provides general guidelines for software development and has different characteristics. Each software development method has its drawbacks and advantages, so the selection of a software development method should be compatible with the capacity of the software being developed. This paper compares three different software development methods: the V-Shaped Model, the Parallel Development Model, and the Iterative Model, with the aim of helping software developers understand how to choose the right method.
2017-10-16 08:34:24.000000000
2,325
In dynamic systems that adapt to users' needs and changing environments, dependability needs cannot be avoided. This paper proposes an orthogonal fault tolerance model as a means to manage and reason about multiple fault tolerance mechanisms that co-exist in dynamically adaptive systems. One of the key challenges associated with dynamically evolving fault tolerance needs is the feature interaction problem arising from the integration of fault tolerance features. The proposed approach provides a separation of fault tolerance concerns to study the effects of integrated fault tolerance on the system. This approach uses state machine and operational semantics to reason about these interactions and inconsistencies. The proposed approach is supported by the tool NuSMV to simulate and verify the state machines against logic statements.
2014-04-27 18:31:37.000000000
116
Maintainability is a key quality attribute of successful software systems. However, its management in practice is still problematic. Currently, there is no comprehensive basis for assessing and improving the maintainability of software systems. Quality models have been proposed to solve this problem. Nevertheless, existing approaches do not explicitly take into account the maintenance activities, that largely determine the software maintenance effort. This paper proposes a 2-dimensional model of maintainability that explicitly associates system properties with the activities carried out during maintenance. The separation of activities and properties facilitates the identification of sound quality criteria and allows to reason about their interdependencies. This transforms the quality model into a structured and comprehensive quality knowledge base that is usable in industrial project environments. For example, review guidelines can be generated from it. The model is based on an explicit quality metamodel that supports its systematic construction and fosters preciseness as well as completeness. An industrial case study demonstrates the applicability of the model for the evaluation of the maintainability of Matlab Simulink models that are frequently used in model-based development of embedded systems.
2017-07-21 02:08:09.000000000
14,813
Software crowdsourcing platforms employ extrinsic rewards such as rating or ranking systems to motivate workers. Such rating systems are noisy and provide limited knowledge about workers' preferences and performance. To develop a better understanding of worker reliability and trustworthiness in software crowdsourcing, this paper reports an empirical study conducted on more than one year's real-world data from TopCoder, one of the leading software crowdsourcing platforms. To do so, first, we create a bipartite network of active workers based on common task registrations. Then, we use the Clauset-Newman-Moore graph clustering algorithm to identify worker clusters in the network. Finally, we conduct an empirical evaluation to measure and analyze workers' behavior per identified community in the platform by workers' rating. More specifically, workers' behavior is analyzed based on their performances in terms of reliability, trustworthiness, and success; their preferences in terms of efficiency and elasticity; and strategies in terms of comfort, confidence, and deceitfulness. The main result of this study identified four communities of active workers: mixed-ranked, high-ranked, mid-ranked, and low-ranked. This study shows that the low-ranked community contains the most reliable workers with an average reliability of 25%, while the mixed-ranked community contains the most trustworthy workers with an average trustworthiness of 16%. Such empirical evidence is beneficial to help explore resourcing options while understanding the relations among unknown resources to improve task success.
2021-07-05 03:09:19.000000000
15,344
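A sketch of the network-construction and clustering steps described above, assuming networkx is available: task registrations form a worker-task bipartite graph, its weighted projection links workers with common registrations, and Clauset-Newman-Moore (greedy modularity) clustering returns the worker communities. The input format is an assumption; the TopCoder data handling is not shown.

```python
# pip install networkx
import networkx as nx
from networkx.algorithms import bipartite, community

def worker_communities(registrations):
    """registrations: iterable of (worker_id, task_id) pairs.
    Build the worker-task bipartite graph, project it onto workers weighted
    by common registrations, and cluster it with Clauset-Newman-Moore."""
    B = nx.Graph()
    workers = {w for w, _ in registrations}
    tasks = {t for _, t in registrations}
    B.add_nodes_from(workers, bipartite=0)
    B.add_nodes_from(tasks, bipartite=1)
    B.add_edges_from(registrations)

    W = bipartite.weighted_projected_graph(B, workers)   # worker-worker network
    return list(community.greedy_modularity_communities(W, weight="weight"))

# Example:
# groups = worker_communities([("w1", "t1"), ("w2", "t1"), ("w2", "t2"), ("w3", "t2")])
```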
Synthesis is the automatic construction of a system from its specification. In classical synthesis algorithms it is always assumed that the system is "constructed from scratch" rather than composed from reusable components. This, of course, rarely happens in real life. In real life, almost every non-trivial commercial software system relies heavily on using libraries of reusable components. Furthermore, other contexts, such as web-service orchestration, can be modeled as synthesis of a system from a library of components. In 2009 we introduced LTL synthesis from libraries of reusable components. Here, we extend the work and study synthesis from component libraries with "call and return" control flow structure. Such control-flow structure is very common in software systems. We define the problem of Nested-Words Temporal Logic (NWTL) synthesis from recursive component libraries, where NWTL is a specification formalism, richer than LTL, that is suitable for "call and return" computations. We solve the problem, providing a synthesis algorithm, and show the problem is 2EXPTIME-complete, as standard synthesis.
2011-05-30 22:34:51.000000000
4,805
MISRA C is the most authoritative language subset for the C programming language that is a de facto standard in several industry sectors where safety and security are of paramount importance. While MISRA C is currently encoded in 175 guidelines (coding rules and directives), it does not coincide with them: proper adoption of MISRA C requires embracing its preventive approach (as opposed to the "bug finding" approach) and a documented development process where justifiable non-compliances are authorized and recorded as deviations. MISRA C guidelines are classified along several axes in the official MISRA documents. In this paper, we add to these an orthogonal classification that associates guidelines with their main rationale. The advantages of this new classification are illustrated for different kinds of projects, including those not (yet) having MISRA compliance among their objectives.
2021-12-23 07:49:05.000000000
219
Simplifying machine learning (ML) application development, including distributed computation, programming interface, resource management, model selection, etc., has attracted intensive interest recently. These research efforts have significantly improved the efficiency and the degree of automation of developing ML models. In this paper, we take a first step in an orthogonal direction towards automated quality management for human-in-the-loop ML application development. We build ease.ml/meter, a system that can automatically detect and measure the degree of overfitting during the whole lifecycle of ML application development. ease.ml/meter returns overfitting signals with strong probabilistic guarantees, based on which developers can take appropriate actions. In particular, ease.ml/meter provides principled guidelines to simple yet nontrivial questions regarding desired validation and test data sizes, which are among the commonest questions raised by developers. The fact that ML application development is typically a continuous procedure further worsens the situation: the validation and test data sets can lose their statistical power quickly due to multiple accesses, especially in the presence of adaptive analysis. ease.ml/meter addresses these challenges by leveraging a collection of novel techniques and optimizations, resulting in practically tractable data sizes without compromising the probabilistic guarantees. We present the design and implementation details of ease.ml/meter, as well as detailed theoretical analysis and empirical evaluation of its effectiveness.
2019-05-30 11:00:25.000000000
9,188
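The probabilistic guarantees mentioned above rest on how precisely a held-out set of a given size can estimate accuracy. The sketch below uses the textbook Hoeffding bound to turn a test-set size into an accuracy resolution and to flag train/test gaps larger than that noise; it deliberately ignores the adaptive-reuse problem that ease.ml/meter actually addresses.

```python
import math

def test_set_resolution(n: int, delta: float = 0.05) -> float:
    """Half-width of a Hoeffding confidence interval for an accuracy
    estimated on n held-out examples: with probability >= 1 - delta the
    true accuracy lies within +/- this value of the observed one."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def overfitting_signal(train_acc, test_acc, n_test, delta=0.05):
    """Flag overfitting when the train/test gap exceeds what can be
    explained by test-set estimation noise alone."""
    eps = test_set_resolution(n_test, delta)
    return (train_acc - test_acc) > eps, eps

# Example: a 2,000-example test set resolves accuracy to about +/- 0.03,
# so a 0.07 train/test gap is flagged as a likely overfitting signal.
print(overfitting_signal(0.97, 0.90, n_test=2000))
```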
We propose inheritance and refinement relations for a CSP-based component model (BRIC), which supports a constructive design based on composition rules that preserve classical concurrency properties such as deadlock freedom. The proposed relations allow extension of functionality, whilst preserving behavioural properties. A notion of extensibility is defined on top of a behavioural relation called convergence, which distinguishes inputs from outputs and the context where they are communicated, allowing extensions to reuse existing events with different purposes. We mechanise the strategy for extensibility verification using the FDR4 tool, and illustrate our results with an autonomous healthcare robot case study.
2020-05-20 11:03:46.000000000
149
Large Language Models are successfully adopted in software engineering, especially in code generation. Updating these models with new knowledge is very expensive, and is often required to fully realize their value. In this paper, we propose a novel and effective model editing approach, MENT, to patch LLMs in coding tasks. Based on the mechanism of generative LLMs, MENT enables model editing in next-token predictions, and further supports common coding tasks. MENT is effective, efficient, and reliable. It can correct a neural model by patching 1 or 2 neurons. As pioneering work on neuron-level model editing of generative models, we formalize the editing process and introduce the involved concepts. Besides, we also introduce new measures to evaluate its generalization ability, and build a benchmark for further study. Our approach is evaluated on three coding tasks, including API-seq recommendation, line-level code generation, and pseudocode-to-code translation. It outperforms the state-of-the-art by a significant margin on both effectiveness and efficiency measures. In addition, we demonstrate the usage of MENT for LLM reasoning in software engineering. By editing the LLM knowledge with MENT, the directly or indirectly dependent behaviors in the chain-of-thought change accordingly and automatically.
2023-12-07 22:19:50.000000000
7,191
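To make the idea of neuron-level patching concrete, here is a toy PyTorch sketch that nudges the unembedding weights of a single feed-forward neuron away from a wrong next token and towards a desired one. The neuron-selection rule, the additive update, and all names are hypothetical simplifications, not MENT's actual editing procedure.

```python
import torch

def patch_neuron_for_next_token(unembed: torch.nn.Linear,
                                ffn_activations: torch.Tensor,
                                wrong_token_id: int,
                                desired_token_id: int,
                                strength: float = 1.0) -> int:
    """Toy neuron-level patch. `ffn_activations` is the 1-D hidden vector at the
    prediction position; pick the most active neuron and shift its contribution
    in the unembedding matrix (shape: vocab x hidden) away from the wrong token
    and towards the desired one. Illustrative only; not MENT's update rule."""
    neuron = int(ffn_activations.abs().argmax())
    with torch.no_grad():
        unembed.weight[wrong_token_id, neuron] -= strength
        unembed.weight[desired_token_id, neuron] += strength
    return neuron

# Tiny usage example with made-up dimensions (hidden=16, vocab=100).
unembed = torch.nn.Linear(16, 100, bias=False)
acts = torch.randn(16)
patched = patch_neuron_for_next_token(unembed, acts, wrong_token_id=3, desired_token_id=7)
print("patched neuron:", patched)
```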
Requirements engineering (RE) plays a crucial role in developing software systems by bridging the gap between stakeholders' needs and system specifications. However, effective communication and elicitation of stakeholder requirements can be challenging, as traditional RE methods often overlook emotional cues. This paper introduces a multi-modal emotion recognition platform (MEmoRE) to enhance the requirements engineering process by capturing and analyzing the emotional cues of stakeholders in real-time. MEmoRE leverages state-of-the-art emotion recognition techniques, integrating facial expression, vocal intonation, and textual sentiment analysis to comprehensively understand stakeholder emotions. This multi-modal approach ensures the accurate and timely detection of emotional cues, enabling requirements engineers to tailor their elicitation strategies and improve overall communication with stakeholders. We further intend to employ our platform for later RE stages, such as requirements reviews and usability testing. By integrating multi-modal emotion recognition into requirements engineering, we aim to pave the way for more empathetic, effective, and successful software development processes. We performed a preliminary evaluation of our platform. This paper reports on the platform design, preliminary evaluation, and future development plan as an ongoing project.
2023-06-01 14:52:59.000000000
9,016
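A minimal sketch of one common way to combine the three modalities is late fusion, i.e. a weighted average of per-modality emotion distributions. The label set, the weights, and the example probabilities below are assumptions for illustration; MEmoRE's actual fusion strategy may differ.

```python
import numpy as np

EMOTIONS = ["anger", "confusion", "frustration", "joy", "neutral"]  # assumed label set

def late_fusion(face_probs: np.ndarray,
                voice_probs: np.ndarray,
                text_probs: np.ndarray,
                weights=(0.4, 0.3, 0.3)) -> str:
    """Weighted average of per-modality emotion distributions (late fusion).
    Each input is a probability vector over EMOTIONS; weights are illustrative."""
    fused = (weights[0] * face_probs +
             weights[1] * voice_probs +
             weights[2] * text_probs)
    return EMOTIONS[int(np.argmax(fused))]

# Example: the textual channel looks neutral, but face and voice suggest frustration.
face = np.array([0.05, 0.15, 0.60, 0.05, 0.15])
voice = np.array([0.10, 0.10, 0.50, 0.05, 0.25])
text = np.array([0.05, 0.10, 0.10, 0.05, 0.70])
print(late_fusion(face, voice, text))  # -> "frustration"
```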
This paper introduces prompted software engineering (PSE), which integrates prompt engineering to build effective prompts for language-based AI models, to enhance the software development process. PSE enables the use of AI models in software development to produce high-quality software with fewer resources, automating tedious tasks and allowing developers to focus on more innovative aspects. However, effective prompts are necessary to guide software development in generating accurate, relevant, and useful responses, while mitigating risks of misleading outputs. This paper describes how productive prompts should be built throughout the software development cycle.
2023-11-03 03:05:19.000000000
9,824
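As a sketch of what a productive prompt might look like in practice, the helper below assembles a structured prompt from the development stage, the task, relevant project context, and explicit constraints. The template wording and field names are assumptions of this illustration, not a format prescribed by the paper.

```python
def build_prompt(stage: str, task: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt for a development-cycle stage (e.g. 'design',
    'implementation', 'testing'). The template is illustrative only."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are assisting with the {stage} stage of a software project.\n"
        f"Task: {task}\n"
        f"Relevant context:\n{context}\n"
        f"Constraints:\n{rules}\n"
        f"Respond with only the requested artifact and note any assumptions."
    )

prompt = build_prompt(
    stage="testing",
    task="Write unit tests for a function parse_config(path) that loads a YAML file.",
    context="Project uses pytest; invalid files must raise ConfigError.",
    constraints=["cover the missing-file case", "no network access", "keep tests deterministic"],
)
print(prompt)
```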
Code review is a mature practice for software quality assurance in which reviewers check the code committed by developers and verify its quality. During code review discussions, reviewers and developers might use code snippets to provide necessary information (e.g., suggestions or explanations). However, little is known about the intentions and impacts of code snippets in code reviews. To this end, we conducted a preliminary study to investigate the nature of code snippets and their purposes in code reviews. We manually collected and checked 10,790 review comments from the Nova and Neutron projects of the OpenStack community, and finally obtained 626 review comments that contain code snippets for further analysis. The results show that: (1) code snippets are not prevalently used in code reviews, and most of the code snippets are provided by reviewers; (2) we identified two high-level purposes of code snippets provided by reviewers (i.e., Suggestion and Citation) with six detailed purposes, among which Improving Code Implementation is the most common; and (3) of the code snippets provided as suggestions, around 68.1% were accepted by developers. The results highlight promising research directions on using code snippets in code reviews.
2022-03-29 05:59:16.000000000
6,654
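For studies like this one, a first filtering step is identifying which review comments contain code at all. The heuristic below (markdown fences, inline backticks, or several indented lines) is only an illustrative sketch; the authors classified the OpenStack comments manually, and Gerrit-style review comments may mark code differently.

```python
import re

# Matches fenced blocks or inline backtick spans (illustrative heuristic only).
FENCE = re.compile(r"```.+?```|`[^`\n]+`", re.DOTALL)

def contains_code_snippet(comment: str) -> bool:
    """Heuristic filter: backtick markup, or at least two indented lines that
    look like code. The real study identified snippet comments manually."""
    if FENCE.search(comment):
        return True
    indented = [line for line in comment.splitlines() if line.startswith(("    ", "\t"))]
    return len(indented) >= 2

comments = [
    "Please rename this variable.",
    "Consider:\n```python\nif not items:\n    return []\n```",
]
print([contains_code_snippet(c) for c in comments])  # [False, True]
```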
During the life span of large software projects, developers often apply the same code changes to different code locations with slight variations. Since applying these changes to all locations is time-consuming and error-prone, tools exist that learn change patterns from input examples, search for possible pattern applications, and generate corresponding recommendations. In many cases, the generated recommendations are syntactically or semantically wrong due to code movements in the input examples. Thus, they are of low accuracy and developers cannot directly copy them into their projects without adjustments. We present the Accurate REcommendation System (ARES), which achieves a higher accuracy than other tools because its algorithms account for code movements when creating patterns and recommendations. On average, the recommendations by ARES have an accuracy of 96% with respect to code changes that developers have manually performed in commits of source code archives. At the same time, ARES achieves precision and recall values that are on par with other tools.
2017-08-08 03:07:41.000000000
7,563
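To convey the flavour of example-based change recommendation, the toy below learns line-level replace operations from a single before/after example with difflib and re-applies them wherever the old fragment occurs verbatim. Unlike ARES, it works on raw text rather than ASTs and does not handle renamed identifiers or code movements; all names here are illustrative.

```python
import difflib

def learn_pattern(before: str, after: str) -> list[tuple[str, str]]:
    """Record (old, new) fragments from one example change, at line granularity."""
    ops = []
    sm = difflib.SequenceMatcher(None, before.splitlines(), after.splitlines())
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "replace":
            ops.append(("\n".join(before.splitlines()[i1:i2]),
                        "\n".join(after.splitlines()[j1:j2])))
    return ops

def recommend(pattern: list[tuple[str, str]], target: str) -> str:
    """Apply the learned replacements wherever the old fragment occurs verbatim.
    Unlike ARES, a verbatim match is required, so moved or renamed code is missed."""
    for old, new in pattern:
        target = target.replace(old, new)
    return target

before = "if (list.size() == 0) {\n    return;\n}"
after = "if (list.isEmpty()) {\n    return;\n}"
pattern = learn_pattern(before, after)
print(recommend(pattern, "if (list.size() == 0) {\n    process(list);\n}"))
# -> if (list.isEmpty()) {
#        process(list);
#    }
```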
Responsible artificial intelligence guidelines ask engineers to consider how their systems might cause harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's "located accountability" to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low-status staff or believed to be the work of the next or previous person in the imagined "supply chain." We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.
2022-09-19 12:17:19.000000000
14,940