input: string (lengths 29 to 3.27k)
created_at: string (length 29)
__index_level_0__: int64 (0 to 16k)
Traceability between published scientific breakthroughs and their implementation is essential, especially in the case of open-source scientific software which implements bleeding-edge science in its code. However, identifying the link between GitHub repositories and academic papers can prove difficult, and the current practice of establishing and maintaining such links remains unknown. This paper investigates the role of academic paper references contained in these repositories. We conduct a large-scale study of 20 thousand GitHub repositories that make references to academic papers. We use a mixed-methods approach to identify public access, traceability and evolutionary aspects of the links. Although referencing a paper is not typical, we find that the vast majority of referenced academic papers are public access. These repositories tend to be affiliated with academic communities. More than half of the papers do not link back to any repository. We find that academic papers from top-tier SE venues are not likely to reference a repository, but when they do, they usually link to a GitHub software repository. In a network of arXiv papers and referenced repositories, we find that the most referenced papers are (i) highly cited in academia and (ii) referenced by repositories written in different programming languages.
2020-03-31 10:47:10.000000000
8,867
An essential element of any verification technique is that of identifying, and communicating to the user, system behaviour which leads to a deviation from the expected behaviour. Such behaviours are typically made available as long traces of system actions, which would benefit from a natural language explanation of the trace, especially in the context of business-logic-level specifications. In this paper we present a natural language generation model which can be used to explain such traces. A key idea is that the explanation language is a CNL that is, formally speaking, a regular language, and is thus susceptible to transformations that can be expressed with finite-state machinery. At the same time it admits various forms of abstraction and simplification which contribute to the naturalness of the explanations that are communicated to the user.
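The abstract above describes generating natural-language explanations of traces via finite-state transformations over a controlled natural language. The sketch below is only a toy illustration of that idea, with hypothetical action names ("login", "retry", "transfer") and a single simplification rule (collapsing repeated retries into one sentence); it is not the paper's NLG model.

```python
# Toy trace-to-text explainer: each action maps to a sentence template, and
# repeated "retry" actions are abstracted into a single summarizing sentence.

def explain_trace(trace):
    sentences, retries = [], 0
    for action, arg in trace:
        if action == "retry":
            retries += 1                     # accumulate instead of verbalizing each retry
            continue
        if retries:
            sentences.append(f"The user retried {retries} times.")
            retries = 0
        if action == "login":
            sentences.append(f"User {arg} logged in.")
        elif action == "transfer":
            sentences.append(f"An amount of {arg} was transferred.")
        else:
            sentences.append(f"The system performed '{action}'.")
    if retries:
        sentences.append(f"The user retried {retries} times.")
    return " ".join(sentences)

print(explain_trace([("login", "alice"), ("retry", None), ("retry", None),
                     ("transfer", "100 EUR")]))
```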
2014-06-09 10:12:18.000000000
15,472
In this letter, we propose a novel three-dimensional conceptual model for an emerging service-oriented simulation paradigm. The model can be used as a guideline or an analytic means to find the potential and possible future directions of the current simulation frameworks. In particular, the model inspects the crossover between the disciplines of modeling and simulation, service-orientation, and software/systems engineering. Finally, two specific simulation frameworks are studied as examples.
2009-09-13 22:36:08.000000000
12,841
The idea of the Neurath Basic Model View Controller (NBMVC) appeared during the discussion of the design of domain-specific modeling tools based on the Neurath Modeling Language [Yer06]. The NBMVC is the core of the modeling process within the modeling environment. It reduces the complexity of the design process by providing domain-specific interfaces between the developer and the model. These interfaces help to organize and manipulate the model. The organization includes, for example, a layer of visual components that can be dropped in and filtered out. The control routines include, for example, model transformations.
2009-04-23 15:05:01.000000000
640
Dubbed a safer C, Rust is a modern programming language that combines memory safety and low-level control. This interesting combination has made Rust very popular among developers and there is a growing trend of migrating legacy codebases (very often in C) to Rust. In this paper, we present a C to Rust translation approach centred around static ownership analysis. We design a suite of analyses that infer ownership models of C pointers and automatically translate the pointers into safe Rust equivalents. The resulting tool, Crown, scales to real-world codebases (half a million lines of code in less than 10 seconds) and achieves a high conversion rate.
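As a rough illustration of ownership-guided translation, the toy sketch below maps hand-written usage "facts" about a C pointer to a plausible safe Rust type. The fact names and the mapping are hypothetical simplifications; Crown's actual static ownership analyses are far more involved.

```python
# Toy, heavily simplified sketch: given "facts" about how a C pointer is used,
# suggest a safe Rust type. Purely illustrative, not Crown's analysis.

def suggest_rust_type(facts, pointee="i32"):
    if facts.get("allocates") and facts.get("frees"):
        return f"Box<{pointee}>"           # owning pointer -> owned heap value
    if facts.get("writes_through"):
        return f"&mut {pointee}"           # non-owning, mutating -> mutable borrow
    if facts.get("may_be_null"):
        return f"Option<&{pointee}>"       # nullable, read-only -> optional borrow
    return f"&{pointee}"                   # default: shared borrow

print(suggest_rust_type({"allocates": True, "frees": True}))   # Box<i32>
print(suggest_rust_type({"writes_through": True}))             # &mut i32
```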
2023-03-17 03:39:32.000000000
3,751
One of the main open research issues in Service Oriented Computing is to propose automated techniques to analyse service interfaces. A first problem, called compatibility, aims at determining whether a set of services (two in this paper) can be composed together and interact with each other as expected. Another related problem is to check the substitutability of one service with another. These problems are especially difficult when behavioural descriptions (i.e., message calls and their ordering) are taken into account in service interfaces. Interfaces should capture the service behaviour as faithfully as possible to make their automated analysis possible while not exhibiting implementation details. In this position paper, we choose Labelled Transition Systems to specify the behavioural part of service interfaces. In particular, we show that internal behaviours (tau transitions) are necessary in these transition systems in order to detect subtle errors that may occur when composing a set of services together. We also show that tau transitions should be handled differently in the compatibility and substitutability problems: the former requires checking whether compatibility is preserved every time a tau transition is traversed in one interface, whereas the latter requires a precise analysis of tau branchings in order to make the substitution preserve the properties (e.g., a compatibility notion) which were ensured before replacement.
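The toy sketch below illustrates only the compatibility side of the problem, under strong simplifying assumptions: services are encoded as labelled transition systems, "a!"/"a?" denote complementary send/receive actions, tau moves are internal, and compatibility is approximated as deadlock-freedom of the synchronous product. It is not the paper's formal definition.

```python
# Toy compatibility check over two labelled transition systems (LTSs).
from itertools import product

def compatible(lts1, lts2, init1, init2, finals1, finals2):
    seen, stack = set(), [(init1, init2)]
    while stack:
        s1, s2 = stack.pop()
        if (s1, s2) in seen:
            continue
        seen.add((s1, s2))
        succs = []
        # internal (tau) moves can be taken independently by either side
        succs += [(t1, s2) for (l, t1) in lts1.get(s1, []) if l == "tau"]
        succs += [(s1, t2) for (l, t2) in lts2.get(s2, []) if l == "tau"]
        # synchronization on complementary send/receive labels ("req!" with "req?")
        for (l1, t1), (l2, t2) in product(lts1.get(s1, []), lts2.get(s2, [])):
            if l1 != "tau" and l2 != "tau" and l1[:-1] == l2[:-1] and l1[-1] != l2[-1]:
                succs.append((t1, t2))
        if not succs and not (s1 in finals1 and s2 in finals2):
            return False                   # deadlock in a non-final product state
        stack.extend(succs)
    return True

client = {0: [("req!", 1)], 1: [("ack?", 2)]}
server = {0: [("req?", 1)], 1: [("tau", 2)], 2: [("ack!", 3)]}
print(compatible(client, server, 0, 0, finals1={2}, finals2={3}))   # True
```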
2010-10-14 05:16:22.000000000
2,051
Open-Source Software (OSS) vulnerabilities bring great challenges to software security and pose potential risks to our society. Enormous efforts have been devoted to automated vulnerability detection, among which deep learning (DL)-based approaches have proven to be the most effective. However, the current labeled data present the following limitations: (1) Tangled Patches: Developers may submit code changes unrelated to vulnerability fixes within patches, leading to tangled patches. (2) Lacking Inter-procedural Vulnerabilities: The existing vulnerability datasets typically contain function-level and file-level vulnerabilities, ignoring the relations between functions, thus rendering the approaches unable to detect the inter-procedural vulnerabilities. (3) Outdated Patches: The existing datasets usually contain outdated patches, which may bias the model during training. To address the above limitations, in this paper, we propose an automated data collection framework and construct the first repository-level high-quality vulnerability dataset named ReposVul. The proposed framework mainly contains three modules: (1) A vulnerability untangling module, aiming at distinguishing vulnerability-fixing related code changes from tangled patches, in which the Large Language Models (LLMs) and static analysis tools are jointly employed. (2) A multi-granularity dependency extraction module, aiming at capturing the inter-procedural call relationships of vulnerabilities, in which we construct multiple-granularity information for each vulnerability patch, including repository-level, file-level, function-level, and line-level. (3) A trace-based filtering module, aiming at filtering the outdated patches, which leverages the file path trace-based filter and commit time trace-based filter to construct an up-to-date dataset.
2024-01-23 10:28:33.000000000
4,646
Background. To cope with the rapidly growing complexity of contemporary software architecture, tracing has become an increasingly critical practice and has been widely adopted by software engineers. By adopting tracing tools, practitioners are able to monitor, debug, and optimize distributed software architectures easily. However, with an excessive number of valid candidates, researchers and practitioners have a hard time finding and selecting suitable tracing tools by systematically considering their features and advantages. Objective. To such a purpose, this paper aims to provide an overview of popular Open tracing tools via comparison. Method. Herein, we first identified 30 tools in an objective, systematic, and reproducible manner adopting the Systematic Multivocal Literature Review protocol. Then, we characterized each tool looking at its 1) measured features, 2) popularity both in peer-reviewed literature and online media, and 3) benefits and issues. We used topic modeling and sentiment analysis to extract and summarize the benefits and issues. Specifically, we adopted ChatGPT to support the topic interpretation. Results. As a result, this paper presents a systematic comparison amongst the selected tracing tools in terms of their features, popularity, benefits and issues. Conclusion. The result mainly shows that each tracing tool provides a unique combination of features with different pros and cons. The contribution of this paper is to provide practitioners with a better understanding of the tracing tools, facilitating their adoption.
2022-07-12 15:30:46.000000000
12,692
Simulators play a crucial role in autonomous driving, offering significant time, cost, and labor savings. Over the past few years, the number of simulators for autonomous driving has grown substantially. However, there is a growing concern about the validity of algorithms developed and evaluated in simulators, indicating a need for a thorough analysis of the development status of the simulators. To bridge the gap in research, this paper analyzes the evolution of simulators and explains how the functionalities and utilities have developed. Then, the existing simulators are categorized based on their task applicability, providing researchers with a taxonomy to swiftly assess a simulator's suitability for specific tasks. Recommendations for select simulators are presented, considering factors such as accessibility, maintenance status, and quality. Recognizing potential hazards in simulators that could impact the confidence of simulation experiments, the paper dedicates substantial effort to identifying and justifying critical issues in actively maintained open-source simulators. Moreover, the paper reviews potential solutions to address these issues, serving as a guide for enhancing the credibility of simulators.
2023-11-17 09:54:35.000000000
14,516
Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executed whenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instance activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.
2018-08-07 09:52:59.000000000
3,914
Modern computer systems are ubiquitous in contemporary life yet many of them remain opaque. This poses significant challenges in domains where desiderata such as fairness or accountability are crucial. We suggest that the best strategy for achieving system transparency varies depending on the specific source of opacity prevalent in a given context. Synthesizing and extending existing discussions, we propose a taxonomy consisting of eight sources of opacity that fall into three main categories: architectural, analytical, and socio-technical. For each source, we provide initial suggestions as to how to address the resulting opacity in practice. The taxonomy provides a starting point for requirements engineers and other practitioners to understand contextually prevalent sources of opacity, and to select or develop appropriate strategies for overcoming them.
2023-07-24 22:46:00.000000000
1,116
Domain-Specific Languages (DSLs) help practitioners in contributing solutions to challenges of specific domains. The efficient development of user-friendly DSLs suitable for industrial practitioners with little expertise in modelling is still challenging. For such practitioners, who often do not model on a daily basis, there is a need to reduce repetitive modelling tasks and to provide simplified visual representations of DSL parts. For industrial language engineers, there is no methodical support for providing such guidelines or documentation as part of reusable language modules. Previous research addresses either the reuse of languages or guidelines for modelling. For the efficient industrial deployment of DSLs, their combination is essential: the efficient engineering of DSLs from reusable modules that feature integrated documentation and guidelines for industrial practitioners. To solve these challenges, we propose a systematic approach for the industrial engineering of DSLs based on the concept of reusable DSL Building Blocks, which rests on several years of experience in the industrial engineering of DSLs and their deployment to various organizations. We qualitatively investigated our approach via a focus group consisting of five participants from industry and research. Ultimately, DSL Building Blocks support industrial language engineers in developing more usable DSLs and industrial practitioners in achieving their modelling goals more efficiently.
2021-03-17 08:11:09.000000000
8,776
The requirements in automation, digitalization, and fast computations have loaded the IT sector with expectations of highly reliable, efficient, and cost-effective software. The process of testing, verification, and validation of software products consumes 50-75% of the total revenue, and if the testing process is ineffective, "n" times the expenditure must be invested to mend the havoc caused. A delay in project completion is often attributed to the testing phase because of the numerous cycles of the debugging process. The software testing process determines the face of the product released to the user. It sets the standard and reliability of a company's outputs. As complexity increases, testing becomes more intensive in order to examine all the outliers and the various branches of the processing flow. The testing process is automated using software tools to avoid the tedious manual process of generating test inputs and validation criteria, which certifies the program only to a certain confidence level in the presence of outliers.
2022-09-06 02:47:20.000000000
7,483
Feature location attempts to assist developers in discovering functionality in source code. Many textual feature location techniques utilize information retrieval and rely on comments and identifiers of source code to describe software entities. An interesting alternative would be to employ the changeset descriptions of the code altered in that changeset as a data source to describe such software entities. To investigate this, we implement a technique utilizing changeset descriptions and conduct an empirical study to observe this technique's overall performance. Moreover, we study how the granularity (i.e. file or method level of software entities) and changeset range inclusion (i.e. most recent or all historical changesets) affect such an approach. The results of a preliminary study with the Rhino and Mylyn.Tasks systems suggest that the approach could lead to a potentially efficient feature location technique. They also suggest that it is advantageous, in terms of configuration effort, to apply the technique at method-level granularity, and that older changesets from older systems may reduce the effectiveness of the technique.
2024-02-08 00:27:56.000000000
4,380
ESBMC implements many state-of-the-art techniques for model checking. We report on new and improved features that allow us to obtain verification results for previously unsupported programs and properties. ESBMC employs a new static interval analysis of expressions in programs to increase verification performance. This includes interval-based reasoning over booleans and integers, forward and backward contractors, and particular optimizations related to singleton intervals because of their ubiquity. Other relevant improvements concern the verification of concurrent programs, as well as several operational models, internal ones, and also those of libraries such as pthread and the C mathematics library. An extended memory safety analysis now allows tracking of memory leaks that are considered still reachable.
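As a small illustration of the kind of reasoning an interval analysis performs, the sketch below implements toy integer intervals where singleton intervals (lo == hi) collapse to exact arithmetic, mirroring the optimization mentioned above. It is illustrative only and unrelated to ESBMC's implementation.

```python
# Minimal interval arithmetic with a singleton-interval fast path.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int

    def is_singleton(self):
        return self.lo == self.hi

    def __add__(self, other):
        if self.is_singleton() and other.is_singleton():
            v = self.lo + other.lo           # exact result, no bound computation needed
            return Interval(v, v)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        prods = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(prods), max(prods))

x = Interval(2, 2)          # singleton: the analysis knows x == 2
y = Interval(-1, 3)
print(x + y)                # Interval(lo=1, hi=5)
print(x * y)                # Interval(lo=-2, hi=6)
```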
2023-12-20 22:27:32.000000000
15,531
Nowadays, we are witnessing an increasing adoption of Artificial Intelligence (AI) to develop techniques aimed at improving the reliability, effectiveness, and overall quality of software systems. Deep reinforcement learning (DRL) has recently been successfully used for automation in complex tasks such as game testing and solving the job-shop scheduling problem. However, these specialized DRL agents, trained from scratch on specific tasks, suffer from a lack of generalizability to other tasks and they need substantial time to be developed and re-trained effectively. Recently, DRL researchers have begun to develop generalist agents, able to learn a policy from various environments and capable of achieving performances similar to or better than specialist agents in new tasks. In the Natural Language Processing or Computer Vision domain, these generalist agents are showing promising adaptation capabilities to never-before-seen tasks after a light fine-tuning phase and achieving high performance. This paper investigates the potential of generalist agents for solving SE tasks. Specifically, we conduct an empirical study aimed at assessing the performance of two generalist agents on two important SE tasks: the detection of bugs in games (for two games) and the minimization of makespan in a scheduling task, to solve the job-shop scheduling problem (for two instances). Our results show that the generalist agents outperform the specialist agents with very little effort for fine-tuning, achieving a 20% reduction of the makespan over specialized agent performance on task-based scheduling. In the context of game testing, some generalist agent configurations detect 85% more bugs than the specialist agents. Building on our analysis, we provide recommendations for researchers and practitioners looking to select generalist agents for SE tasks, to ensure that they perform effectively.
2023-12-22 21:14:37.000000000
896
The ever-growing demand for remote sensing data products from the user community has resulted in the launch of many Indian and foreign remote sensing satellites. The diversity in remote sensing sensors has resulted in heterogeneous software and hardware environments for generating geospatial data products. The workflow automation software, known as the information management system, is in place at the National Remote Sensing Centre (NRSC), catering to the needs of data processing and data dissemination. The software components of the workflow are interfaced across heterogeneous environments and are executed by the data processing software in automated and semi-automated modes. For every new satellite being launched, the software is modified or upgraded if new business processes are introduced. In this study, we propose a software architecture that provides more flexible automation with far less code to manage. The study also addresses the utilization and extraction of useful information from historical production and customer details. A comparison of the current workflow software architecture with existing practices in industry like Service Oriented Architecture (SOA), Extensible Markup Language (XML), and event-based architectures has been made. A new hybrid approach based on the industry practices is proposed to improve the existing workflow.
2015-09-24 11:46:20.000000000
5,802
C-O Diagrams have been introduced as a means to provide a visual representation of normative texts and electronic contracts, where it is possible to represent the obligations, permissions and prohibitions of the different signatories, as well as the penalties in case of non-fulfilment of their obligations and prohibitions. In such diagrams we are also able to represent absolute and relative timing constraints. In this paper we consider a formal semantics for C-O Diagrams based on a network of timed automata and we present several relations to check the consistency of a contract in terms of realizability, to analyze whether an implementation satisfies the requirements defined in its contract, and to compare several implementations using the executed permissions as criteria.
2012-09-08 09:16:58.000000000
3,957
Due to notable discoveries in the fast evolving field of complex networks, recent research in software engineering has also focused on representing software systems with networks. Previous work has observed that these networks follow scale-free degree distributions and reveal small-world phenomena, while we here explore another property commonly found in different complex networks, i.e. community structure. We adopt class dependency networks, where nodes represent software classes and edges represent dependencies among them, and show that these networks reveal a significant community structure, characterized by similar properties as observed in other complex networks. However, although intuitive and anticipated by different phenomena, identified communities do not exactly correspond to software packages. We empirically confirm our observations on several networks constructed from Java and various third party libraries, and propose different applications of community detection to software engineering.
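A minimal sketch of the analysis described above: build a class dependency network and detect communities by modularity optimization, then compare them against package prefixes. The class names and edges are hypothetical; real networks would be extracted from Java source or bytecode.

```python
# Community detection on a toy class dependency network with networkx.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.DiGraph()
g.add_edges_from([
    ("ui.Window", "ui.Button"), ("ui.Window", "ui.Menu"),
    ("ui.Button", "core.Event"), ("core.Event", "core.Dispatcher"),
    ("io.FileStore", "core.Dispatcher"), ("io.FileStore", "io.Serializer"),
])

# Community detection is typically run on the undirected view of the dependency graph.
communities = greedy_modularity_communities(g.to_undirected())
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")

# Compare detected communities against package boundaries (prefix before the dot).
packages = {node: node.split(".")[0] for node in g}
print(packages)
```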
2011-05-09 18:17:20.000000000
1,677
Context: Large-scale distributed projects are typically the result of collective efforts performed by multiple developers with heterogeneous personalities. Objective: We aim to find evidence that personalities can explain developers' behavior in large-scale distributed projects. For example, the propensity to trust others - a critical factor for the success of global software engineering - has been found to positively influence the result of code reviews in distributed projects. Method: In this paper, we perform a quantitative analysis of ecosystem-level data from the code commits and email messages contributed by the developers working on the Apache Software Foundation (ASF) projects, as representative of large-scale distributed projects. Results: We find that there are three common types of personality profiles among Apache developers, characterized in particular by their level of Agreeableness and Neuroticism. We also confirm that developers' personality is stable over time. Moreover, personality traits do not vary with their role, membership, and extent of contribution to the projects. We also find evidence that more open developers are more likely to become contributors to Apache projects. Conclusion: Overall, our findings reinforce the need for future studies on human factors in software engineering to use psychometric tools to control for differences in developers' personalities.
2019-05-28 21:56:21.000000000
3,348
The assessment of program functionality can generally be accomplished with straight-forward unit tests. However, assessing the design quality of a program is a much more difficult and nuanced problem. Design quality is an important consideration since it affects the readability and maintainability of programs. Assessing design quality and giving personalized feedback is a very time-consuming task for instructors and teaching assistants. This limits the scale of giving personalized feedback to small class settings. Further, design quality is nuanced and is difficult to concisely express as a set of rules. For these reasons, we propose a neural network model to both automatically assess the design of a program and provide personalized feedback to guide students on how to make corrections. The model's effectiveness is evaluated on a corpus of student programs written in Python. The model has an accuracy rate from 83.67% to 94.27%, depending on the dataset, when predicting design scores as compared to historical instructor assessment. Finally, we present a study where students tried to improve the design of their programs based on the personalized feedback produced by the model. Students who participated in the study improved their program design scores by 19.58%.
2021-06-02 04:03:35.000000000
1,172
Online Automated Privacy Policy Generators (APPGs) are tools used by app developers to quickly create app privacy policies, which are required by privacy regulations to be incorporated into each mobile app. The creation of these tools brings convenience to app developers; however, the quality of these tools puts developers and stakeholders at legal risk. In this paper, we conduct an empirical study to assess the quality of online APPGs. We analyze the completeness of privacy policies, determine what categories and items should be covered in a complete privacy policy, and conduct APPG assessment with boilerplate apps. The results of the assessment show that, due to the lack of static or dynamic analysis of the app's behavior, developers may encounter two types of issues caused by APPGs. First, the generated policies could be incomplete because they do not cover all the essential items required by a privacy policy. Second, some generated privacy policies contain unnecessary personal information collection or arbitrary commitments inconsistent with user input. Ultimately, the defects of APPGs may potentially lead to serious legal issues. We hope that the results and insights developed in this paper can motivate the healthy and ethical development of APPGs towards generating a more complete, accurate, and robust privacy policy.
2020-02-11 16:17:09.000000000
2,395
Software engineering is a systematic, disciplined, and quantifiable approach that has a significant impact on large-scale and complex software development. Scores of well-established software process models have long been adopted in the software development life cycle and guide stakeholders towards the completion of the final software product. Various emerging and futuristic technologies are evolving that need the attention of the software engineering community, to determine whether conventional software process techniques can carry their core fundamentals into futuristic software development. In this paper, we study the impact of existing software engineering processes and models, including Agile and DevOps, on Blockchain-Oriented Software Engineering. We also examine the essentiality of adopting state-of-the-art concepts and evolving the current software engineering process for blockchain-oriented systems. We discuss insights into software project management practices in BOS development. The findings of this study indicate that utilizing state-of-the-art techniques in software processes for futuristic technology would be challenging, and extensive research is needed to address and improve state-of-the-art software engineering processes and methodologies for novel technologies.
2022-06-28 20:38:36.000000000
12,870
This paper presents a general XML-based distributed software architecture with the aim of accessing and sharing resources in an open client/server environment. The paper is organized as follows: First, we introduce the idea of a "General Distributed Software Architecture". Second, we describe the general framework in which this architecture is used. Third, we describe the process of information exchange and we introduce some technical issues involved in the implementation of the proposed architecture. Finally, we present some projects which are currently using, or which should use, the proposed architecture.
2009-09-11 07:37:04.000000000
5,563
Systems are becoming more complex to develop and maintain. Existing systems which do not have much in common at first glance are being connected, due to technical progress, even if it was never intended that way. It is an upcoming challenge to handle these large-scale and complex systems. A solution must be found to manage these "Interwoven Systems". We therefore discuss where approaches from "Organic Computing" can help to handle some of these upcoming challenges.
2018-09-11 08:31:56.000000000
9,031
Static bug detection tools help developers detect code problems. However, it is known that they remain underutilized due to various reasons. Recent advances to incorporate static bug detectors in modern software development workflows can better motivate developers to fix the reported warnings on the fly. In this paper, we study the effectiveness of the state-of-the-art (SOA) solution in tracking warnings by static bug detectors and propose a better solution based on our analysis of the insufficiencies of the SOA solution. In particular, we examined four large-scale open-source systems and crafted a data set of 3,452 static code warnings by two static bug detectors. We manually uncover the ground-truth evolution status of the selected warnings: persistent, resolved, or newly-introduced. Moreover, upon manual analysis, we identified the critical reasons behind the insufficiencies of the SOA matching algorithm. Finally, we propose a better approach to improve the tracking of static warnings over software development history. Our evaluation shows that our proposed approach provides a significant improvement in the precision of the tracking, i.e., from 66.9% to 90.0%.
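The toy sketch below illustrates the warning-tracking problem itself: match each warning from an old revision to a warning in a new revision, first exactly and then with a small location tolerance, and label the rest resolved or newly-introduced. The keys and tolerance are assumptions made for illustration, not the matching approach proposed in the paper.

```python
# Toy tracking of static analysis warnings across two revisions.

def track(old_warnings, new_warnings, line_tolerance=5):
    status = {}
    unmatched_new = list(new_warnings)
    for ow in old_warnings:
        # exact match on (file, rule, warned code line)
        match = next((nw for nw in unmatched_new
                      if (nw["file"], nw["rule"], nw["code"]) ==
                         (ow["file"], ow["rule"], ow["code"])), None)
        if match is None:   # fall back to a fuzzy match on nearby lines
            match = next((nw for nw in unmatched_new
                          if nw["file"] == ow["file"] and nw["rule"] == ow["rule"]
                          and abs(nw["line"] - ow["line"]) <= line_tolerance), None)
        if match:
            unmatched_new.remove(match)
            status[ow["id"]] = "persistent"
        else:
            status[ow["id"]] = "resolved"
    for nw in unmatched_new:
        status[nw["id"]] = "newly-introduced"
    return status

old = [{"id": "w1", "file": "a.c", "rule": "NULL_DEREF", "line": 10, "code": "p->x = 1;"}]
new = [{"id": "w2", "file": "a.c", "rule": "NULL_DEREF", "line": 12, "code": "p->x = 1;"}]
print(track(old, new))      # {'w1': 'persistent'}
```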
2021-03-24 15:28:32.000000000
12,943
This paper shows how the maximum possible configuration efficiency of an indefinitely large software system is constrained by choosing a fixed upper limit on the number of program units per subsystem. It is then shown how the configuration efficiency of an indefinitely large software system depends on the ratio of the total number of software units that violate information hiding to the total number of program units.
2008-11-16 14:47:16.000000000
12,286
The diversity of data management systems affords developers the luxury of building systems with heterogeneous systems that address needs that are unique to the data. It allows one to mix-n-match systems that can store, query, update, and process data, based on specific use cases. However, this heterogeneity brings with it the burden of developing custom interfaces for each data management system. Developers are required to build high-performance APIs for data access while adopting best-practices governing security, data privacy, and access control. These include user authentication, data authorization, role-based access control, and audit mechanisms to avoid compromising the security standards mandated by data providers. In this paper, we present Bindaas, a secure, extensible big data middleware that offers uniform access to diverse data sources. By providing a standard RESTful web service interface to the data sources, Bindaas exposes query, update, store, and delete functionality of the data sources as data service APIs, while providing turn-key support for standard operations involving security, access control, and audit-trails. Bindaas consists of optional features, such as query and response modifiers as well as plugins that implement composable and reusable data operations on the data. The research community has deployed Bindaas in various production environments in healthcare. Our evaluations highlight the efficiency of Bindaas in serving concurrent requests to data source instances. We further observe that the overheads caused by Bindaas on the data sources are negligible.
2019-12-16 12:40:26.000000000
13,609
Formal Methods for the Informal Engineer (FMIE) was a workshop held at the Broad Institute of MIT and Harvard in 2021 to explore the potential role of verified software in the biomedical software ecosystem. The motivation for organizing FMIE was the recognition that the life sciences and medicine are undergoing a transition from being passive consumers of software and AI/ML technologies to fundamental drivers of new platforms, including those which will need to be mission and safety-critical. Drawing on conversations leading up to and during the workshop, we make five concrete recommendations to help software leaders organically incorporate tools, techniques, and perspectives from formal methods into their project planning and development trajectories.
2021-03-31 22:23:48.000000000
3,464
Web3 is leading a wave of the next generation of web services that even many Web2 applications are keen to ride. However, the lack of Web3 background among Web2 developers hinders easy and effective access and transition. On the other hand, Web3 applications desire encouragement and advertisement from conventional Web2 companies and projects due to their low market shares. In this paper, we propose a seamless transition framework that transitions Web2 to Web3, named WebttCom, after exploring the connotation of Web3 and the key differences between Web2 and Web3 applications. We also provide a full-stack implementation as a use case to support the proposed framework, followed by interviews with five participants that yield four positive and one neutral response. We confirm that the proposed framework WebttCom addresses the defined research question, and that the implementation satisfies the framework well in terms of strong necessity, usability, and completeness, based on the interview results.
2022-10-11 02:39:06.000000000
13,480
Software documentation captures detailed knowledge about a software product, e.g., code, technologies, and design. It plays an important role in the coordination of development teams and in conveying ideas to various stakeholders. However, software documentation can be hard to comprehend if it is written with jargon and complicated sentence structure. In this study, we explored the potential of text simplification techniques in the domain of software engineering to automatically simplify GitHub README files. We collected software-related pairs of GitHub README files consisting of 14,588 entries, aligned difficult sentences with their simplified counterparts, and trained a Transformer-based model to automatically simplify difficult versions. To mitigate the sparse and noisy nature of the software-related simplification dataset, we applied general text simplification knowledge to this field. Since many general-domain difficult-to-simple Wikipedia document pairs are already publicly available, we explored the potential of transfer learning by first training the model on the Wikipedia data and then fine-tuning it on the README data. Using automated BLEU scores and human evaluation, we compared the performance of different transfer learning schemes and the baseline models without transfer learning. The transfer learning model using the best checkpoint trained on a general topic corpus achieved the best performance of 34.68 BLEU score and statistically significantly higher human annotation scores compared to the rest of the schemes and baselines. We conclude that using transfer learning is a promising direction to circumvent the lack of data and the style drift problem in software README file simplification, and it achieves a better trade-off between simplification and preservation of meaning.
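A minimal sketch of the two-stage transfer-learning scheme described above, assuming a generic seq2seq model ("t5-small" as a stand-in) and tiny in-memory sentence pairs; the actual Wikipedia and README corpora, model checkpoints, and hyper-parameters of the study are not reproduced.

```python
# Two-stage fine-tuning: general-domain simplification pairs first, README pairs second.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def encode(batch):
    # Tokenize difficult sentences as inputs and simplified ones as labels.
    enc = tok(batch["complex"], truncation=True, max_length=128)
    enc["labels"] = tok(batch["simple"], truncation=True, max_length=128)["input_ids"]
    return enc

def fine_tune(pairs, out_dir, epochs=1):
    ds = Dataset.from_dict(pairs).map(encode, batched=True,
                                      remove_columns=["complex", "simple"])
    args = Seq2SeqTrainingArguments(output_dir=out_dir,
                                    num_train_epochs=epochs,
                                    per_device_train_batch_size=2)
    trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=ds,
                             data_collator=DataCollatorForSeq2Seq(tok, model=model))
    trainer.train()

# Stage 1: general-domain pairs (Wikipedia-style), then Stage 2: README pairs.
wiki_pairs = {"complex": ["The utilisation of the library necessitates installation."],
              "simple":  ["You must install the library before using it."]}
readme_pairs = {"complex": ["Instantiate the client prior to invoking the API."],
                "simple":  ["Create the client before calling the API."]}
fine_tune(wiki_pairs, "stage1_general")
fine_tune(readme_pairs, "stage2_readme")
```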
2023-08-18 11:32:23.000000000
2,149
Ajax applications are designed to have high user interactivity and low user-perceived latency. Real-time dynamic web data such as news headlines, stock tickers, and auction updates need to be propagated to the users as soon as possible. However, Ajax still suffers from the limitations of the Web's request/response architecture which prevents servers from pushing real-time dynamic web data. Such applications usually use a pull style to obtain the latest updates, where the client actively requests the changes based on a predefined interval. It is possible to overcome this limitation by adopting a push style of interaction where the server broadcasts data when a change occurs on the server side. Both these options have their own trade-offs. This paper explores the fundamental limits of browser-based applications and analyzes push solutions for Ajax technology. It also shows the results of an empirical study comparing push and pull.
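The sketch below shows only the pull style discussed above: a client polling a hypothetical endpoint on a fixed interval, so update latency is bounded by the polling interval. A push (Comet-style) variant would instead hold the request open until new data arrives, trading more open connections for lower latency.

```python
# Minimal pull-style client: poll a (hypothetical) JSON endpoint at a fixed interval.
import time
import requests

def poll(url, interval_seconds=5, rounds=3):
    last_seen = None
    for _ in range(rounds):
        resp = requests.get(url, timeout=10)      # one request per interval
        data = resp.json()
        if data != last_seen:
            print("update received:", data)
            last_seen = data
        time.sleep(interval_seconds)              # latency is bounded by the interval

poll("https://example.com/api/headlines")         # hypothetical endpoint
```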
2007-06-27 09:14:40.000000000
12,906
To refactor already working code while keeping reliability, compatibility and perhaps security, we can borrow ideas from micropass/nanopass compilers. By treating the procedure of software refactoring as composing code transformations, and compressing repetitive transformations with automation tools, we can often obtain representations of refactoring processes short enough that their correctness can be analysed manually. Unlike in compilers, in refactoring we usually only need to consider the codebase in question, so regular text processing can be extensively used, fully exploiting patterns only present in the codebase. Aside from the direct application of code transformations from compilers, many other kinds of equivalence properties may also be exploited. In this paper, two refactoring projects are given as the main examples, where 10-100 times simplification has been achieved with the application of a few kinds of useful transformations.
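A small sketch of the idea of expressing a refactoring as a composition of simple, individually reviewable text transformations. The two passes below (renaming an identifier, then wrapping calls) are hypothetical examples that exploit patterns assumed to hold in the codebase; they are not the transformations used in the paper's projects.

```python
# Compose small regex-based code transformations, micropass style.
import re

def rename_identifier(old, new):
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return lambda source: pattern.sub(new, source)

def wrap_calls_with_logging(func_name):
    pattern = re.compile(rf"\b{re.escape(func_name)}\((.*?)\)")
    return lambda source: pattern.sub(rf"logged({func_name}(\1))", source)

def compose(*passes):
    def run(source):
        for p in passes:                 # each pass is small enough to review by hand
            source = p(source)
        return source
    return run

refactor = compose(rename_identifier("getCfg", "get_config"),
                   wrap_calls_with_logging("get_config"))
print(refactor("value = getCfg(path)"))   # value = logged(get_config(path))
```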
2023-08-10 17:00:46.000000000
9,548
We present a verification technique for program safety that combines Iterated Specialization and Interpolating Horn Clause Solving. Our new method composes together these two techniques in a modular way by exploiting the common Horn Clause representation of the verification problem. The Iterated Specialization verifier transforms an initial set of verification conditions by using unfold/fold equivalence preserving transformation rules. During transformation, program invariants are discovered by applying widening operators. Then the output set of specialized verification conditions is analyzed by an Interpolating Horn Clause solver, hence adding the effect of interpolation to the effect of widening. The specialization and interpolation phases can be iterated, and also combined with other transformations that change the direction of propagation of the constraints (forward from the program preconditions or backward from the error conditions). We have implemented our verification technique by integrating the VeriMAP verifier with the FTCLP Horn Clause solver, based on Iterated Specialization and Interpolation, respectively. Our experimental results show that the integrated verifier improves the precision of each of the individual components run separately.
2014-12-01 11:35:47.000000000
9,993
A test oracle determines whether a system behaves correctly for a given input. Automatic testing techniques rely on an automated test oracle to test the system without user interaction. Important families of automated test oracles include Differential Testing and Metamorphic Testing, which are both black-box approaches; that is, they provide a test oracle that is oblivious to the system's internals. In this work, we propose Intramorphic Testing as a white-box methodology to tackle the test oracle problem. To realize an Intramorphic Testing approach, a modified version of the system is created, for which, given a single input, a test oracle can be provided that relates the output of the original and modified systems. As a concrete example, by replacing a greater-equals operator in the implementation of a sorting algorithm with smaller-equals, it would be expected that the output of the modified implementation is the reverse output of the original implementation. In this paper, we introduce the methodology and illustrate it via a set of use cases.
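The sketch below directly follows the sorting example in the abstract: a variant of the implementation with the comparison flipped should produce the reverse of the original's output, which yields a test oracle for any single input. The insertion-sort implementation is a stand-in chosen for brevity.

```python
# Intramorphic-style oracle: flipped comparison => reversed output.
import random

def insertion_sort(items, ascending=True):
    result = []
    for x in items:
        i = 0
        # original uses <=; the modified variant flips it to >= via the flag
        while i < len(result) and ((result[i] <= x) if ascending else (result[i] >= x)):
            i += 1
        result.insert(i, x)
    return result

for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    original = insertion_sort(data, ascending=True)
    modified = insertion_sort(data, ascending=False)
    assert modified == list(reversed(original)), (data, original, modified)
print("intramorphic relation held on all random inputs")
```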
2022-10-17 22:19:22.000000000
488
Kiwi is a minimalist and extendable Constraint Programming (CP) solver specifically designed for education. The particularities of Kiwi lie in its generic trailing state restoration mechanism and its modulable use of variables. By developing Kiwi, the author does not aim to provide an alternative to full featured constraint solvers but rather to provide readers with a basic architecture that will (hopefully) help them to understand the core mechanisms hidden under the hood of constraint solvers, to develop their own extended constraint solver, or to test innovative ideas.
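A minimal sketch of a generic trailing mechanism of the kind the abstract refers to: before a variable is modified, an undo closure is pushed onto a trail, and backtracking pops everything down to the last choice point. This is a generic illustration in Python, not Kiwi's actual implementation.

```python
# Trail-based state restoration for backtracking search.

class Trail:
    def __init__(self):
        self.entries = []        # restore closures, newest last
        self.marks = []          # indices into entries, one per choice point

    def push(self, restore_fn):
        self.entries.append(restore_fn)

    def new_level(self):
        self.marks.append(len(self.entries))

    def undo_level(self):
        mark = self.marks.pop()
        while len(self.entries) > mark:
            self.entries.pop()()             # run restore closures in reverse order

class TrailedInt:
    def __init__(self, trail, value):
        self.trail, self.value = trail, value

    def set(self, new_value):
        old = self.value
        self.trail.push(lambda: setattr(self, "value", old))  # remember how to undo
        self.value = new_value

trail = Trail()
x = TrailedInt(trail, 10)
trail.new_level()     # choice point
x.set(3)
x.set(1)
trail.undo_level()    # backtrack
print(x.value)        # 10
```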
2017-04-26 12:17:20.000000000
5,228
Advances in natural language processing have resulted in large language models (LLMs) that are capable of generating understandable and sensible written text. Recent versions of these models, such as OpenAI Codex and GPT-3, can generate code and code explanations. However, it is unclear whether and how students might engage with such explanations. In this paper, we report on our experiences generating multiple code explanation types using LLMs and integrating them into an interactive e-book on web software development. We modified the e-book to make LLM-generated code explanations accessible through buttons next to code snippets in the materials, which allowed us to track the use of the explanations as well as to ask for feedback on their utility. Three different types of explanations were available for students for each explainable code snippet; a line-by-line explanation, a list of important concepts, and a high-level summary of the code. Our preliminary results show that all varieties of explanations were viewed by students and that the majority of students perceived the code explanations as helpful to them. However, student engagement appeared to vary by code snippet complexity, explanation type, and code snippet length. Drawing on our experiences, we discuss future directions for integrating explanations generated by LLMs into existing computer science classrooms.
2022-11-01 10:37:26.000000000
8,884
We introduce refinement in the function-behaviour-structure framework for design, as described by John Gero, in order to deal with complexity. We do this by connecting the frameworks for the design of two models, one the refinement of the other. The result is a framework for the design of an object that supports levels of abstraction in the design. This framework can easily be extended for the design of an object on more than two levels of abstraction.
2013-09-09 22:48:24.000000000
5,735
Reproducibility is an increasing concern in Artificial Intelligence (AI), particularly in the area of Deep Learning (DL). Being able to reproduce DL models is crucial for AI-based systems, as it is closely tied to various tasks like training, testing, debugging, and auditing. However, DL models are challenging to reproduce due to issues like randomness in the software (e.g., DL algorithms) and non-determinism in the hardware (e.g., GPU). There are various practices to mitigate some of the aforementioned issues. However, many of them are either too intrusive or can only work for a specific usage context. In this paper, we propose a systematic approach to training reproducible DL models. Our approach includes three main parts: (1) a set of general criteria to thoroughly evaluate the reproducibility of DL models for two different domains, (2) a unified framework which leverages a record-and-replay technique to mitigate software-related randomness and a profile-and-patch technique to control hardware-related non-determinism, and (3) a reproducibility guideline which explains the rationales and the mitigation strategies on conducting a reproducible training process for DL models. Case study results show our approach can successfully reproduce six open-source and one commercial DL model.
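As a common-practice sketch (not the paper's record-and-replay or profile-and-patch framework), the snippet below shows how software-related randomness is typically pinned when training a PyTorch model; hardware-related non-determinism needs additional, library-specific handling and may not be fully removable.

```python
# Pin the common sources of software randomness for a PyTorch training run.
import os
import random
import numpy as np
import torch

def make_deterministic(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Ask PyTorch to prefer deterministic kernels and fail loudly otherwise.
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False
    # Required by some deterministic CUDA kernels (cuBLAS workspace configuration).
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

make_deterministic(42)
print(torch.rand(3))   # identical across runs with the same seed and library versions
```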
2022-02-03 17:27:15.000000000
11,649
In this paper, we describe the research on how perceptual load can affect programming performance in people with symptoms of Attention Deficit / Hyperactivity Disorder (ADHD). We asked developers to complete the Barkley Deficits in Executive Functioning Scale, which indicates the presence and severity levels of ADHD symptoms. After that, participants solved mentally active programming tasks (coding) and monotonous ones (debugging) in the integrated development environment in high perceptual load modes (visually noisy) and low perceptual load modes (visually clear). The development environment was augmented with the plugin we wrote to track efficiency metrics, i.e. time, speed, and activity. We found that the perceptual load does affect programmers' efficiency. For mentally active tasks, the time of inserting the first character was shorter and the overall speed was higher in the low perceptual load mode. For monotonous tasks, the total time for the solution was less for the low perceptual load mode. Also, we found that the effect of perceptual load on programmers' efficiency differs between those with and without ADHD symptoms. This effect has a specificity: depending on efficiency measures and ADHD symptoms, one or another level of perceptual load might be beneficial. Our findings support the idea of behavioral assessment of users for providing appropriate accommodation for the workforce with special needs.
2023-02-12 19:11:43.000000000
12,648
The in-vehicle diagnostic and software update system, which supports remote diagnostics and Over-The-Air (OTA) software updates, is a critical attack target in automobiles. Adversaries can inject malicious software into vehicles or steal sensitive information through communication channels. Therefore, security analysis, which identifies potential security issues, needs to be conducted in system design. However, existing security analyses of in-vehicle systems are threat-oriented, which start with threat identification and assess risks by brainstorming. In this paper, a system-oriented approach is proposed on the basis of the System-Theoretic Process Analysis (STPA). The proposed approach extends the original STPA from the perspective of data flows and is applicable to information-flow-based systems. Besides, we propose a general model for in-vehicle diagnostic and software update systems and use it to establish a security analysis guideline. In comparison with threat-oriented approaches, the proposed approach shifts the focus from threats to system vulnerabilities and appears effective in protecting the system against known or even unknown threats. Furthermore, as an extension of the STPA, which has been proven to be applicable to high-level designs, the proposed approach can be well integrated into high-level analyses and perform co-design in different disciplines within a unified STPA framework.
2020-06-15 17:53:21.000000000
7,590
Reusable software components need expressive specifications. This paper outlines a rigorous foundation to model-based contracts, a method to equip classes with strong contracts that support accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend the classic Design by Contract with a notion of model, which underpins the precise definitions of such concepts as abstract equivalence and specification completeness. Experiments applying model-based contracts to libraries of data structures suggest that the method enables accurate specification of practical software.
2010-03-29 08:33:46.000000000
2,215
Test and evaluation is a necessary process for ensuring that engineered systems perform as intended under a variety of conditions, both expected and unexpected. In this work, we consider the unique challenges of developing a unifying test and evaluation framework for complex ensembles of cyber-physical systems with embedded artificial intelligence. We propose a framework that incorporates test and evaluation throughout not only the development life cycle, but continues into operation as the system learns and adapts in a noisy, changing, and contended environment. The framework accounts for the challenges of testing the integration of diverse systems at various hierarchical scales of composition while respecting that testing time and resources are limited. A generic use case is provided for illustrative purposes and research directions emerging as a result of exploring the use case via the framework are suggested.
2021-01-23 07:01:38.000000000
1,893
In this paper, we focus on studying duplicate logging statements, which are logging statements that have the same static text message. We manually studied over 4K duplicate logging statements and their surrounding code in five large-scale open source systems. We uncovered five patterns of duplicate logging code smells. For each instance of the duplicate logging code smell, we further manually identify the potentially problematic and justifiable cases. Then, we contact developers to verify our manual study result. We integrated our manual study result and the feedback of developers into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the five manually studied systems and three additional systems. In total, combining the results of DLFinder and our manual analysis, we reported 91 problematic duplicate logging code smell instances to developers and all of them have been fixed. We further study the relationship between duplicate logging statements, including the problematic instances of duplicate logging code smells, and code clones. We find that 83% of the duplicate logging code smell instances reside in cloned code, but 17% of them reside in micro-clones that are difficult to detect using automated clone detection tools. We also find that more than half of the duplicate logging statements reside in cloned code snippets, and a large portion of them reside in very short code blocks which may not be effectively detected by existing code clone detection tools. Our study shows that, in addition to general source code that implements the business logic, code clones may also result in bad logging practices that could increase maintenance difficulties.
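The toy sketch below only detects the basic phenomenon studied above, duplicate logging statements with the same static text, using a regular expression over Java-like sources; DLFinder's smell patterns and analysis are considerably richer. The file contents are inlined, hypothetical examples.

```python
# Find logging statements that share the same static text across files.
import re
from collections import defaultdict

LOG_CALL = re.compile(r'\b(?:log|logger|LOG)\.(?:trace|debug|info|warn|error)\(\s*"([^"]*)"')

def find_duplicate_logs(files):
    occurrences = defaultdict(list)
    for path, source in files.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            for message in LOG_CALL.findall(line):
                occurrences[message].append((path, lineno))
    # keep only static texts that appear in more than one logging statement
    return {msg: locs for msg, locs in occurrences.items() if len(locs) > 1}

files = {
    "A.java": 'void f() { LOG.warn("failed to open file"); }',
    "B.java": 'void g() { LOG.warn("failed to open file"); }',
}
print(find_duplicate_logs(files))
```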
2021-05-31 14:36:06.000000000
6,512
Python libraries are widely used for machine learning and scientific computing tasks today. APIs in Python libraries are deprecated due to feature enhancements and bug fixes in the same way as in other languages. These deprecated APIs are discouraged from being used in further software development. Manually detecting and replacing deprecated APIs is a tedious and time-consuming task due to the large number of API calls used in the projects. Moreover, the lack of proper documentation for these deprecated APIs makes the task challenging. To address this challenge, we propose an algorithm and a tool APIScanner that automatically detects deprecated APIs in Python libraries. This algorithm parses the source code of the libraries using abstract syntax trees (ASTs) and identifies the deprecated APIs via decorators, hard-coded warnings or comments. APIScanner is a Visual Studio Code Extension that highlights and warns the developer on the use of deprecated API elements while writing the source code. The tool can help developers to avoid using deprecated API elements without executing the code. We tested our algorithm and tool on six popular Python libraries, which detected 838 of 871 deprecated API elements. Demo of APIScanner: [LINK]. Documentation, tool, and source code can be found here: [LINK].
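A simplified sketch of one of the three detection sources mentioned above (decorators; hard-coded warnings and comments are not handled here): walk a library's AST with Python's ast module and collect functions carrying a decorator whose name contains "deprecated". It is illustrative and much narrower than APIScanner.

```python
# Detect functions marked with a @deprecated-style decorator via the ast module.
import ast

SOURCE = '''
from deprecated import deprecated

@deprecated(reason="use new_api instead")
def old_api():
    pass

def new_api():
    pass
'''

def decorator_name(node):
    # @deprecated(...) -> Call whose func is a Name/Attribute; bare @deprecated -> Name
    target = node.func if isinstance(node, ast.Call) else node
    if isinstance(target, ast.Attribute):
        return target.attr
    return getattr(target, "id", "")

def find_deprecated(source):
    deprecated = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if any("deprecated" in decorator_name(d).lower() for d in node.decorator_list):
                deprecated.append((node.name, node.lineno))
    return deprecated

print(find_deprecated(SOURCE))   # e.g. [('old_api', 5)]
```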
2021-02-17 09:05:54.000000000
13,322
Autonomous racing in robotics combines high-speed dynamics with the necessity for reliability and real-time decision-making. While such racing pushes software and hardware to their limits, many existing full-system solutions necessitate complex, custom hardware and software, and usually focus on Time-Trials rather than full unrestricted Head-to-Head racing, due to financial and safety constraints. This limits their reproducibility, making advancements and replication feasible mostly for well-resourced laboratories with comprehensive expertise in mechanical, electrical, and robotics fields. Researchers interested in the autonomy domain but with only partial experience in one of these fields, need to spend significant time with familiarization and integration. The ForzaETH Race Stack addresses this gap by providing an autonomous racing software platform designed for F1TENTH, a 1:10 scaled Head-to-Head autonomous racing competition, which simplifies replication by using commercial off-the-shelf hardware. This approach enhances the competitive aspect of autonomous racing and provides an accessible platform for research and development in the field. The ForzaETH Race Stack is designed with modularity and operational ease of use in mind, allowing customization and adaptability to various environmental conditions, such as track friction and layout. Capable of handling both Time-Trials and Head-to-Head racing, the stack has demonstrated its effectiveness, robustness, and adaptability in the field by winning the official F1TENTH international competition multiple times.
2024-03-17 09:41:55.000000000
9,294
Accountability aims to provide explanations for why unwanted situations occurred, thus providing means to assign responsibility and liability. As such, accountability has slightly different meanings across the sciences. In computer science, our focus is on providing explanations for technical systems, in particular if they interact with their physical environment using sensors and actuators and may do serious harm. Accountability is relevant when considering safety, security and privacy properties and we realize that all these incarnations are facets of the same core idea. Hence, in this paper we motivate and propose a model for accountability infrastructures that is expressive enough to capture all of these domains. At its core, this model leverages formal causality models from the literature in order to provide a solid reasoning framework. We show how this model can be instantiated for several real-world use cases.
2016-08-26 22:10:01.000000000
5,468
Software development is a complex activity which depends on diverse technologies and people's expertise. The approaches to developing software highly depend on these different characteristics, which form the context developers are subject to. This context contains massive knowledge, and not capturing it means knowledge is continuously lost. Although extensively researched, context in software development is still not explicit, nor integrated into a broader view of the context needed by software developers and tools. Therefore, developers' productivity is affected, as the ability to reuse this rich context is hampered. This paper presents a literature review on context for software development, structured around nine research questions. The purpose of this study is to make the discovered context explicit in an integrated view and to propose a platform to aid software development using context information. We believe supporting contextual knowledge through its representation and mining for recommendation and real-time provision can significantly improve big data software project development.
2019-10-05 03:54:05.000000000
5,131
Decision making and requirements scoping occupy central roles in helping to develop products that are demanded by the customers and ensuring company strategies are accurately realized in product scope. Many companies experience continuous and frequent scope changes and fluctuations but struggle to measure the phenomena and correlate the measurement to the quality of the requirements process. We present the results from an exploratory interview study among 22 participants working with requirements management processes at a large company that develops embedded systems for a global market. Our respondents shared their opinions about the current set of requirements management process metrics as well as what additional metrics they envisioned as useful. We present a set of metrics that describe the quality of the requirements scoping process. The findings provide practical insights that can be used as input when introducing new measurement programs for requirements management and decision making.
2016-11-08 13:43:51.000000000
15,312
Software product lines have recently been presented as one of the most promising improvements for efficient software development. Different research works contribute supporting arguments and discussions regarding the problems of producing a perfect software scheme. Traditional approaches and ad hoc software reuse are not effective in solving the problems concerning software competence. Following the rapid developments in software engineering over the past few years, studies show that some approaches are receiving extensive attention in both industry and academia. One such approach is software product line engineering, which supports the reuse of software in large organizations. Different industries are adopting product lines to enhance efficiency and reduce operational expenses when developing new products. This research paper offers an in-depth study of software engineering issues such as complexity, conformity, changeability, invisibility, time constraints, budget constraints, and security. We conducted several surveys by visiting different professional software development organizations and collected feedback from professional software engineers to analyze the real-world problems they face during the development of software systems. The survey results show that complexity is the most frequently occurring issue that software developers face while developing software applications, whereas invisibility is the problem reported most rarely.
2017-03-07 18:24:37.000000000
15,200
Advancing ocean science has a significant impact to the development of the world, from operating a safe navigation for vessels to maintaining a healthy and diverse ocean ecosystem. Various ocean software systems have been extensively adopted for different purposes, for instance, predicting hourly sea level elevation across shorelines, simulating large-scale ocean circulations, as well as integrating into Earth system models for weather forecasts and climate projections. Regardless of their significance, guaranteeing the trustworthiness of ocean software and modelling systems is a long-standing challenge. The testing of ocean software suffers a lot from the so-called oracle problem, which refers to the absence of test oracles mainly due to the nonlinear interactions of multiple physical variables and the high complexity in computation. In the ocean, observed tidal signals are distorted by non-deterministic physical variables, hindering us from knowing the "true" astronomical tidal constituents existing in the timeseries. In this paper, we present how to test tidal analysis and prediction (TAP) software based on metamorphic testing (MT), a simple yet effective testing approach to the oracle problem. In particular, we construct metamorphic relations from the periodic property of astronomical tide, and then use them to successfully detect a real-life defect in an open-source TAP software. We also conduct a series of experiments to further demonstrate the applicability and effectiveness of MT in the testing of TAP software. Our study not only justifies the potential of MT in testing more complex ocean software and modelling systems, but also can be expanded to assess and improve the quality of a broader range of scientific simulation software systems.
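As an illustration of the kind of metamorphic relation the abstract describes, the sketch below (a minimal Python example, not the actual TAP software under test) synthesises a purely astronomical tide from a single harmonic constituent and checks that shifting the prediction window by an integer number of periods leaves the predicted elevations unchanged; the constituent values and tolerance are illustrative assumptions.

```python
import numpy as np

def predict_tide(t_hours, constituents):
    """Toy harmonic tide synthesis: sum of A*cos(2*pi*t/T + phi).

    Stand-in for a real tidal analysis and prediction (TAP) routine."""
    h = np.zeros_like(t_hours, dtype=float)
    for amplitude, period, phase in constituents:
        h += amplitude * np.cos(2 * np.pi * t_hours / period + phase)
    return h

# Metamorphic relation (MR): for a purely astronomical tide, shifting the
# prediction window by an integer number of periods of a constituent must
# reproduce the same elevations (periodicity of the astronomical tide).
M2 = (1.2, 12.4206012, 0.3)            # amplitude [m], period [h], phase [rad]
t = np.linspace(0, 48, 500)            # 48-hour source prediction window
source = predict_tide(t, [M2])
follow_up = predict_tide(t + 3 * M2[1], [M2])  # shift by 3 full M2 periods

assert np.allclose(source, follow_up, atol=1e-9), "MR violated: possible defect"
```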
2022-06-04 22:01:05.000000000
7,599
Current abstractive summarization models either suffer from a lack of clear interpretability or provide incomplete rationales by only highlighting parts of the source document. To this end, we propose the Summarization Program (SP), an interpretable modular framework consisting of an (ordered) list of binary trees, each encoding the step-by-step generative process of an abstractive summary sentence from the source document. A Summarization Program contains one root node per summary sentence, and a distinct tree connects each summary sentence (root node) to the document sentences (leaf nodes) from which it is derived, with the connecting nodes containing intermediate generated sentences. Edges represent different modular operations involved in summarization such as sentence fusion, compression, and paraphrasing. We first propose an efficient best-first search method over neural modules, SP-Search that identifies SPs for human summaries by directly optimizing for ROUGE scores. Next, using these programs as automatic supervision, we propose seq2seq models that generate Summarization Programs, which are then executed to obtain final summaries. We demonstrate that SP-Search effectively represents the generative process behind human summaries using modules that are typically faithful to their intended behavior. We also conduct a simulation study to show that Summarization Programs improve the interpretability of summarization models by allowing humans to better simulate model reasoning. Summarization Programs constitute a promising step toward interpretable and modular abstractive summarization, a complex task previously addressed primarily through blackbox end-to-end neural systems. Supporting code available at [LINK]
2022-09-20 15:49:35.000000000
10,040
Spreadsheets offer a supremely successful democratisation platform, placing the manipulation and presentation of numbers within the grasp of users that have little or no mathematical expertise or IT experience. What appears to be almost completely lacking within a "normal" solution built using Excel default settings is the deployment of any structure that extends beyond a single-cell formula. The structural elements that allow conventional code to scale without escalating errors appear to be absent. This paper considers the use of controversial or lesser-used techniques to create a coherent solution strategy in which the problem is solved by a sequence of formulas resembling the steps of a programmed language.
2017-04-04 04:30:20.000000000
8,411
Testing is an indispensable part of software development. However, a career in software testing is reported to be unpopular among students in computer science and related areas. This can potentially create a shortage of testers in the software industry in the future. The question is, whether the perception that undergraduate students have about software testing is accurate and whether it differs from the experience reported by those who work in testing activities in the software development industry. This investigation demonstrates that a career in software testing is more exciting and rewarding, as reported by professionals working in the field, than students may believe. Therefore, in order to guarantee a workforce focused on software quality, the academy and the software industry need to work together to better inform students about software testing and its essential role in software development.
2023-11-08 21:54:43.000000000
10,381
In recent years, technology has advanced considerably with the introduction of many systems, including advanced robotics, big data analytics, cloud computing, machine learning and many more. Opportunities to exploit these emerging systems are racing against new releases of security protocols designed to combat such exploitation and provide secure systems. The digitization of our lives helps solve human problems and improves quality of life, but precisely because it is digital, information and technology can be misused for malicious gains. Hackers aim to steal the data of innocent people and use it for purposes such as identity fraud, scams and many more. This issue can be addressed during the software development life cycle by integrating security across the development phases and testing the software early, reducing the number of vulnerabilities that may impact an organisation to varying degrees depending on the scope of the attack. The goal of secure system software is to prevent such exploitation from ever happening by following a system life cycle in which planning and testing maximise security while maintaining the functionality of the system. In this paper, we discuss recent trends in security for system development, as well as our predictions and suggestions to improve current security practices in this industry.
2023-11-17 09:04:44.000000000
13,890
Modeling of work systems occurs for all sorts of reasons. Requirements need to be expressed. A pre-existing situation may need to be charted and analyzed. Early design decisions may be captured using architecture principles. Detailed design may be worked out. We regard all these activities as essentially forms of modeling. In the work systems modeling library, we consider work system engineering from a modeling perspective. In the field of work system engineering, a whole plethora of modeling methods is available to system engineers and architects. Each of these methods can be used to model some aspects of a domain related to an existing and/or a planned work system. The aspects may refer to requirements, architecture, design, processing, data, and so on. In other words, these methods are essentially all intended to model different aspects of work systems and/or their context. The aim of the work systems modeling library (WSML) is to bring together methodical knowledge concerning the modeling of work systems.
2021-05-13 03:14:01.000000000
5,515
This paper discusses how model checking, a technique used for the verification of behavioural requirements of dynamic systems, can be usefully deployed for the verification of contracts. A process view of agreements between parties is taken, whereby a contract is modelled as it evolves over time in terms of actions or more generally events that effect changes in its state. Modelling is done with Petri Nets in the spirit of other research work on the representation of trade procedures. The paper illustrates all the phases of the verification technique through an example and argues that the approach is useful particularly in the context of pre-contractual negotiation and contract drafting. The work reported here is part of a broader project on the development of logic-based tools for the analysis and representation of legal contracts.
2001-01-03 22:48:24.000000000
14,132
Contemporary DNN testing works are frequently conducted using metamorphic testing (MT). In general, de facto MT frameworks mutate DNN input images using semantics-preserving mutations and determine if DNNs can yield consistent predictions. Nevertheless, we find that DNNs may rely on erroneous decisions (certain components on the DNN inputs) to make predictions, which may still retain the outputs by chance. Such DNN defects would be neglected by existing MT frameworks. Erroneous decisions, however, would likely result in successive mis-predictions over diverse images that may exist in real-life scenarios. This research aims to unveil the pervasiveness of hidden DNN defects caused by incorrect DNN decisions (but retaining consistent DNN predictions). To do so, we tailor and optimize modern eXplainable AI (XAI) techniques to identify visual concepts that represent regions in an input image upon which the DNN makes predictions. Then, we extend existing MT-based DNN testing frameworks to check the consistency of DNN decisions made over a test input and its mutated inputs. Our evaluation shows that existing MT frameworks are oblivious to a considerable number of DNN defects caused by erroneous decisions. We conduct human evaluations to justify the validity of our findings and to elucidate their characteristics. Through the lens of DNN decision-based metamorphic relations, we re-examine the effectiveness of metamorphic transformations proposed by existing MT frameworks. We summarize lessons from this study, which can provide insights and guidelines for future DNN testing.
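A minimal sketch of the decision-consistency idea, assuming attribution maps from some XAI method are already available as arrays: even when the predicted labels of an input and its semantics-preserving mutation agree, the check below flags the pair if the regions the model "decided" on barely overlap. The binarisation quantile and IoU threshold are illustrative, not the paper's actual pipeline.

```python
import numpy as np

def decision_region(attribution, quantile=0.9):
    """Binarize an XAI attribution map: keep the top-decile pixels as the
    region the model 'based its decision on' (illustrative heuristic)."""
    threshold = np.quantile(attribution, quantile)
    return attribution >= threshold

def decision_consistent(attr_orig, attr_mutated, min_iou=0.5):
    """Decision-based metamorphic check: even if the predicted labels agree,
    flag the test if the decision regions barely overlap."""
    a, b = decision_region(attr_orig), decision_region(attr_mutated)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = inter / union if union else 1.0
    return iou >= min_iou

# Usage sketch: attr_* would come from an XAI method (e.g., saliency maps)
# computed on an input image and its semantics-preserving mutation.
attr_original = np.random.rand(224, 224)
attr_mutated = np.random.rand(224, 224)
print("consistent decision:", decision_consistent(attr_original, attr_mutated))
```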
2022-10-08 09:55:59.000000000
14,993
Large language models are becoming increasingly practical for translating code across programming languages, a process known as $transpiling$. Even though automated transpilation significantly boosts developer productivity, a key concern is whether the generated code is correct. Existing work initially used manually crafted test suites to test the translations of a small corpus of programs; these test suites were later automated. In contrast, we devise the first approach for automated, functional, property-based testing of code translation models. Our general, user-provided specifications about the transpiled code capture a range of properties, from purely syntactic to purely semantic ones. As shown by our experiments, this approach is very effective in detecting property violations in popular code translation models, and therefore, in evaluating model quality with respect to given properties. We also go a step further and explore the usage scenario where a user simply aims to obtain a correct translation of some code with respect to certain properties without necessarily being concerned about the overall quality of the model. To this purpose, we develop the first property-guided search procedure for code translation models, where a model is repeatedly queried with slightly different parameters to produce alternative and potentially more correct translations. Our results show that this search procedure helps to obtain significantly better code translations.
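A minimal sketch of a property-based check over a source/translation pair. Here both sides are Python stand-ins (a real setup would execute the two languages separately), and the input generator and property are illustrative; the point is only that a user-provided property, from syntactic to fully semantic, is checked over many sampled inputs.

```python
import random

def source_abs(x):          # "source" program
    return x if x >= 0 else -x

def transpiled_abs(x):      # stand-in for the model's translation
    return abs(x)

def check_property(src, dst, prop, trials=1000, seed=0):
    """Property-based test: sample random inputs and check a user-provided
    property over the pair (source output, transpiled output)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10**9, 10**9)
        if not prop(src(x), dst(x)):
            return f"property violated for input {x}"
    return "no violation found"

# Semantic property: input/output equivalence of source and translation.
print(check_property(source_abs, transpiled_abs, lambda a, b: a == b))
# A purely syntactic property could instead inspect the translated text,
# e.g. that it contains no calls to a banned API.
```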
2023-09-21 16:11:00.000000000
10,363
Parallel programming models exist as an abstraction of hardware and memory architectures. There are several parallel programming models in common use: the shared memory model, the thread model, the message passing model, the data parallel model, the hybrid model, Flynn's models, the embarrassingly parallel computations model, and the pipelined computations model. These models are not specific to a particular type of machine or memory architecture. This paper focuses on the concurrent approach to Flynn's SPMD classification in a single-processing environment through a Java program.
2010-03-07 11:49:16.000000000
5,969
Business Process Reengineering (BPR) increases an enterprise's chances of surviving competition among organizations, but the failure rate among reengineering efforts is high, so new methods that decrease failure are needed. In this paper, a business process reengineering method is presented that uses Enterprise Ontology for modelling the current system; its goal is to improve the analysis of the current system and to decrease the failure rate of BPR as well as the cost and time of performing processes. In this method, instead of modelling only processes, the processes are modelled in the enterprise ontology together with their interactions and relations, environment, staff, and customers. Also, in the step of choosing processes for reengineering, after the initial selection, the processes that, according to the enterprise ontology, have the most connections with the chosen ones are also selected for reengineering. Finally, this method is applied to a company, and the As-Is and To-Be processes are simulated and compared using ARIS tools (Report and Simulation Experiment).
2015-03-24 07:35:16.000000000
421
This study addresses the challenge of reverse engineering binaries from unknown instruction set architectures, a complex task with potential implications for software maintenance and cyber-security. We focus on the tasks of detecting candidate call and return opcodes for automatic extraction of call graphs in order to simplify the reverse engineering process. Empirical testing on a small dataset of binary files from different architectures demonstrates that the approach can accurately detect specific opcodes under conditions of noisy data. The method lays the groundwork for a valuable tool for reverse engineering where the reverse engineer has minimal a priori knowledge of the underlying instruction set architecture.
2024-01-13 11:48:45.000000000
3,028
Similarity metrics, e.g., signatures as used by anti-virus products, are the dominant technique to detect if a given binary is malware. The underlying assumption of this approach is that all instances of a malware (or even malware family) will be similar to each other. Software diversification is a probabilistic technique that uses code and data randomization and expressiveness in the target instruction set to generate large amounts of functionally equivalent but different binaries. Malware diversity builds on software diversity and ensures that any two diversified instances of the same malware have low similarity (according to a set of similarity metrics). An LLVM-based prototype implementation diversifies both code and data of binaries and our evaluation shows that signatures based on similarity only match one or few instances in a pool of diversified binaries generated from the same source code.
2014-09-25 12:49:19.000000000
11,390
Implementing smart contracts to automate the performance of high-value over-the-counter (OTC) financial derivatives is a formidable challenge. Due to the regulatory framework and the scale of financial risk if a contract were to go wrong, the performance of these contracts must be enforceable in law and there is an absolute requirement that the smart contract will be faithful to the intentions of the parties as expressed in the original legal documentation. Formal methods provide an attractive route for validation and assurance, and here we present early results from an investigation of the semantics of industry-standard legal documentation for OTC derivatives. We explain the need for a formal representation that combines temporal, deontic and operational aspects, and focus on the requirements for the temporal aspects as derived from the legal text. The relevance of this work extends beyond OTC derivatives and is applicable to understanding the temporal semantics of a wide range of legal documentation.
2018-05-27 23:23:17.000000000
14,962
Prognostics and Health Management (PHM) is a discipline focused on predicting the point at which systems or components will cease to perform as intended, typically measured as Remaining Useful Life (RUL). RUL serves as a vital decision-making tool for contingency planning, guiding the timing and nature of system maintenance. Historically, PHM has primarily been applied to hardware systems, with its application to software only recently explored. In a recent study we introduced a methodology and demonstrated how changes in software can impact the RUL of software. However, in practical software development, real-time performance is also influenced by various environmental attributes, including operating systems, clock speed, processor performance, RAM, machine core count and others. This research extends the analysis to assess how changes in environmental attributes, such as operating system and clock speed, affect RUL estimation in software. Findings are rigorously validated using real performance data from controlled test beds and compared with predictive model-generated data. Statistical validation, including regression analysis, supports the credibility of the results. The controlled test bed environment replicates and validates faults from real applications, ensuring a standardized assessment platform. This exploration yields actionable knowledge for software maintenance and optimization strategies, addressing a significant gap in the field of software health management.
2023-09-21 10:25:14.000000000
198
In software testing, a set of test cases is constructed according to some predefined selection criteria. The software is then examined against these test cases. Three interesting observations have been made on the current artifacts of software testing. Firstly, an error-revealing test case is considered useful while a successful test case which does not reveal software errors is usually not further investigated. Whether these successful test cases still contain useful information for revealing software errors has not been properly studied. Secondly, no matter how extensive the testing has been conducted in the development phase, errors may still exist in the software [5]. These errors, if left undetected, may eventually cause damage to the production system. The study of techniques for uncovering software errors in the production phase is seldom addressed in the literature. Thirdly, as indicated by Weyuker in [6], the availability of test oracles is pragmatically unattainable in most situations. However, the availability of test oracles is generally assumed in conventional software testing techniques. In this paper, we propose a novel test case selection technique that derives new test cases from the successful ones. The selection aims at revealing software errors that are possibly left undetected in successful test cases which may be generated using some existing strategies. As such, the proposed technique augments the effectiveness of existing test selection strategies. The technique also helps uncover software errors in the production phase and can be used in the absence of test oracles.
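A minimal sketch of deriving a follow-up test case from a successful one, using a known relation among outputs so that failures can surface even without an oracle for individual results; the sine identity and tolerance are illustrative choices, not the paper's selection strategy.

```python
import math

def program_under_test(x):
    # Imagine this is the implementation being tested; math.sin stands in.
    return math.sin(x)

def derived_test(x, tol=1e-9):
    """Derive a new test from a successful one: for sine, the relation
    sin(x) == sin(pi - x) must hold, so the pair of executions checks
    itself even though no oracle gives the 'true' value of sin(x)."""
    original = program_under_test(x)             # the successful test case
    follow_up = program_under_test(math.pi - x)  # derived test case
    return abs(original - follow_up) <= tol

print(all(derived_test(x / 7.0) for x in range(100)))
```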
2020-02-24 12:58:25.000000000
14,507
Model-driven engineering is the automatic production of software artefacts from abstract models of structure and functionality. By targeting a specific class of system, it is possible to automate aspects of the development process, using model transformations and code generators that encode domain knowledge and implementation strategies. Using this approach, questions of correctness for a complex, software system may be answered through analysis of abstract models of lower complexity, under the assumption that the transformations and generators employed are themselves correct. This paper shows how formal techniques can be used to establish the correctness of model transformations used in the generation of software components from precise object models. The source language is based upon existing, formal techniques; the target language is the widely-used SQL notation for database programming. Correctness is established by giving comparable, relational semantics to both languages, and checking that the transformations are semantics-preserving.
2013-01-01 01:54:21.000000000
798
The use-case diagram is a software artifact. Thus, as with any software artifact, use-case diagrams change over time through the software development life cycle. Therefore, several versions of the same diagram exist at distinct times, and comparing all use-case diagram variants to detect common and variable use-cases becomes one of the main challenges in the product line reengineering field. The contribution of this paper is an automatic approach to compare a collection of use-case diagram variants and detect both commonality and variability. In our work, every use-case represents a feature. The proposed approach visualizes the detected features using formal concept analysis, where common and variable features are introduced to software engineers. The proposed approach was validated on a mobile media case study. The findings confirm the importance and the performance of the suggested approach, as all common and variable features were precisely detected via formal concept analysis and latent semantic indexing.
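A minimal sketch of the commonality/variability step, assuming each use-case diagram variant has already been reduced to the set of use-case (feature) names it contains; the formal-concept-analysis lattice and the latent-semantic-indexing matching are abstracted away, and the variant data is invented for illustration.

```python
# Each use-case diagram variant is reduced to the set of use-case names
# (features) it contains -- illustrative data, not the paper's case study.
variants = {
    "v1": {"PlayMedia", "AddPhoto", "ViewPhoto"},
    "v2": {"PlayMedia", "AddPhoto", "ViewPhoto", "SendPhoto"},
    "v3": {"PlayMedia", "AddPhoto", "ViewPhoto", "AddVideo"},
}

common = set.intersection(*variants.values())       # features in every variant
all_features = set.union(*variants.values())
variable = all_features - common                     # optional features

# The formal context (variants x features) that FCA would turn into a lattice:
context = {v: {f: f in feats for f in sorted(all_features)}
           for v, feats in variants.items()}

print("common:", sorted(common))
print("variable:", sorted(variable))
```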
2022-01-28 17:31:38.000000000
15,138
In software engineering, deep learning models are increasingly deployed for critical tasks such as bug detection and code review. However, overfitting remains a challenge that affects the quality, reliability, and trustworthiness of software systems that utilize deep learning models. Overfitting can be (1) prevented (e.g., using dropout or early stopping) or (2) detected in a trained model (e.g., using correlation-based approaches). Both overfitting detection and prevention approaches that are currently used have constraints (e.g., requiring modification of the model structure, and high computing resources). In this paper, we propose a simple, yet powerful approach that can both detect and prevent overfitting based on the training history (i.e., validation losses). Our approach first trains a time series classifier on training histories of overfit models. This classifier is then used to detect if a trained model is overfit. In addition, our trained classifier can be used to prevent overfitting by identifying the optimal point to stop a model's training. We evaluate our approach on its ability to identify and prevent overfitting in real-world samples. We compare our approach against correlation-based detection approaches and the most commonly used prevention approach (i.e., early stopping). Our approach achieves an F1 score of 0.91 which is at least 5% higher than the current best-performing non-intrusive overfitting detection approach. Furthermore, our approach can stop training to avoid overfitting at least 32% of the times earlier than early stopping and has the same or a better rate of returning the best model.
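A minimal sketch of the detection idea, assuming training histories are available as validation-loss curves: curves from models labelled overfit/not-overfit are resampled to a fixed length and used to train an off-the-shelf classifier, which then flags a new model's history. The synthetic curves, resampling length, and choice of classifier are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def resample(history, length=50):
    """Resample a validation-loss history to a fixed length so curves of
    different training durations become comparable feature vectors."""
    xs = np.linspace(0, 1, len(history))
    return np.interp(np.linspace(0, 1, length), xs, history)

def synthetic_history(overfit, epochs=80, rng=None):
    """Toy generator: overfit curves dip and then rise again."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(epochs)
    base = 1.0 / (1.0 + 0.1 * t)
    if overfit:
        base = base + 0.004 * np.maximum(0, t - epochs // 3)
    return base + rng.normal(0, 0.01, epochs)

rng = np.random.default_rng(42)
histories = [synthetic_history(i % 2 == 1, rng=rng) for i in range(200)]
X = np.array([resample(h) for h in histories])
y = np.array([i % 2 for i in range(200)])            # 1 = overfit

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Detection: feed the validation-loss history of a freshly trained model.
new_history = synthetic_history(overfit=True, rng=rng)
print("overfit?", bool(clf.predict([resample(new_history)])[0]))
```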
2024-01-18 02:04:37.000000000
13,719
Verifying multi-threaded programs is becoming more and more important, because of the strong trend to increase the number of processing units per CPU socket. We introduce a new configurable program analysis for verifying multi-threaded programs with a bounded number of threads. We present a simple and yet efficient implementation as component of the existing program-verification framework CPAchecker. While CPAchecker is already competitive on a large benchmark set of sequential verification tasks, our extension enhances the overall applicability of the framework. Our implementation of handling multiple threads is orthogonal to the abstract domain of the data-flow analysis, and thus, can be combined with several existing analyses in CPAchecker, like value analysis, interval analysis, and BDD analysis. The new analysis is modular and can be used, for example, to verify reachability properties as well as to detect deadlocks in the program. This paper includes an evaluation of the benefit of some optimization steps (e.g., changing the iteration order of the reachability algorithm or applying partial-order reduction) as well as the comparison with other state-of-the-art tools for verifying multi-threaded programs.
2016-12-13 13:38:24.000000000
8,452
We introduce TAPHSIR, a tool for anaphoric ambiguity detection and anaphora resolution in requirements. TAPHSIR facilitates reviewing the use of pronouns in a requirements specification and revising those pronouns that can lead to misunderstandings during the development process. To this end, TAPHSIR detects the requirements which have potential anaphoric ambiguity and further attempts to interpret anaphora occurrences automatically. TAPHSIR employs a hybrid solution composed of an ambiguity detection solution based on machine learning and an anaphora resolution solution based on a variant of the BERT language model. Given a requirements specification, TAPHSIR decides for each pronoun occurrence in the specification whether the pronoun is ambiguous or unambiguous, and further provides an automatic interpretation for the pronoun. The output generated by TAPHSIR can be easily reviewed and validated by requirements engineers. TAPHSIR is publicly available on Zenodo (DOI: 10.5281/zenodo.[HASH]).
2022-06-19 14:48:48.000000000
299
We report our initial investigations into reliability and path-finding based models and propose future areas of interest. Inspired by broken sidewalks during on-campus construction projects, we develop two models for navigating this "unreliable network." These are based on a concept of "accumulating risk" backward from the destination, and both operate on directed acyclic graphs with a probability of failure associated with each edge. The first serves to introduce and has faults addressed by the second, more conservative model. Next, we show a paradox when these models are used to construct polynomials on conceptual networks, such as design processes and software development life cycles. When the risk of a network increases uniformly, the most reliable path changes from wider and longer to shorter and narrower. If we let professional inexperience--such as with entry level cooks and software developers--represent probability of edge failure, does this change in path imply that the novice should follow instructions with fewer "back-up" plans, yet those with alternative routes should be followed by the expert?
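A minimal sketch of the backward "accumulating risk" computation on a DAG with a failure probability per edge, corresponding to the first, simpler model (maximising the probability of reaching the destination); the graph and probabilities are invented for illustration, and the more conservative second model is not reproduced.

```python
from functools import lru_cache

# Directed acyclic graph: node -> list of (next_node, p_fail) edges.
graph = {
    "start": [("a", 0.1), ("b", 0.3)],
    "a":     [("dest", 0.2)],
    "b":     [("dest", 0.05)],
    "dest":  [],
}

@lru_cache(maxsize=None)
def best_reach_probability(node, dest="dest"):
    """Accumulate risk backward from the destination: the value of a node is
    the best, over outgoing edges, of (1 - p_fail) times the value of the
    successor. The destination itself is reached with probability 1."""
    if node == dest:
        return 1.0
    options = [(1.0 - p_fail) * best_reach_probability(nxt, dest)
               for nxt, p_fail in graph[node]]
    return max(options) if options else 0.0

print(round(best_reach_probability("start"), 4))  # start -> a -> dest: 0.9*0.8 = 0.72
```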
2014-06-09 07:19:53.000000000
10,323
The increased utilization of Artificial Intelligence (AI) solutions brings with it inherent risks, such as misclassification and sub-optimal execution time performance, due to errors introduced in their deployment infrastructure because of problematic configuration and software faults. On top of that, AI methods such as Deep Neural Networks (DNNs) are utilized to perform demanding, resource-intensive and even safety-critical tasks, and in order to effectively increase the performance of the DNN models deployed, a variety of Machine Learning (ML) compilers have been developed, allowing compatibility of DNNs with a variety of hardware acceleration devices, such as GPUs and TPUs. Furthermore, the correctness of the compilation process should be verified. In order to allow developers and researchers to explore the robustness of DNN models deployed on different hardware accelerators via ML compilers, in this paper we propose MutateNN, a tool that provides mutation testing and model analysis features in the context of deployment on different hardware accelerators. To demonstrate the capabilities of MutateNN, we focus on the image recognition domain by applying mutation testing to 7 well-established models utilized for image classification. We introduce 21 mutations of 6 different categories, and deploy our mutants on 4 different hardware acceleration devices of varying capabilities. Our results indicate that the models are robust to changes related to layer modifications and arithmetic operators, while presenting discrepancies of up to 90.3% in mutants related to conditional operators. We also observed unexpectedly severe performance degradation on mutations related to arithmetic types of variables, leading the mutants to produce the same classifications for all dataset inputs.
2023-06-02 08:57:31.000000000
5,524
We study operational security in computer network security, including infrastructure, internal processes, resources, information, and physical environment. Current works on developing a security framework focus on a security ontology that contributes to applying common vocabulary, but such an approach does not assist in constructing a foundation for a holistic security methodology. We focus on defining the bounds and creating a representation of a security system by developing a diagrammatic representation (i.e. a model) as a means to describe computer network processes. The model, referred to as a thinging machine, is a first step toward developing a security strategy and plan. The general aim is to demonstrate that the representation of the security system plays a key role in making thinking visible through conceptual description of the operational environment, a region in which active security operations are undertaken. We apply the proposed model to email security by conceptually describing a real email system.
2020-03-25 22:40:29.000000000
132
Quantum computers are becoming more mainstream. As more programmers are starting to look at writing quantum programs, they need to test and debug their code. In this paper, we discuss various use-cases for quantum computers, either standalone or as part of a System of Systems. Based on these use-cases, we discuss some testing and debugging tactics that one can leverage to ensure the quality of the quantum software. We also highlight quantum-computer-specific issues and list novel techniques that are needed to address these issues. The practitioners can readily apply some of these tactics to their process of writing quantum programs, while researchers can learn about opportunities for future work.
2021-03-16 09:44:45.000000000
6,086
The development of open source software (OSS) is a broad field which requires diverse skill sets. For example, maintainers help lead the project and promote its longevity, technical writers assist with documentation, bug reporters identify defects in software, and developers program the software. However, it is unknown which skills are used in OSS development as well as OSS contributors' general attitudes towards skills in OSS. In this paper, we address this gap by administering a survey to a diverse set of 455 OSS contributors. Guided by these responses as well as prior literature on software development expertise and social factors of OSS, we develop a model of skills in OSS that considers the many contexts OSS contributors work in. This model has 45 skills in the following 9 categories: technical skills, working styles, problem solving, contribution types, project-specific skills, interpersonal skills, external relations, management, and characteristics. Through a mix of qualitative and quantitative analyses, we find that OSS contributors are actively motivated to improve skills and perceive many benefits in sharing their skills with others. We then use this analysis to derive a set of design implications and best practices for those who incorporate skills into OSS tools and platforms, such as GitHub.
2022-09-04 02:58:37.000000000
1,187
In this paper, we propose an assertion-based approach to capture software evolution, through the notion of commit-relevant specification. A commit-relevant specification summarises the program properties that have changed as a consequence of a commit (understood as a specific software modification), via two sets of assertions, the delta-added assertions, properties that did not hold in the pre-commit version but hold on the post-commit, and the delta-removed assertions, those that were valid in the pre-commit, but no longer hold after the code change. We also present DeltaSpec, an approach that combines test generation and dynamic specification inference to automatically compute commit-relevant specifications from given commits. We evaluate DeltaSpec on two datasets that include a total of 57 commits (63 classes and 797 methods). We show that commit-relevant assertions can precisely describe the semantic deltas of code changes, providing a useful mechanism for validating the behavioural evolution of software. We also show that DeltaSpec can infer 88% of the manually written commit-relevant assertions expressible in the language supported by the tool. Moreover, our experiments demonstrate that DeltaSpec's inferred assertions are effective to detect regression faults. More precisely, we show that commit-relevant assertions can detect, on average, 78.3% of the artificially seeded faults that interact with the code changes. We also show that assertions in the delta are 58.3% more effective in detecting commit-relevant mutants than assertions outside the delta, and that it takes on average 169% fewer assertions when these are commit-relevant, compared to using general valid assertions, to achieve a same commit-relevant mutation score.
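A minimal sketch of what a commit-relevant specification reduces to once assertions have been inferred for both versions: the delta-added and delta-removed sets are plain set differences. The assertion strings are invented, and the inference step (test generation plus dynamic specification inference) is abstracted away.

```python
# Assertions (here: plain strings) inferred dynamically on each version of a
# method -- in DeltaSpec these would come from test generation plus dynamic
# specification inference, which this sketch abstracts away.
pre_commit_assertions = {
    "result >= 0",
    "result <= items.length",
    "items != null",
}
post_commit_assertions = {
    "result >= 0",
    "items != null",
    "result == count(items, target)",
}

delta_added = post_commit_assertions - pre_commit_assertions
delta_removed = pre_commit_assertions - post_commit_assertions

print("delta-added:  ", sorted(delta_added))    # new behaviour introduced by the commit
print("delta-removed:", sorted(delta_removed))  # behaviour no longer guaranteed
```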
2023-01-27 14:08:50.000000000
11,078
Modern programming languages such as Java, JavaScript, and Rust encourage software reuse by hosting diverse and fast-growing repositories of highly interdependent packages (i.e., reusable libraries) for their users. The standard way to study the interdependence between software packages is to infer a package dependency network by parsing manifest data. Such networks help answer questions such as "How many packages have dependencies to packages with known security issues?" or "What are the most used packages?". However, an overlooked aspect in existing studies is that manifest-inferred relationships do not necessarily examine the actual usage of these dependencies in source code. To better model dependencies between packages, we developed Präzi, an approach combining manifests and call graphs of packages. Präzi constructs a dependency network at the more fine-grained function level, instead of at the manifest level. This paper discusses a prototypical Präzi implementation for the popular system programming language Rust. We use Präzi to characterize Rust's package repository, Crates.io, at the function level and perform a comparative study with metadata-based networks. Our results show that metadata-based networks generalize how packages use their dependencies. Using Präzi, we find packages call only 40% of their resolved dependencies, and that manual analysis of 34 cases reveals that not all packages use a dependency the same way. We argue that researchers and practitioners interested in understanding how developers or programs use dependencies should account for their context -- not the sum of all resolved dependencies.
2021-01-21 20:05:46.000000000
6,843
Software Reliability Growth Models (SRGMs) are based on underlying assumptions which make them typically more suited for quality evaluation of closed-source projects and their development lifecycles. Their usage in open-source software (OSS) projects is a subject of debate. Although the studies investigating the SRGMs applicability in OSS context do exist, they are limited by the number of models and projects considered which might lead to inconclusive results. In this paper, we present an experimental study of SRGMs applicability to a total of 88 OSS projects, comparing nine SRGMs, looking at the stability of the best models on the whole projects, on releases, on different domains, and according to different projects' attributes. With the aid of the STRAIT tool, we automated repository mining, data processing, and SRGM analysis for better reproducibility. Overall, we found good applicability of SRGMs to OSS, but with different performance when segmenting the dataset into releases and domains, highlighting the difficulty in generalizing the findings and in the search for one-fits-all models.
2022-05-03 07:25:59.000000000
13,622
Ocean science is a discipline that employs ocean models as an essential research asset. Such scientific modeling provides mathematical abstractions of real-world systems, e.g., the oceans. These models are then coded as implementations of the mathematical abstractions. The developed software systems are called models of the real-world system. To advance the state in engineering such ocean models, we intend to better understand how ocean models are developed and maintained in ocean science. In this paper, we present the results of semi-structured interviews and the Thematic Analysis (TA) of the interview results to analyze the domain of ocean modeling. Thereby, we identified developer requirements and impediments to model development and evolution, and related themes. This analysis can help to understand where methods from software engineering should be introduced and which challenges need to be addressed. We suggest that other researchers extend and repeat our TA with model developers and research software engineers working in related domains to further advance our knowledge and skills in scientific modeling.
2022-01-29 15:07:02.000000000
3,342
Project productivity is a key factor for producing effort estimates from Use Case Points (UCP), especially when the historical dataset is absent. The first versions of UCP effort estimation models used a fixed or very limited number of productivity ratios for all new projects. These approaches have not been well examined over a large number of projects, so the validity of these studies was a matter for criticism. The newly available large software datasets allow us to perform further research on the usefulness of productivity for effort estimation of software development. Specifically, we studied the relationship between project productivity and UCP environmental factors, as they have a significant impact on the amount of productivity needed for a software project. Therefore, we designed four studies, using various classification and regression methods, to examine the usefulness of that relationship and its impact on UCP effort estimation. The results we obtained are encouraging and show potential improvement in effort estimation. Furthermore, the efficiency of that relationship is better over a dataset that comes from industry because of the quality of data collection. Our comment on the findings is that it is better to exclude environmental factors from calculating UCP and make them available only for computing productivity. The study also encourages project managers to understand how to better assess the environmental factors, as they do have a significant impact on productivity.
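A minimal sketch of the recommendation in the abstract, using the commonly published Karner-style UCP constants: the environmental complexity factor is kept out of the size measure and only used to pick a productivity ratio. The factor sums and the productivity mapping are illustrative assumptions, not the study's fitted values.

```python
# Karner-style Use Case Points, with the environmental factors kept out of the
# size measure and used only to pick a productivity ratio (hours per UCP), as
# the study recommends. Weights are the commonly published UCP constants; the
# productivity mapping below is purely illustrative.
UAW = 3 * 1 + 2 * 2 + 4 * 3          # simple/average/complex actors
UUCW = 5 * 5 + 4 * 10 + 2 * 15       # simple/average/complex use cases
TF_sum = 38                           # sum of weighted technical factors
TCF = 0.6 + 0.01 * TF_sum             # technical complexity factor

ucp_without_env = (UAW + UUCW) * TCF  # size: no environmental adjustment

EF_sum = 16.5                         # sum of weighted environmental factors
ECF = 1.4 - 0.03 * EF_sum             # classically folded into UCP ...

# ... but here it only drives productivity (hours per UCP), e.g. a simple
# step function instead of the classical fixed 20 h/UCP:
productivity = 20 if ECF >= 0.8 else 28

effort_hours = ucp_without_env * productivity
print(round(ucp_without_env, 1), round(effort_hours, 1))
```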
2017-05-23 10:43:53.000000000
5,572
As a particular case study of the formal verification of state-of-the-art, real software, we discuss the specification and verification of a corrected version of the implementation of a linked list as provided by the Java Collection framework. Keywords: Java, standard library, deductive verification, KeY, Java Modeling Language, case study, bug
2019-11-06 16:25:45.000000000
10,909
We conduct the first empirical study on using knowledge transfer to improve the generalization ability of large language models (LLMs) in software engineering tasks, which often require LLMs to generalize beyond their training data. Our proposed general knowledge transfer approach guides the LLM towards a similar and familiar API or code snippet it has encountered before, improving the model's generalization ability for unseen knowledge. We apply this approach to three software engineering tasks: API inference, code example generation, and FQN inference, and find transfer span, transfer strategy, and transfer architecture as key factors affecting the method. Our findings demonstrate the feasibility of knowledge transfer and its potential to enhance LLMs' performance in various software engineering tasks. The effectiveness of knowledge transfer varies depending on the target domain and task, with the hierarchical strategy being more effective than direct transfer, and AI-Chain outperforming CoT in prompt design. The implications of these findings extend beyond software engineering tasks and suggest that knowledge transfer can enhance LLMs' ability to handle unknowns in any natural language task.
2023-08-08 15:45:55.000000000
6,052
In this paper, we investigate the effect of TDD, as compared to a non-TDD approach, as well as its retainment (or retention) over a time span of (about) six months. To pursue these objectives, we conducted a (quantitative) longitudinal cohort study with 30 novice developers (i.e., third-year undergraduate students in Computer Science). We observed that TDD affects neither the external quality of software products nor developers' productivity. However, we observed that the participants applying TDD produced significantly more tests, with a higher fault-detection capability than those using a non-TDD approach. As for the retainment of TDD, we found that TDD is retained by novice developers for at least six months.
2021-05-06 12:20:52.000000000
4,492
Maintainers are now self-sabotaging their work in order to take political or economic stances, a practice referred to as "protestware". In this poster, we present our approach to understand how the discourse about such an attack went viral, how it is received by the community, and whether developers respond to the attack in a timely manner. We study two notable protestware cases, i.e., Colors.js and es5-ext, comparing with discussions of a typical security vulnerability as a baseline, i.e., Ua-parser, and perform a thematic analysis of more than two thousand protest-related posts to extract the different narratives when discussing protestware.
2024-01-29 17:13:44.000000000
2,306
This paper proposes a new metric for software functional size, which is derived from Function Point Analysis (FPA), but overcomes some of its known deficiencies. The statistical results show that the new metric, Functional Elements (EF), and its submetric, Functional Elements of Transaction (EFt), have higher correlation with the effort in software development than FPA in the context of the analyzed data. The paper illustrates the application of the new metric as a tool to improve IT governance, specifically in assessment, monitoring, and giving directions to the software development area.
2018-10-12 00:05:33.000000000
5,527
A fundamental unit of work in programming is the code contribution ("commit") that a developer makes to the code base of the project at hand. An author's commit frequency describes how often that author commits. Knowing the distribution of all commit frequencies is a fundamental part of understanding software development processes. This paper presents a detailed quantitative analysis of commit frequencies in open-source software development. The analysis is based on a large sample of open source projects, and presents the overall distribution of commit frequencies. We analyze the data to show the differences between authors and projects by project size; we also include a comparison of successful and unsuccessful projects, and we derive an activity indicator from these analyses. By measuring a fundamental dimension of programming we help improve software development tools and our understanding of software development. We also validate some fundamental assumptions about software development.
2014-08-20 17:03:05.000000000
15,040
This paper describes work carried out on a model for the evolution of graph classes in complex objects. By defining evolution rules and propagation strategies on graph classes, we aim to provide a user-definable means of managing data evolution that tackles the complex nature of the classes managed, using the concepts defined in object systems. Thus, depending on their needs and on those of the targeted application, designers can choose the evolution mechanism they consider best suited. They can either create new evolutions or reuse predefined ones to respond to a given need.
2018-02-20 14:58:16.000000000
10,919
Program merging is standard practice when developers integrate their individual changes to a common code base. When the merge algorithm fails, this is called a merge conflict. The conflict either manifests in textual merge conflicts where the merge fails to produce code, or semantic merge conflicts where the merged code results in compiler or test breaks. Resolving these conflicts for large code projects is expensive because it requires developers to manually identify the sources of conflict and correct them. In this paper, we explore the feasibility of automatically repairing merge conflicts (both textual and semantic) using k-shot learning with large neural language models (LM) such as GPT-3. One of the challenges in leveraging such language models is to fit the examples and the queries within a small prompt (2048 tokens). We evaluate LMs and k-shot learning for two broad applications: (a) textual and semantic merge conflicts for a divergent fork Microsoft Edge, and (b) textual merge conflicts for a large number of JavaScript projects in GitHub. Our results are mixed: on the one hand, LMs provide the state-of-the-art (SOTA) performance on semantic merge conflict resolution for Edge compared to earlier symbolic approaches; on the other hand, LMs do not yet obviate the benefits of fine-tuning neural models (when sufficient data is available) or the design of special purpose domain-specific languages (DSL) for restricted patterns for program synthesis.
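A minimal sketch of how a k-shot prompt for textual conflict resolution might be assembled within a fixed token budget; the conflict-marker rendering, the 4-characters-per-token estimate, and the example conflicts are assumptions for illustration, and the resulting prompt would then be sent to an LM such as GPT-3 (the API call itself is omitted).

```python
def render_conflict(base, ours, theirs):
    """Render a textual merge conflict in standard conflict-marker form."""
    return (f"<<<<<<< ours\n{ours}\n||||||| base\n{base}\n"
            f"=======\n{theirs}\n>>>>>>> theirs")

def build_kshot_prompt(examples, query, budget_tokens=2048):
    """Pack as many resolved examples as fit into the token budget (a crude
    4-chars-per-token estimate), then append the unresolved conflict."""
    parts, used = [], 0
    for conflict, resolution in examples:
        shot = f"Conflict:\n{conflict}\nResolution:\n{resolution}\n\n"
        cost = len(shot) // 4
        if used + cost > budget_tokens // 2:   # reserve room for the query
            break
        parts.append(shot)
        used += cost
    parts.append(f"Conflict:\n{query}\nResolution:\n")
    return "".join(parts)

examples = [(render_conflict("x = 1", "x = 2", "x = 1  # unchanged"), "x = 2")]
query = render_conflict("timeout = 30", "timeout = 60", "timeout = 30  # keep")
prompt = build_kshot_prompt(examples, query)
# `prompt` would then be sent to a large language model such as GPT-3.
print(prompt)
```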
2021-11-19 20:28:34.000000000
12,731
The adoption of blockchain based distributed ledgers is growing fast due to their ability to provide reliability, integrity, and auditability without trusted entities. One of the key capabilities of these emerging platforms is the ability to create self-enforcing smart contracts. However, the development of smart contracts has proven to be error-prone in practice, and as a result, contracts deployed on public platforms are often riddled with security vulnerabilities. This issue is exacerbated by the design of these platforms, which forbids updating contract code and rolling back malicious transactions. In light of this, it is crucial to ensure that a smart contract is secure before deploying it and trusting it with significant amounts of cryptocurrency. To this end, we introduce the VeriSolid framework for the formal verification of contracts that are specified using a transition-system based model with rigorous operational semantics. Our model-based approach allows developers to reason about and verify contract behavior at a high level of abstraction. VeriSolid allows the generation of Solidity code from the verified models, which enables the correct-by-design development of smart contracts.
2019-01-02 11:21:29.000000000
7,795
Software maintenance and evolution involves critical activities for the success of software projects. To support such activities and keep code up-to-date and error-free, software communities make use of issue trackers, i.e., tools for signaling, handling, and addressing the issues occurring in software systems. However, in popular projects, tens or hundreds of issue reports are submitted daily. In this context, identifying the type of each submitted report (e.g., bug report, feature request, etc.) would facilitate the management and the prioritization of the issues to address. To support issue handling activities, in this paper, we propose Ticket Tagger, a GitHub app analyzing the issue title and description through machine learning techniques to automatically recognize the types of reports submitted on GitHub and assign labels to each issue accordingly. We empirically evaluated the tool's prediction performance on about 30,000 GitHub issues. Our results show that Ticket Tagger can identify the correct labels to assign to GitHub issues with reasonably high effectiveness. Considering these results and the fact that the tool is designed to be easily integrated in the GitHub issue management process, Ticket Tagger constitutes a useful solution for developers.
2021-07-19 11:34:09.000000000
13,063
One of the main reasons that cause seniors to face accessibility barriers when trying to use software applications is that the age-related user interface (UI) needs of seniors (e.g., physical and cognitive limitations) are not properly addressed in software user interfaces. The existing literature proposes model-driven engineering based UI adaptations as a prominent solution for this phenomenon. But in our exploration into the domain, we identified that the existing work lacks comprehensiveness when it comes to integrating accessibility into software modelling tools and methods when compared to a well-recognised accessibility standard such as the Web Content Accessibility Guidelines (WCAG). Thus in this paper, we outline a research roadmap that aims to use WCAG as a reference framework to design domain-specific languages that model the diverse accessibility scenarios of senior users via user context information and UI adaptation rules modelling so that they meet the accessibility standards specified in WCAG.
2023-04-28 20:03:48.000000000
6,172
Current cloud and network infrastructures do not employ privacy-preserving methods to protect their assets. Anonymous credential schemes are a cryptographic building block that enables the certification of data structures and prove properties over their representations without disclosing the innards of their data structures in zero-knowledge. The GRaph Signature (GRS) scheme enables the certification and proof methods to sign infrastructure topologies represented as graph data structures and use zero-knowledge to prove properties over their certificates. As such, they represent a powerful privacy-preserving method that proves properties over a signed topology graph to another party without disclosing the blueprint of its topology. In this paper, we report our efforts in designing, implementing and benchmarking a Graph Signature Library (GSL). GSL is a cryptographic library realized in Java that implements the graph signature scheme.
2020-05-23 18:46:44.000000000
4,745
The four Industry 4.0 design principles information transparency, technical assistance, interconnection, and decentralized decisions pose challenges in integrating information technology (IT) and operational technology (OT) solutions in industrial systems. These different solutions have conflicting requirements, making interfaces between them problematic for both systems and organizations. An Industrial Business Process Twin (IBPT) entity, acting as an intermediary between the realms of IT and OT, has been proposed in a previous work, to effectively reduce the amount of required IT/OT interfaces in an attempt of overcoming this situation. In this work, we investigate the effects of this approach during the design phase. We argue that, by eliminating interfaces between IT and OT components in the system design, this approach is therefore eliminating conflicting communication channels within the organization's communication structure. In order to verify our argument, we develop a model of our IBPT concept according to the Reference Architecture Model Industrie 4.0 (RAMI4.0) using an Industry 4.0 scenario addressing the four essential Industry 4.0 design principles. Results show that the IBPT approach indeed eliminates potentially conflicting IT/OT interfaces during the system design phase.
2023-05-29 20:50:27.000000000
6,938
This experience report outlines five tech transfer strategies developed over a period of 25 years at four Global 1000 companies (HP, Cisco, Qualcomm, and Nortel) to mitigate R&D challenges associated with duplicated effort, product quality, and time-to-market. The five strategies accelerate innovation through open knowledge sharing, rather than licensing intellectual property rights (IPR) such as patents, trade secrets, and copyrights. The strategies are based on corporate tech forums, conference panels, exploratory workshops, research reviews (at universities and companies), and talent exchanges. While the initial objective was to foster the corporate adoption of software best practices, over time the strategies had broader impact on company innovation, including incubating cross-company R&D collaborations, capturing organizational memory, cultivating and leveraging external research partnerships, and feeding company talent pipelines.
2024-02-01 19:10:05.000000000
11,796
The size of deep learning models in artificial intelligence (AI) software is increasing rapidly, which hinders the large-scale deployment on resource-restricted devices (e.g., smartphones). To mitigate this issue, AI software compression plays a crucial role, which aims to compress model size while keeping high performance. However, the intrinsic defects in the big model may be inherited by the compressed one. Such defects may be easily leveraged by attackers, since the compressed models are usually deployed in a large number of devices without adequate protection. In this paper, we try to address the safe model compression problem from a safety-performance co-optimization perspective. Specifically, inspired by the test-driven development (TDD) paradigm in software engineering, we propose a test-driven sparse training framework called SafeCompress. By simulating the attack mechanism as the safety test, SafeCompress can automatically compress a big model to a small one following the dynamic sparse training paradigm. Further, considering a representative attack, i.e., membership inference attack (MIA), we develop a concrete safe model compression mechanism, called MIA-SafeCompress. Extensive experiments are conducted to evaluate MIA-SafeCompress on five datasets for both computer vision and natural language processing tasks. The results verify the effectiveness and generalization of our method. We also discuss how to adapt SafeCompress to other attacks besides MIA, demonstrating the flexibility of SafeCompress.
2022-08-09 01:28:30.000000000
7,227
Design patterns are applied more and more to solve software engineering difficulties in object-oriented software design procedures, so design pattern detection is widely used by software industries. Currently, many solutions have been presented to detect design patterns in a system design. In this paper, we propose a new one: first, we use a graph implementation to represent both the system design UML diagram and the design pattern UML diagram; second, we encode the edges of each of the two graphs as a set of 4-tuple elements; then, we apply a new inexact graph isomorphism algorithm to detect the design pattern in the system design.
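A minimal sketch of the edge encoding and an inexact matching heuristic, under the assumption that an edge is represented as a 4-tuple of (source class, target class, relation kind, multiplicity); the tuple layout, the coverage-based heuristic, and the threshold are illustrative, not the paper's algorithm.

```python
# Edge as a 4-tuple: (source class, target class, relation kind, multiplicity).
# The exact tuple layout here is an assumption made for illustration.
system_edges = [
    ("Button", "Command", "association", "1"),
    ("Command", "Receiver", "association", "1"),
    ("PasteCommand", "Command", "inheritance", "1"),
    ("CopyCommand", "Command", "inheritance", "1"),
]
pattern_edges = [          # simplified Command-pattern blueprint
    ("Invoker", "Command", "association", "1"),
    ("ConcreteCommand", "Command", "inheritance", "1"),
]

def inexact_match(pattern, system, threshold=0.8):
    """Crude inexact matching: a pattern edge is 'covered' if the system has
    an unused edge with the same relation kind and multiplicity; the pattern
    is reported present if enough of its edges are covered."""
    available = list(system)
    covered = 0
    for _, _, kind, mult in pattern:
        for i, (_, _, s_kind, s_mult) in enumerate(available):
            if kind == s_kind and mult == s_mult:
                covered += 1
                available.pop(i)
                break
    return covered / len(pattern) >= threshold

print("Command pattern candidate:", inexact_match(pattern_edges, system_edges))
```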
2014-08-26 14:12:22.000000000
12,969
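A minimal sketch of the edge-as-4-tuple idea follows. The tuple fields chosen here (source role, target role, relation kind, direction), the toy Observer fragment, and the containment-based score are assumptions for illustration; the paper's exact encoding and its inexact graph isomorphism algorithm are not reproduced.

# Every UML relationship becomes a 4-tuple; detection checks how much of the
# pattern's tuple set appears in the role-bound system graph.

def edge_tuples(edges):
    """edges: iterable of (source, target, relation, direction) -> set of 4-tuples."""
    return {tuple(e) for e in edges}

def apply_binding(edges, binding):
    """Rename system classes to candidate pattern roles."""
    return [(binding.get(s, s), binding.get(t, t), rel, d) for s, t, rel, d in edges]

def match_score(pattern_edges, system_edges, binding):
    """Fraction of pattern 4-tuples present in the bound system graph (inexact match)."""
    pattern = edge_tuples(pattern_edges)
    system = edge_tuples(apply_binding(system_edges, binding))
    return len(pattern & system) / len(pattern)

# Toy Observer-pattern fragment matched against a small system design.
observer = [
    ("Subject", "Observer", "association", "uni"),
    ("ConcreteObserver", "Observer", "generalization", "uni"),
    ("ConcreteSubject", "Subject", "generalization", "uni"),
]
system = [
    ("NewsFeed", "Subscriber", "association", "uni"),
    ("EmailSubscriber", "Subscriber", "generalization", "uni"),
    ("NewsFeed", "Database", "dependency", "uni"),
]
binding = {"NewsFeed": "Subject", "Subscriber": "Observer",
           "EmailSubscriber": "ConcreteObserver"}
print(match_score(observer, system, binding))  # 0.66...: a partial (inexact) match

A full detector would enumerate candidate class-to-role bindings and accept occurrences whose score exceeds a chosen threshold.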
Systems with artificial intelligence components, so-called AI-based systems, have gained considerable attention recently. However, many organizations have issues with achieving production readiness with such systems. As a means to improve certain software quality attributes and to address frequently occurring problems, design patterns represent proven solution blueprints. While new patterns for AI-based systems are emerging, existing patterns have also been adapted to this new context. The goal of this study is to provide an overview of design patterns for AI-based systems, both new and adapted ones. We want to collect and categorize patterns, and make them accessible for researchers and practitioners. To this end, we first performed a multivocal literature review (MLR) to collect design patterns used with AI-based systems. We then integrated the created pattern collection into a web-based pattern repository to make the patterns browsable and easy to find. As a result, we selected 51 resources (35 white and 16 gray ones), from which we extracted 70 unique patterns used for AI-based systems. Among these are 34 new patterns and 36 traditional ones that have been adapted to this context. Popular pattern categories include "architecture" (25 patterns), "deployment" (16), "implementation" (9), and "security & safety" (9). While some patterns with four or more mentions already seem established, the majority of patterns have only been mentioned once or twice (51 patterns). Our results in this emerging field can be used by researchers as a foundation for follow-up studies and by practitioners to discover relevant patterns for informing the design of AI-based systems.
2023-03-21 12:18:17.000000000
10,613
In this paper, we propose an algorithm, Simple Hebbian PCA, and prove that it is able to compute principal component analysis (PCA) in a distributed fashion across nodes. It simplifies existing network structures by removing intralayer weights, essentially cutting the number of weights that need to be trained in half. A minimal illustration of Hebbian PCA follows below.
2017-08-08 19:34:24.000000000
13,290
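As a general illustration of Hebbian PCA (not the paper's Simple Hebbian PCA algorithm or its distributed, intralayer-weight-free variant), the sketch below trains a single linear neuron with the classical Oja rule, which converges to the first principal component of its input.

# Oja's rule: a Hebbian update with weight decay drives the weight vector toward
# the leading principal component of zero-mean data.

import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data whose leading principal component is roughly [1, 1] / sqrt(2).
base = rng.normal(size=(5000, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(5000, 1)),
               base + 0.1 * rng.normal(size=(5000, 1))])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Hebbian term y*x with the stabilizing -y^2*w decay

w /= np.linalg.norm(w)
true_pc = np.linalg.eigh(np.cov(X.T))[1][:, -1]   # eigenvector of the largest eigenvalue
print("learned direction:", w)
print("alignment with true PC:", abs(w @ true_pc))  # typically close to 1.0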
The automatic collection of stack traces in bug tracking systems is an integral part of many software projects and their maintenance. However, such reports often contain many duplicates, and the problem of deduplicating them into groups arises. In this paper, we propose a new approach to solve the deduplication task and report on its use on real-world data from JetBrains, a leading developer of IDEs and other software. Unlike most existing methods, which assign an incoming stack trace to the group containing the single most similar stack trace, we use information about all the calculated similarities to the group, as well as the timestamps of the stack traces. This aggregation of all available information shows significantly better results compared to existing solutions. The aggregation improved the results over the state-of-the-art solutions by 15 percentage points in the Recall Rate Top-1 metric on the existing NetBeans dataset and by 8 percentage points on the JetBrains data. Additionally, we evaluated a simpler k-Nearest Neighbors approach to aggregation and showed that it cannot reach the same levels of improvement. Finally, we studied which features from the aggregation contributed the most towards better quality, to understand which of them to develop further. We publish the implementation of the suggested approach, and will release the newly collected industrial dataset upon acceptance to facilitate further research in the area. A minimal sketch of this group-level aggregation follows below.
2022-04-28 05:07:26.000000000
6,090
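The group-level aggregation idea can be sketched as follows. The frame-overlap similarity, the exponential recency decay, and the weights are illustrative assumptions, not the paper's trained ranking model; the point is only that an incoming report is scored against all traces in a group plus the group's latest timestamp, rather than against a single nearest trace.

# Sketch of group-level scoring for stack-trace deduplication.

import math

def similarity(trace_a, trace_b):
    """Toy frame-overlap (Jaccard) similarity between two stack traces (lists of frames)."""
    a, b = set(trace_a), set(trace_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_score(new_trace, new_ts, group, w_max=0.5, w_mean=0.3, w_recency=0.2):
    """Combine all similarities to the group with a recency signal (weights are assumptions)."""
    sims = [similarity(new_trace, t) for t, _ in group]
    last_ts = max(ts for _, ts in group)
    recency = math.exp(-(new_ts - last_ts) / 3600.0)   # decay per hour since last report
    return w_max * max(sims) + w_mean * sum(sims) / len(sims) + w_recency * recency

def assign(new_trace, new_ts, groups):
    """Return the index of the best-scoring group for the incoming report."""
    return max(range(len(groups)),
               key=lambda i: group_score(new_trace, new_ts, groups[i]))

# Usage with toy data: each group is a list of (frames, unix_timestamp) pairs.
groups = [
    [(["a.f", "b.g", "c.h"], 1_000_000), (["a.f", "b.g"], 1_003_000)],
    [(["x.p", "y.q"], 900_000)],
]
print(assign(["a.f", "b.g", "d.k"], 1_004_000, groups))   # -> 0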