Dataset columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
Recent years have seen a rise of interactive machine learning @cite_41, and such techniques are now commonly integrated into visual analytics systems, as recently surveyed by @cite_50. Often, they are used to learn model refinements from user interaction @cite_42 or provide @cite_40. Semantic interactions are typically performed with the intent of refining or steering a machine-learning model. In , expert users perform semantic interactions, as their primary goal is the annotation of argumentation. The result is a concealed machine teaching process @cite_14 that is not an end in itself, but a "by-product" of the annotation.
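A minimal sketch of this interaction-driven refinement idea, assuming text fragments are already available as feature vectors; the class and its update rule are illustrative and not the cited system's actual model.

```python
import numpy as np

class InteractiveSimilarity:
    """Toy diagonal-metric learner: each accept/reject interaction nudges
    per-feature weights so that fragments the user linked as the same argument
    score higher, and explicitly separated fragments score lower."""

    def __init__(self, dim, lr=0.1):
        self.w = np.ones(dim)  # per-feature weights, start uniform
        self.lr = lr

    def similarity(self, x, y):
        wx, wy = self.w * x, self.w * y
        return float(wx @ wy / (np.linalg.norm(wx) * np.linalg.norm(wy) + 1e-9))

    def update(self, x, y, same_argument):
        # strengthen (or weaken) exactly the features both fragments share
        sign = 1.0 if same_argument else -1.0
        self.w = np.clip(self.w + sign * self.lr * (x * y), 0.0, None)

sim = InteractiveSimilarity(dim=3)
a, b = np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.5])
sim.update(a, b, same_argument=True)  # user annotated both fragments as the same argument
print(sim.similarity(a, b))
```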
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_42", "@cite_40", "@cite_50" ], "mid": [ "2006710607", "2122514299", "2794719971", "2963611534" ], "abstract": [ "Machine learning offers a range of tools for training systems from data, but these methods are only as good as the underlying representation. This paper proposes to acquire representations for machine learning by reading text written to accommodate human learning. We propose a novel form of semantic analysis called reading to learn, where the goal is to obtain a high-level semantic abstract of multiple documents in a representation that facilitates learning. We obtain this abstract through a generative model that requires no labeled data, instead leveraging repetition across multiple documents. The semantic abstract is converted into a transformed feature space for learning, resulting in improved generalization on a relational learning task.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.", "Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. 
While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less focus has been placed on the use of language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries to interaction weights. Learning the verbal interaction is fully automatic and does not require manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.", "Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines." ] }
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
Several systems combine the close and distant reading metaphors to provide deeper insights into textual data, such as @cite_36 or @cite_43. @cite_25 have developed a tool called , which combines focus-and-context techniques to support the analysis of large text documents. The tool enables exploration of the text through novel navigation methods and allows the extraction of entities and other concepts. places all close- and distant-reading views next to each other, following the metaphor by Wörner and Ertl @cite_51. instead "stacks" the different views into task-dependent layers.
{ "cite_N": [ "@cite_36", "@cite_43", "@cite_51", "@cite_25" ], "mid": [ "2012118336", "2051088039", "2016630033", "1493108551" ], "abstract": [ "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.", "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. 
Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish.Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections.Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96 core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91 F-measure. The induced morphological analyzer achieves over 99 lemmatization accuracy on the complete French verbal system.This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection.", "This dissertation investigates the role of contextual information in the automated retrieval and display of full-text documents, using robust natural language processing algorithms to automatically detect structure in and assign topic labels to texts. Many long texts are comprised of complex topic and subtopic structure, a fact ignored by existing information access methods. I present two algorithms which detect such structure, and two visual display paradigms which use the results of these algorithms to show the interactions of multiple main topics, multiple subtopics, and the relations between main topics and subtopics. The first algorithm, called TextTiling , recognizes the subtopic structure of texts as dictated by their content. It uses domain-independent lexical frequency and distribution information to partition texts into multi-paragraph passages. The results are found to correspond well to reader judgments of major subtopic boundaries. The second algorithm assigns multiple main topic labels to each text, where the labels are chosen from pre-defined, intuitive category sets; the algorithm is trained on unlabeled text. A new iconic representation, called TileBars uses TextTiles to simultaneously and compactly display query term frequency, query term distribution and relative document length. This representation provides an informative alternative to ranking long texts according to their overall similarity to a query. For example, a user can choose to view those documents that have an extended discussion of one set of terms and a brief but overlapping discussion of a second set of terms. This representation also allows for relevance feedback on patterns of term distribution. TileBars display documents only in terms of words supplied in the user query. For a given retrieved text, if the query words do not correspond to its main topics, the user cannot discern in what context the query terms were used. 
For example, a query on contaminants may retrieve documents whose main topics relate to nuclear power, food, or oil spills. To address this issue, I describe a graphical interface, called Cougar , that displays retrieved documents in terms of interactions among their automatically-assigned main topics, thus allowing users to familiarize themselves with the topics and terminology of a text collection." ] }
1907.12413
2965312352
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
In recent years, several web-based interfaces have been created to support users in various text annotation tasks. For example, @cite_46 can be used for the annotation of POS tags or named entities. In this interface, annotations are made directly in the text by dragging the mouse over multiple words or clicking on a single word. employs the same interactions for text annotation. Another web-based annotation tool is @cite_65; it allows the annotation of named entities and their relations. @cite_49 use automatic entity extraction for annotating relationships between media streams. TimeLineCurator @cite_28 automatically extracts temporal events from unstructured text data and enables users to curate them in a visual, annotated timeline. @cite_54 have presented a collaborative text annotation framework and emphasize the importance of pre-annotation for significantly reducing annotation costs. have presented a framework that creates BRAT-compatible pre-annotations @cite_22 and discuss the (dis-)advantages of pre-annotation. The initial suggestions of could be seen as pre-annotations, but they are automatically updated after each interaction.
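The update-after-each-interaction loop can be pictured with a small re-ranking routine; `embed` is a hypothetical fragment-embedding function and the scoring rule is only illustrative, not the cited tools' implementation.

```python
import numpy as np

def suggest_next(candidates, accepted, embed, k=5):
    """Rank unannotated candidate spans by their maximum cosine similarity to
    spans the user has already accepted; calling this again after every
    interaction lets the suggestions adapt to the growing annotation set."""
    if not accepted:
        return list(candidates)[:k]
    A = np.stack([embed(a) for a in accepted])
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-9)

    def score(c):
        v = embed(c)
        v = v / (np.linalg.norm(v) + 1e-9)
        return float((A @ v).max())

    return sorted(candidates, key=score, reverse=True)[:k]
```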
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_54", "@cite_65", "@cite_49", "@cite_46" ], "mid": [ "2251026067", "1493490255", "1784290353", "2136409007" ], "abstract": [ "In this paper, we present a flexible approach to the efficient and exhaustive manual annotation of text documents. For this purpose, we extend WebAnno (, 2013) an open-source web-based annotation tool. 1 While it was previously limited to specific annotation layers, our extension allows adding and configuring an arbitrary number of layers through a web-based UI. These layers can be annotated separately or simultaneously, and support most types of linguistic annotations such as spans, semantic classes, dependency relations, lexical chains, and morphology. Further, we tightly integrate a generic machine learning component for automatic annotation suggestions of span annotations. In two case studies, we show that automatic annotation suggestions, combined with our split-pane UI concept, significantly reduces annotation time.", "Traditionally, Information Extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora (e.g., extract the location and time of seminars from a set of announcements). Shifting to a new domain requires the user to name the target relations and to manually create new extraction rules or hand-tag new training examples. This manual labor scales linearly with the number of target relations. This paper introduces Open IE (OIE), a new extraction paradigm where the system makes a single data-driven pass over its corpus and extracts a large set of relational tuples without requiring any human input. The paper also introduces TEXTRUNNER, a fully implemented, highly scalable OIE system where the tuples are assigned a probability and indexed to support efficient extraction and exploration via user queries. We report on experiments over a 9,000,000 Web page corpus that compare TEXTRUNNER with KNOWITALL, a state-of-the-art Web IE system. TEXTRUNNER achieves an error reduction of 33 on a comparable set of extractions. Furthermore, in the amount of time it takes KNOWITALL to perform extraction for a handful of pre-specified relations, TEXTRUNNER extracts a far broader set of facts reflecting orders of magnitude more relations, discovered on the fly. We report statistics on TEXTRUNNER's 11,000,000 highest probability tuples, and show that they contain over 1,000,000 concrete facts and over 6,500,000 more abstract assertions.", "Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. 
Our results indicate that we outperform the state-of-the-art approach by up to 29 F-measure.", "Today people typically read and annotate printed documents even if they are obtained from electronic sources like digital libraries If there is a reason for them to share these personal annotations online, they must re-enter them. Given the advent of better computer support for reading and annotation, including tablet interfaces, will people ever share their personal digital ink annotations as is, or will they make substantial changes to them? What can we do to anticipate and support the transition from personal to public annotations? To investigate these questions, we performed a study to characterize and compare students' personal annotations as they read assigned papers with those they shared with each other using an online system. By analyzing over 1, 700 annotations, we confirmed three hypotheses: (1) only a small fraction of annotations made while reading are directly related to those shared in discussion; (2) some types of annotations - those that consist of anchors in the text coupled with margin notes - are more apt to be the basis of public commentary than other types of annotations; and (3) personal annotations undergo dramatic changes when they are shared in discussion, both in content and in how they are anchored to the source document. We then use these findings to explore ways to support the transition from personal to public annotations." ] }
1907.12336
2965762627
Automatic data abstraction is an important capability for both benchmarking machine intelligence and supporting summarization applications. In the former one asks whether a machine can ‘understand’ enough about the meaning of input data to produce a meaningful but more compact abstraction. In the latter this capability is exploited for saving space or human time by summarizing the essence of input data. In this paper we study a general reinforcement learning based framework for learning to abstract sequential data in a goal-driven way. The ability to define different abstraction goals uniquely allows different aspects of the input data to be preserved according to the ultimate purpose of the abstraction. Our reinforcement learning objective does not require human-defined examples of ideal abstraction. Importantly our model processes the input sequence holistically without being constrained by the original input order. Our framework is also domain agnostic – we demonstrate applications to sketch, video and text data and achieve promising results in all domains.
Sketch recognition. Early sketch recognition methods were developed to deal with professionally drawn sketches, as in CAD or artistic drawings @cite_43 @cite_45 @cite_44. The more challenging task of free-hand sketch recognition was first tackled in @cite_36, along with the release of the first large-scale dataset of amateur sketches. Since then the task has been well studied using both classic vision @cite_3 @cite_24 and deep learning approaches @cite_53. Recent successful deep learning approaches have spanned both primarily non-sequential CNN @cite_53 @cite_30 and sequential RNN @cite_39 @cite_10 recognizers. We use both CNN- and RNN-based multi-class classifiers to provide rewards for our RL-based sketch abstraction framework.
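The role of the recognizers as a reward signal can be sketched as follows; `predict_proba` stands for any trained CNN or RNN classifier returning class probabilities, and the compactness term and weighting are illustrative rather than the paper's exact objective.

```python
import numpy as np

def abstraction_reward(predict_proba, abstracted_sketch, target_class,
                       strokes_kept, strokes_total, budget_weight=0.5):
    """Goal-driven reward sketch: stay recognizable as the true class while
    using fewer strokes. The classifier is frozen; only the RL agent learns."""
    recognizability = float(np.asarray(predict_proba(abstracted_sketch))[target_class])
    compactness = 1.0 - strokes_kept / strokes_total
    return recognizability + budget_weight * compactness
```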
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_53", "@cite_3", "@cite_39", "@cite_44", "@cite_43", "@cite_24", "@cite_45", "@cite_10" ], "mid": [ "2549858052", "2493181180", "2743832495", "2322404333" ], "abstract": [ "The paper presents a deep Convolutional Neural Network (CNN) framework for free-hand sketch recognition. One of the main challenges in free-hand sketch recognition is to increase the recognition accuracy on sketches drawn by different people. To overcome this problem, we use deep Convolutional Neural Networks (CNNs) that have dominated top results in the field of image recognition. And we use the contours of natural images for training, because sketches drawn by different people may be very different and databases of the sketch images for training are very limited. We propose a CNN training on contours that performs well on sketch recognition over different databases of the sketch images. And we make some adjustments to the contours for training and reach higher recognition accuracy. Experimental results show the effectiveness of the proposed approach.", "We propose a deep learning approach to free-hand sketch recognition that achieves state-of-the-art performance, significantly surpassing that of humans. Our superior performance is a result of modelling and exploiting the unique characteristics of free-hand sketches, i.e., consisting of an ordered set of strokes but lacking visual cues such as colour and texture, being highly iconic and abstract, and exhibiting extremely large appearance variations due to different levels of abstraction and deformation. Specifically, our deep neural network, termed Sketch-a-Net has the following novel components: (i) we propose a network architecture designed for sketch rather than natural photo statistics. (ii) Two novel data augmentation strategies are developed which exploit the unique sketch-domain properties to modify and synthesise sketch training data at multiple abstraction levels. Based on this idea we are able to both significantly increase the volume and diversity of sketches for training, and address the challenge of varying levels of sketching detail commonplace in free-hand sketches. (iii) We explore different network ensemble fusion strategies, including a re-purposed joint Bayesian scheme, to further improve recognition performance. We show that state-of-the-art deep networks specifically engineered for photos of natural objects fail to perform well on sketch recognition, regardless whether they are trained using photos or sketches. Furthermore, through visualising the learned filters, we offer useful insights in to where the superior performance of our network comes from.", "Recognizing freehand sketches with high arbitrariness is greatly challenging. Most existing methods either ignore the geometric characteristics or treat sketches as handwritten characters with fixed structural ordering. Consequently, they can hardly yield high recognition performance even though sophisticated learning techniques are employed. In this paper, we propose a sequential deep learning strategy that combines both shape and texture features. A coded shape descriptor is exploited to characterize the geometry of sketch strokes with high flexibility, while the outputs of constitutional neural networks (CNN) are taken as the abstract texture feature. We develop dual deep networks with memorable gated recurrent units (GRUs), and sequentially feed these two types of features into the dual networks, respectively. 
These dual networks enable the feature fusion by another gated recurrent unit (GRU), and thus accurately recognize sketches invariant to stroke ordering. The experiments on the TU-Berlin data set show that our method outperforms the average of human and state-of-the-art algorithms even when significant shape and appearance variations occur.", "The recognition of pen-based visual patterns such as sketched symbols is amenable to supervised machine learning models such as neural networks. However, a sizable, labeled training corpus is often required to learn the high variations of freehand sketches. To circumvent the costs associated with creating a large training corpus, improve the recognition accuracy with only a limited amount of training samples and accelerate the development of sketch recognition system for novel sketch domains, we present a neural network training protocol that consists of three steps. First, a large pool of unlabeled, synthetic samples are generated from a small set of existing, labeled training samples. Then, a Deep Belief Network (DBN) is pre-trained with those synthetic, unlabeled samples. Finally, the pre-trained DBN is fine-tuned using the limited amount of labeled samples for classification. The training protocol is evaluated against supervised baseline approaches such as the nearest neighbor classifier and the neural network classifier. The benchmark data sets used are partitioned such that there are only a few labeled samples for training, yet a large number of labeled test cases featuring rich variations. Results suggest that our training protocol leads to a significant error reduction compared to the baseline approaches." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
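A back-of-the-envelope cost model makes the trade-off concrete; apart from the 21,000-gas base transaction cost, the figures below are hypothetical placeholders, not measured values from the paper's simulations.

```python
def transfer_costs(n_recipients, base_tx_gas=21_000, per_transfer_gas=30_000,
                   batch_overhead_gas=5_000):
    """Gas for n individual token transfers vs. one batched (push-style) call.
    Only the 21k base transaction cost is a real Ethereum constant; the other
    figures are illustrative."""
    individual = n_recipients * (base_tx_gas + per_transfer_gas)
    batched = base_tx_gas + batch_overhead_gas + n_recipients * per_transfer_gas
    return individual, batched

ind, bat = transfer_costs(1000)
# batching saves roughly (n - 1) * base_tx_gas; both totals stay linear in n
```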
Howell @cite_5 study the success factors of 440 ICOs based on proprietary transaction data, presumably acquired from exchanges and other intermediaries, and on manual labeling. Their main interest is in the relationship between issuer characteristics and indicators of success. The regression analyses find highly significant positive effects on the liquidity and volume of the token for independent variables measuring the existence of a white paper, the availability of code on GitHub, the support by venture capitalists, the entrepreneurs' experience, the acceptance of Bitcoin, and the organization of a pre-sale. No significant effect is found for airdrops.
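The form of such a regression can be illustrated with a short script; the data below are synthetic stand-ins whose column names merely mirror the factors listed above, so the output does not reproduce the cited results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 440
# Synthetic binary issuer characteristics and a synthetic outcome variable.
df = pd.DataFrame({c: rng.integers(0, 2, n) for c in
                   ["white_paper", "github_code", "vc_backed",
                    "experienced_team", "accepts_btc", "pre_sale", "airdrop"]})
df["log_volume"] = (0.5 * df["white_paper"] + 0.4 * df["github_code"]
                    + rng.normal(0, 1, n))

model = smf.ols("log_volume ~ white_paper + github_code + vc_backed "
                "+ experienced_team + accepts_btc + pre_sale + airdrop",
                data=df).fit()
print(model.summary())
```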
{ "cite_N": [ "@cite_5" ], "mid": [ "2811170310", "2952115223", "2402306069", "2267899723" ], "abstract": [ "Initial coin offerings (ICOs) have emerged as a new mechanism for entrepreneurial finance, with parallels to initial public offerings, venture capital, and pre-sale crowdfunding. In a sample of more than 1,500 ICOs that collectively raise $12.9 billion, we examine which issuer and ICO characteristics predict success, measured using real outcomes (employment and issuer failure) and financial outcomes (token liquidity and volume). Success is associated with disclosure, credible commitment to the project, and quality signals. An instrumental variables analysis finds that ICO token exchange listing causes higher future employment, indicating that access to liquidity has important real consequences for the enterprise.", "Abstract Cryptographic assets such as Bitcoin and Ethereum provide distributed consensus with a Proof-of-Work protocol and incentive-based engineering. The consensus is inherently dependent on the value of the asset due to the incentives. The value of these assets frequently fluctuates, which in turn influences the incentive component of the consensus mechanism. For a proof-of-work consensus to be secure, the participation reward must have a perceived real-world value. The future of this perception is not at all clear. The recent 70 drop in the value of Bitcoin versus the US Dollar may be precipitating a circle of declining security of the platform which we explore in depth in this paper. In this paper, we analyze the impact of fluctuations on the security of Bitcoin now, and in the future. We introduce a novel method to examine the rationale of a miner based on the price fluctuations. We integrate our method with an existing security evaluation framework and simulator. Using our approach, we determine and report on the impact of the value of the cryptographic asset on the security of the blockchain given the miner’s rationale. Our method allows us to evaluate the impact of different methods of incentive manipulation such as reduced block-reward and transaction fees, by simulation.", "Most popular blockchain solutions, like Bitcoin, rely on proof-of-work, guaranteeing that the output of the consensus is agreed upon with high probability. However, this probability depends on the delivery of messages and that the computational power of the system is sufficiently scattered among pools of nodes in the network so that no pool can mine more blocks faster than the crowd. New approaches, like Ethereum, generalise the proof-of-work approach by letting individuals deploy their own private blockchain with high transaction throughput. As companies are starting to deploy private chains, it has become crucial to better understand the guarantees blockchains offer in such a small and controlled environment. In this paper, we present the , an execution that we experienced when building our private chain at NICTA Data61. Even though this anomaly has never been acknowledged before, it may translate into dramatic consequences for the user of blockchains. Named after the infamous Paxos anomaly, this anomaly makes dependent transactions, like \"Bob sends money to Carole after he received money from Alice\" impossible. 
This anomaly relies on the fact that existing blockchains do not ensure consensus safety deterministically: there is no way for Bob to make sure that Alice actually sent him coins without Bob using an external mechanism, like converting these coins into a fiat currency that allows him to withdraw. We also explore smart contracts as a potential alternative to transactions in order to freeze coins, and show implementations of smart contract that can suffer from the Blockchain anomaly and others that may cope with it.", "Crowdfunding has gained widespread attention in recent years. Despite the huge success of crowdfunding platforms, the percentage of projects that succeed in achieving their desired goal amount is only around 40 . Moreover, many of these crowdfunding platforms follow \"all-or-nothing\" policy which means the pledged amount is collected only if the goal is reached within a certain predefined time duration. Hence, estimating the probability of success for a project is one of the most important research challenges in the crowdfunding domain. To predict the project success, there is a need for new prediction models that can potentially combine the power of both classification (which incorporate both successful and failed projects) and regression (for estimating the time for success). In this paper, we formulate the project success prediction as a survival analysis problem and apply the censored regression approach where one can perform regression in the presence of partial information. We rigorously study the project success time distribution of crowdfunding data and show that the logistic and log-logistic distributions are a natural choice for learning from such data. We investigate various censored regression models using comprehensive data of 18K Kickstarter (a popular crowdfunding platform) projects and 116K corresponding tweets collected from Twitter. We show that the models that take complete advantage of both the successful and failed projects during the training phase will perform significantly better at predicting the success of future projects compared to the ones that only use the successful projects. We provide a rigorous evaluation on many sets of relevant features and show that adding few temporal features that are obtained at the project's early stages can dramatically improve the performance." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
Chen @cite_12 identify underpriced instructions (even after the 2016 gas price adjustment) and propose an adaptive pricing scheme. Their main interest is to raise economic barriers against congestion, which in the worst case can enable denial-of-service attacks at the systemic level.
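An illustrative way to think about execution-frequency-based repricing; the scaling rule and threshold below are invented for this sketch and do not reproduce the cited mechanism.

```python
from collections import Counter

class AdaptiveGasPricing:
    """Toy repricer: operations executed far more often than their expected
    share get their gas cost scaled up, making the skewed workloads that
    under-priced instructions invite less attractive for DoS."""

    def __init__(self, base_cost, expected_share, surge_threshold=2.0):
        self.base_cost = dict(base_cost)            # opcode -> default gas cost
        self.expected_share = dict(expected_share)  # opcode -> expected share of executions
        self.surge_threshold = surge_threshold
        self.counts = Counter()

    def record(self, op):
        self.counts[op] += 1

    def cost(self, op):
        total = sum(self.counts.values()) or 1
        share = self.counts[op] / total
        expected = self.expected_share.get(op, 1.0)
        if share > self.surge_threshold * expected:
            return int(self.base_cost[op] * share / expected)
        return self.base_cost[op]
```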
{ "cite_N": [ "@cite_12" ], "mid": [ "2015555034", "2133068139", "2963220038", "2061926759" ], "abstract": [ "While electric vehicles (EVs) are expected to provide environmental and economical benefit, judicious coordination of EV charging is necessary to prevent overloading of the distribution grid. Leveraging the smart grid infrastructure, the utility company can adjust the electricity price intelligently for individual customers to elicit desirable load curves. In this context, this paper addresses the problem of predicting the EV charging behavior of the consumers at different prices, which is a prerequisite for optimal price adjustment. The dependencies on price responsiveness among consumers are captured by a conditional random field (CRF) model. To account for temporal dynamics potentially in a strategic setting, the framework of online convex optimization is adopted to develop an efficient online algorithm for tracking the CRF parameters. The proposed model is then used as an input to a stochastic profit maximization module for real-time price setting. Numerical tests using simulated and semi-real data verify the effectiveness of the proposed approach.", "The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in terms of evaluations of the problem’s function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best known bound for general unconstrained problems to nonlinear problems with convex constraints.", "The gas mechanism in Ethereum charges the execution of every operation to ensure that smart contracts running in EVM (Ethereum Virtual Machine) will be eventually terminated. Failing to properly set the gas costs of EVM operations allows attackers to launch DoS attacks on Ethereum. Although Ethereum recently adjusted the gas costs of EVM operations to defend against known DoS attacks, it remains unknown whether the new setting is proper and how to configure it to defend against unknown DoS attacks. In this paper, we make the first step to address this challenging issue by first proposing an emulation-based framework to automatically measure the resource consumptions of EVM operations. The results reveal that Ethereum’s new setting is still not proper. Moreover, we obtain an insight that there may always exist exploitable under-priced operations if the cost is fixed. Hence, we propose a novel gas cost mechanism, which dynamically adjusts the costs of EVM operations according to the number of executions, to thwart DoS attacks. This method punishes the operations that are executed much more frequently than before and lead to high gas costs. To make our solution flexible and secure and avoid frequent update of Ethereum client, we design a special smart contract that collaborates with the updated EVM for dynamic parameter adjustment. Experimental results demonstrate that our method can effectively thwart both known and unknown DoS attacks with flexible parameter settings. 
Moreover, our method only introduces negligible additional gas consumption for benign users.", "A large penetration of electric and plug-in hybrid electric vehicles would likely result in increased system peaks and overloading of power system assets if the charging of vehicles is left uncontrolled. In this paper we propose both a centralized and a decentralized smart-charging scheme which seek to minimize system-wide generation costs while respecting grid constraints. Under the centralized scheme, vehicles' batteries are aggregated to virtual storage resources at each network node, which are optimally dispatched with a multiperiod Optimal Power Flow. On the other hand, under the decentralized scheme, price profiles broadcasted to vehicles day-ahead are determined so that the optimal response of individual vehicles to this tariff achieves the goal of cost minimization. Two alternative tariffs are explored, one where the same price profile applies system-wide, and another where different prices can be defined at different nodes. Results show that compared with uncontrolled charging, these smart-charging schemes successfully avoid asset overloading, displace most charging to valley hours and reduce generation costs. Moreover they are robust in the face of forecast errors in vehicle behavior." ] }
1907.12383
2966710536
Efficient transfers to many recipients present a host of issues on Ethereum. First, accounts are identified by long and incompressible constants. Second, these constants have to be stored and communicated for each payment. Third, the standard interface for token transfers does not support lists of recipients, adding repeated communication to the overhead. Since Ethereum charges resource usage, even small optimizations translate to cost savings. Airdrops, a popular marketing tool used to boost coin uptake, present a relevant example for the value of optimizing bulk transfers. Therefore, we review technical solutions for airdrops of Ethereum-based tokens, discuss features and prerequisites, and compare the operational costs by simulating 35 scenarios. We find that cost savings of factor two are possible, but require specific provisions in the smart contract implementing the token system. Pull-based approaches, which use on-chain interaction with the recipients, promise moderate savings for the distributor while imposing a disproportional cost on each recipient. Total costs are broadly linear in the number of recipients independent of the technical approach. We publish the code of the simulation framework for reproducibility, to support future airdrop decisions, and to benchmark innovative bulk payment solutions.
Airdrops are a rather new topic. We are aware of only one academic paper. Harrigan @cite_18 raises awareness of the privacy implications of airdrops when identifiers of one chain are reused to distribute coins on another chain. Sharing identifiers between chains generally gives additional clues for address clustering.
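The clustering clue can be illustrated with a few lines of union-find; the addresses and linking heuristics here are hypothetical, not from the cited study.

```python
class AddressClusters:
    """Union-find over address strings: same-chain heuristics link addresses,
    and reusing one key pair on two chains merges the corresponding clusters,
    which is the cross-chain privacy leak described above."""

    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

clusters = AddressClusters()
clusters.link("chainA:addr1", "chainA:addr2")  # e.g. a co-spending heuristic on chain A
clusters.link("chainA:addr1", "chainB:addr1")  # same key reused to claim an airdrop on chain B
assert clusters.find("chainA:addr2") == clusters.find("chainB:addr1")
```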
{ "cite_N": [ "@cite_18" ], "mid": [ "2891594175", "2249121382", "2104261947", "2124071994" ], "abstract": [ "Airdrops are a popular method of distributing cryptocurrencies and tokens. While often considered risk-free from the point of view of recipients, their impact on privacy is easily overlooked. We examine the Clam airdrop of 2014, a forerunner to many of today's airdrops, that distributed a new cryptocurrency to every address with a non-dust balance on the Bitcoin, Litecoin and Dogecoin blockchains. Specifically, we use address clustering to try to construct the one-to-many mappings from entities to addresses on the blockchains, individually and in combination. We show that the sharing of addresses between the blockchains is a privacy risk. We identify instances where an entity has disclosed information about their address ownership on the Bitcoin, Litecoin and Dogecoin blockchains, exclusively via their activity on the Clam blockchain.", "We introduce autonomous gossiping (A G), a new genre epidemic algorithm for selective dissemination of information in contrast to previous usage of epidemic algorithms which flood the whole network. A G is a paradigm which suits well in a mobile ad-hoc networking (MANET) environment because it does not require any infrastructure or middleware like multicast tree and (un)subscription maintenance for publish subscribe, but uses ecological and economic principles in a self-organizing manner in order to achieve any arbitrary selectivity (flexible casting). The trade-off of using a stateless self-organizing mechanism like A G is that it does not guarantee completeness deterministically as is one of the original objectives of alternate selective dissemination schemes like publish subscribe. We argue that such incompleteness is not a problem in many non-critical real-life civilian application scenarios and realistic node mobility patterns, where the overhead of infrastructure maintenance may outweigh the benefits of completeness, more over, at present there exists no mechanism to realize publish subscribe or other paradigms for selective dissemination in MANET environments.", "Recent web-based applications offer users free service in exchange for access to personal communication, such as on-line email services and instant messaging. The inspection and retention of user communication is generally intended to enable targeted marketing. However, unless specifically stated otherwise by the collecting service's privacy policy, such records have an indefinite lifetime and may be later used or sold without restriction. In this paper, we show that it is possible to protect a user's privacy from these risks by exploiting mutually oblivious, competing communication channels. We create virtual channels over online services (e.g., Google's Gmail, Microsoft's Hotmail) through which messages and cryptographic keys are delivered. The message recipient uses a shared secret to identify the shares and ultimately recover the original plaintext. In so doing, we create a wired “spread-spectrum” mechanism for protecting the privacy of web-based communication. We discuss the design and implementation of our open-source Java applet, Aquinas, and consider ways that the myriad of communication channels present on the Internet can be exploited to preserve privacy.", "Secure communications can be impeded by eavesdroppers in conventional relay systems. 
This paper proposes cooperative jamming strategies for two-hop relay networks where the eavesdropper can wiretap the relay channels in both hops. In these approaches, the normally inactive nodes in the relay network can be used as cooperative jamming sources to confuse the eavesdropper. Linear precoding schemes are investigated for two scenarios where single or multiple data streams are transmitted via a decode-and-forward (DF) relay, under the assumption that global channel state information (CSI) is available. For the case of single data stream transmission, we derive closed-form jamming beamformers and the corresponding optimal power allocation. Generalized singular value decomposition (GSVD)-based secure relaying schemes are proposed for the transmission of multiple data streams. The optimal power allocation is found for the GSVD relaying scheme via geometric programming. Based on this result, a GSVD-based cooperative jamming scheme is proposed that shows significant improvement in terms of secrecy rate compared to the approach without jamming. Furthermore, the case involving an eavesdropper with unknown CSI is also investigated in this paper. Simulation results show that the secrecy rate is dramatically increased when inactive nodes in the relay network participate in cooperative jamming." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
The term originates in control system theory, where it measures the degree to which a system's internal state can be determined from its outputs @cite_21. In cloud environments, observability indicates to what degree the infrastructure, the applications, and their interactions can be monitored. Typical outputs are, for example, logs, metrics, and traces @cite_0.
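For the control-theoretic origin of the term, the standard Kalman rank condition can be written down directly; the example system is generic and not taken from the cited works.

```python
import numpy as np

def is_observable(A, C):
    """A linear system x' = A x, y = C x is observable iff the observability
    matrix [C; CA; ...; CA^(n-1)] has full rank n."""
    A, C = np.atleast_2d(A), np.atleast_2d(C)
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

# double integrator with only the position measured: the state is still observable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
assert is_observable(A, C)
```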
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "1938602245", "2134058295", "2762781394", "2157054253" ], "abstract": [ "Controllability and observability have long been recognized as fundamental structural properties of dynamical systems, but have recently seen renewed interest in the context of large, complex networks of dynamical systems. A basic problem is sensor and actuator placement: choose a subset from a finite set of possible placements to optimize some real-valued controllability and observability metrics of the network. Surprisingly little is known about the structure of such combinatorial optimization problems. In this paper, we show that several important classes of metrics based on the controllability and observability Gramians have a strong structural property that allows for either efficient global optimization or an approximation guarantee by using a simple greedy heuristic for their maximization. In particular, the mapping from possible placements to several scalar functions of the associated Gramian is either a modular or submodular set function. The results are illustrated on randomly generated systems and on a problem of power-electronic actuator placement in a model of the European power grid.", "Context: Many metrics are used in software engineering research as surrogates for maintainability of software systems. Aim: Our aim was to investigate whether such metrics are consistent among themselves and the extent to which they predict maintenance effort at the entire system level. Method: The Maintainability Index, a set of structural measures, two code smells (Feature Envy and God Class) and size were applied to a set of four functionally equivalent systems. The metrics were compared with each other and with the outcome of a study in which six developers were hired to perform three maintenance tasks on the same systems. Results: The metrics were not mutually consistent. Only system size and low cohesion were strongly associated with increased maintenance effort. Conclusion: Apart from size, surrogate maintainability measures may not reflect future maintenance effort. Surrogates need to be evaluated in the contexts for which they will be used. While traditional metrics are used to identify problematic areas in the code, the improvements of the worst areas may, inadvertently, lead to more problems for the entire system. Our results suggest that local improvements should be accompanied by an evaluation at the system level.", "The increasing complexity in nowadays engineered systems requires great attention to safety hazards and occurrence of faults, which must be readily detected to possibly restore nominal behavior of the system. The notion of diagnosability plays a key role in this regard, since it corresponds to the possibility of detecting within a finite delay if a fault, or in general a hazardous situation, did occur. In this letter, we introduce and characterize the notion of approximate diagnosability for the general class of metric systems, which are typically used in the research community working on hybrid systems to deal with complex heterogeneous processes in, e.g., cyber-physical systems. This notion captures the possibility of detecting faults on the basis of measurements corrupted by errors, always introduced by non-ideal sensors in a real environment. Relations are established between approximate diagnosability of a given metric system and approximate diagnosability of a system that approximately simulates the given one. 
Application of the proposed results to the analysis of approximate diagnosability of nonlinear systems is finally discussed.", "In large scale and self-managing systems the autonomy of the system has a major impact on the management of such systems. Even if the system can run in a high degree of autonomy a certain level of control over the properties of the system at run-time might be useful, to ensure the system does the right things. In particular, in dynamic environment, where system structure often changes it is desirable to monitor the status of a system to be sure it is still in a normal condition. This paper describes an approach how to use model driven system management for evaluation of global constraints. This allows an automatic or semi-automatic monitoring of autonomous systems, which helps to identify malfunction and wrong self-management of such systems. The approach has been used to check constrains on distributed system realized as CORBA components" ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_12 investigate how to capture service execution paths in distributed systems. Capturing an execution path is challenging because a single request may traverse many components on several servers; the authors therefore introduce a generic end-to-end methodology that captures the complete path of a request. During our interviews we likewise found a need for transparency of execution paths and, more generally, of the interdependencies between services.
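REPTrace itself works at the level of system-call tracing and network labelling; as a much simpler illustration of the underlying idea of stitching one request across services, the sketch below propagates a request ID through a chain of hypothetical, in-process "services" and records each hop so the end-to-end path can be reconstructed.

```python
import uuid

# In-memory stand-in for a trace store; in a real deployment each service
# would report its hop to a central collector instead.
PATHS = {}

def record_hop(headers, service):
    PATHS.setdefault(headers["x-request-id"], []).append(service)

def service_c(headers):
    record_hop(headers, "service-c")
    return "payload-from-c"

def service_b(headers):
    record_hop(headers, "service-b")
    return service_c(headers)   # the downstream call forwards the same headers

def service_a(headers):
    record_hop(headers, "service-a")
    return service_b(headers)

if __name__ == "__main__":
    headers = {"x-request-id": uuid.uuid4().hex}   # assigned once at the system boundary
    service_a(headers)
    print(PATHS[headers["x-request-id"]])          # ['service-a', 'service-b', 'service-c']
```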
{ "cite_N": [ "@cite_12" ], "mid": [ "2899648294", "2021011293", "2139126523", "1928396776" ], "abstract": [ "Distributed platforms are widely deployed to provide services in various trades. With the increasing scale and complexity of these distributed platforms, it is becoming more and more challenging to understand and diagnose a service request’s processing in a distributed platform, as even one simple service request may traverse numerous heterogeneous components across multiple hosts. Thus, it is highly demanded to capture the complete end-to-end execution path of service requests among all involved components accurately. This paper presents REPTrace, a generic methodology for capturing the complete request execution path (REP) in a transparent fashion. We propose principles for identifying causal relationships among events for a comprehensive list of execution scenarios, and stitch all events to generate complete request execution paths based on library system calls tracing and network labelling. The experiments on different distributed platforms with different workloads show that REPTrace transparently captures the accurate request execution path with reasonable latency and negligible network overhead.", "We consider a distributed server system in which heterogeneous servers operate under the processor sharing (PS) discipline. Exponentially distributed jobs arrive to a dispatcher, which assigns each task to one of the servers. In the so-called size-aware system, the dispatcher is assumed to know the remaining service requirements of some or all of the existing jobs in each server. The aim is to minimize the mean sojourn time, i.e., the mean response time. To this end, we first analyze an M M 1-PS queue in the framework of Markov decision processes, and derive the so-called size-aware relative value of state, which sums up the deviation from the average rate at which sojourn times are accumulated in the infinite time horizon. This task turns out to be non-trivial. The exact analysis yields an infinite system of first order differential equations, for which an explicit solution is derived. The relative values are then utilized to develop efficient dispatching policies by means of the first policy iteration (FPI). Numerically, we show that for the exponentially distributed job sizes the myopic approach, ignoring the future arrivals, yields an efficient and robust policy when compared to other heuristics. However, in the case of highly asymmetric service rates, an FPI based policy outperforms it. Additionally, the size-aware relative value of an M G 1-PS queue is shown to be sensitive with respect to the form of job size distribution, and indeed, the numerical experiments with constant job sizes confirm that the optimal decision depends on the job size distribution.", "High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately no standard mechanism exists for organizing or accessing such information. Consequently different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. 
We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.", "We consider a distributed server system with m servers operating under the processor sharing (PS) discipline. A stream of fixed size tasks arrives to a dispatcher, which assigns each task to one of the servers. We are interested in minimizing the mean sojourn time, i.e., the mean response time. To this end, we first analyze an M D l-PS queue in the MDP framework. In particular, we derive a closed form expression for the so-called size-aware relative value of state, which sums up the deviation from the average rate at which sojourn times are accumulated in the infinite time horizon. This result can be applied in numerous situations. Here we give an example in the context of dispatching problems by deriving efficient and robust state-dependent dispatching policies for homogeneous and heterogeneous server systems. The obtained policies are further demonstrated by numerical examples." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
The current trend towards more flexible and modular distributed systems is characterized by the use of independent services, such as micro- or web services. While systems composed of web services already provide better observability than monolithic systems, services can further improve their observability and monitoring by exposing relevant information about their internal behaviour. @cite_2 address the challenge that web service definitions do not contain any information about service behaviour. They extend the web service definition with a description of the behaviour logic, based on a constraint-based, model-driven testing approach. During our interviews we identified that the behaviour of services, especially third-party services, needs to be communicated more clearly to assess the impact on service levels and to detect and diagnose faults.
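As a rough illustration of the general idea of attaching behavioural information to a service definition (this is not the cited approach; the service name, operation, and constraints are made up), a definition could declare constraints over responses that a testing or monitoring harness checks against observed calls.

```python
# Hypothetical service definition extended with simple behavioural constraints.
SERVICE_DEFINITION = {
    "name": "inventory-service",
    "operations": {
        "reserve_item": {
            "inputs": {"item_id": str, "quantity": int},
            "constraints": [
                # each constraint pairs a description with a predicate over request/response
                ("reserved quantity never exceeds requested quantity",
                 lambda req, resp: resp["reserved"] <= req["quantity"]),
                ("remaining stock is reported as non-negative",
                 lambda req, resp: resp["remaining_stock"] >= 0),
            ],
        }
    },
}

def check_behaviour(operation, request, response):
    """Return the descriptions of all declared constraints the response violates."""
    op = SERVICE_DEFINITION["operations"][operation]
    return [desc for desc, predicate in op["constraints"] if not predicate(request, response)]

if __name__ == "__main__":
    violations = check_behaviour(
        "reserve_item",
        request={"item_id": "A-17", "quantity": 2},
        response={"reserved": 3, "remaining_stock": 5},   # reserves more than requested
    )
    print(violations)   # ['reserved quantity never exceeds requested quantity']
```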
{ "cite_N": [ "@cite_2" ], "mid": [ "2100716485", "2165204405", "2894638380", "1010989072" ], "abstract": [ "For a system of distributed processes, correctness can be ensured by (statically) checking whether their composition satisfies properties of interest. In contrast, Web services are being designed so that each partner discovers properties of others dynamically, through a published interface. Since the overall system may not be available statically and since each business process is supposed to be relatively simple, we propose to use runtime monitoring of conversations between partners as a means of checking behavioural correctness of the entire web service system. Specifically, we identify a subset of UML 2.0 Sequence Diagrams as a property specification language and show that it is sufficiently expressive for capturing safety and liveness properties. By transforming these diagrams to automata, we enable conformance checking of finite execution traces against the specification. We describe an implementation of our approach as part of an industrial system and report on preliminary experience.", "Several recently proposed infrastructures permit client applications to interact with distributed network-accessible services by simply \"plugging in\" into a substrate that provides essential functionality, such as naming, discovery, and multi-protocol binding. However much work remains before the interaction can be considered truly seamless in the sense of adapting to the characteristics of the heterogeneous environments in which clients and services operate. This paper describes a novel approach for addressing this shortcoming: the partitionable services framework, which enables services to be flexibly assembled from multiple components, and facilitates transparent migration and replication of these components at locations closer to the client while still appearing as a single monolithic service. The framework consists of three pieces: (1) declarative specification of services in terms of constituent components; (2) run-time support for dynamic component deployment; and (3) planning policies, which steer the deployment to accomodate underlying environment characteristics. We demonstrate the salient features of the framework and highlight its usability and performance benefits with a case study involving a security-sensitive mail service.", "A real-world distributed system is rarely implemented as a standalone monolithic system. Instead, it is composed of multiple independent interacting components that together ensure the desired system-level specification. One can scale systematic testing to large, industrial-scale implementations by decomposing the system-level testing problem into a collection of simpler component-level testing problems. This paper proposes techniques for compositional programming and testing of distributed systems with two central contributions: (1) We propose a module system based on the theory of compositional trace refinement for dynamic systems consisting of asynchronously-communicating state machines, where state machines can be dynamically created, and communication topology of the existing state machines can change at runtime; (2) We present ModP, a programming system that implements our module system to enable compositional reasoning (assume-guarantee) of distributed systems. We demonstrate the efficacy of our framework by building two practical fault-tolerant distributed systems, a transaction-commit service and a replicated hash-table. 
ModP helps implement these systems modularly and validate them via compositional testing. We empirically demonstrate that the abstraction-based compositional reasoning approach helps amplify the coverage during testing and scale it to real-world distributed systems. The distributed services built using ModP achieve performance comparable to open-source equivalents.", "With the emergence of new paradigms for computing, such as peer-to-peer technologies, grid computing, autonomic computing and other approaches, it is becoming increasingly natural to view large systems in terms of the services they offer, and consequently in terms of the entities or agents providing or consuming services. For example, web services technologies provide a standard means of interoperating between different software applications, running on a variety of platforms. More generally, web services standards now serve as potential convergence point for diverse technology efforts in support of more general service-oriented architectures. Here, distributed systems are increasingly viewed as collections of service provider and service consumer components interlinked by dynamically defined workflows. Web services must thus be realised by concrete entities or agents that send and receive messages, while the services themselves are the resources characterised by the functionality provided. The important characteristics of these emerging domains and environments are that they are open and dynamic so that new agents may join and existing ones leave. In this view, agents act on behalf of service owners, managing access to services, and ensuring that contracts are fulfilled. They also act on behalf of service consumers, locating services, agreeing contracts, and receiving and presenting results. In these domains, agents are required to engage in interactions, negotiate with one another, make agreements, and make proactive run-time decisions, individually and collectively, while responding to changing circumstances. In particular, agents need to collaborate and to form coalitions of agents with different capabilities in support of new virtual organisations." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Besides monitoring individual service calls, it is important to predict the runtime performance of distributed systems. @cite_4 show that two techniques, benchmarking and simulation, have shortcomings when used separately, and introduce and validate a complementary approach: a process that maps benchmark ontologies to simulations, which proves to be inexpensive, fast, and reliable. Similarly, Lin et al. @cite_17 propose a novel approach to root cause detection in microservice architectures that utilizes causal graphs. In our interviews we found that performance is often only known once a system goes live, because the interdependencies between services and their individual performance are not assessed beforehand.
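The cited approaches are considerably more involved; as a minimal sketch of the graph-based idea, the snippet below ranks root-cause candidates on a hypothetical service call graph by preferring services that look anomalous themselves while their own dependencies look healthy. Both the graph and the anomaly scores are invented for illustration.

```python
CALLS = {                        # edges: caller -> callees
    "frontend":  ["orders", "search"],
    "orders":    ["payments", "inventory"],
    "search":    [],
    "payments":  [],
    "inventory": [],
}
ANOMALY = {                      # e.g. error-rate z-scores taken from monitoring
    "frontend": 2.1, "orders": 2.4, "search": 0.1,
    "payments": 3.8, "inventory": 0.2,
}

def downstream(service, graph):
    """All services reachable from `service`, excluding the service itself."""
    seen, stack = set(), list(graph.get(service, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

def root_cause_ranking(graph, anomaly):
    scores = {}
    for service in graph:
        deps = downstream(service, graph)
        # Penalize services whose dependencies are themselves highly anomalous:
        # the anomaly then likely originates further downstream.
        dep_anomaly = max((anomaly[d] for d in deps), default=0.0)
        scores[service] = anomaly[service] - 0.5 * dep_anomaly
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for service, score in root_cause_ranking(CALLS, ANOMALY):
        print(f"{service:10s} {score:5.2f}")   # 'payments' ranks first in this example
```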
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2344277659", "2021931360", "2136340918", "2049298676" ], "abstract": [ "We consider the problem of identifying the source of failure in a network after receiving alarms or having observed symptoms. To locate the root cause accurately and timely in a large communication system is challenging because a single fault can often result in a large number of alarms, and multiple faults can occur concurrently. In this paper, we present a new fault localization method using a machine-learning approach. We propose to use logistic regression to study the correlation among network events based on end-to-end measurements. Then based on the regression model, we develop fault hypothesis that best explains the observed symptoms. Unlike previous work, the machine-learning algorithm requires neither the knowledge of dependencies among network events, nor the probabilities of faults, nor the conditional probabilities of fault propagation as input. The “low requirement” feature makes it suitable for large complex networks where accurate dependencies and prior probabilities are difficult to obtain. We then evaluate the performance of the learning algorithm with respect to the accuracy of fault hypothesis and the concentration property. Experimental results and theoretical analysis both show satisfactory performance.", "In this work, we study information leakage in timing side channels that arise in the context of shared event schedulers. Consider two processes, one of them an innocuous process (referred to as Alice) and the other a malicious one (referred to as Bob), using a common scheduler to process their jobs. Based on when his jobs get processed, Bob wishes to learn about the pattern (size and timing) of jobs of Alice. Depending on the context, knowledge of this pattern could have serious implications on Alice's privacy and security. For instance, shared routers can reveal traffic patterns, shared memory access can reveal cloud usage patterns, and suchlike. We present a formal framework to study the information leakage in shared resource schedulers using the pattern estimation error as a performance metric. In this framework, a uniform upper bound is derived to benchmark different scheduling policies. The first-come-first-serve scheduling policy is analyzed, and shown to leak significant information when the scheduler is loaded heavily. To mitigate the timing information leakage, we propose an “Accumulate-and-Serve” policy which trades in privacy for a higher delay. The policy is analyzed under the proposed framework and is shown to leak minimum information to the attacker, and is shown to have comparatively lower delay than a fixed scheduler that preemptively assigns service times irrespective of traffic patterns.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. 
In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users.", "Large-scale websites are predominantly built as a service-oriented architecture. Here, services are specialized for a certain task, run on multiple machines, and communicate with each other to serve a user's request. An anomalous change in a metric of one service can propagate to other services during this communication, resulting in overall degradation of the request. As any such degradation is revenue impacting, maintaining correct functionality is of paramount concern: it is important to find the root cause of any anomaly as quickly as possible. This is challenging because there are numerous metrics or sensors for a given service, and a modern website is usually composed of hundreds of services running on thousands of machines in multiple data centers. This paper introduces MonitorRank, an algorithm that can reduce the time, domain knowledge, and human effort required to find the root causes of anomalies in such service-oriented architectures. In the event of an anomaly, MonitorRank provides a ranked order list of possible root causes for monitoring teams to investigate. MonitorRank uses the historical and current time-series metrics of each sensor as its input, along with the call graph generated between sensors to build an unsupervised model for ranking. Experiments on real production outage data from LinkedIn, one of the largest online social networks, shows a 26 to 51 improvement in mean average precision in finding root causes compared to baseline and current state-of-the-art methods." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_8 address runtime monitoring in continuous deployment as a crucial task, especially for rapidly changing software. Since current runtime monitoring approaches fail to capture the differences in runtime behaviour between a previously deployed and a newly deployed version, they present an approach that automatically discovers an execution behaviour model by mining execution logs. Approaches like this, which gather information automatically instead of requiring manual definition, become crucial as the complexity and dynamics of distributed systems grow.
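In its simplest form, such a behaviour model is a set of observed event transitions. The sketch below (with made-up log events, not the cited tooling) builds this kind of model from the execution logs of two versions and reports transitions that occur in only one of them, which is the essence of diffing runtime behaviour across deployments.

```python
from collections import Counter

def behavior_model(log_lines):
    """Build a simple behaviour model: counts of consecutive event transitions."""
    events = [line.split()[-1] for line in log_lines]   # assume the event name is the last token
    return Counter(zip(events, events[1:]))

# Hypothetical execution logs of the deployed and the new version.
V1_LOG = ["t1 start", "t2 validate", "t3 charge", "t4 ship", "t5 done"]
V2_LOG = ["t1 start", "t2 validate", "t3 retry", "t4 charge", "t5 ship", "t6 done"]

def diff_models(old, new):
    added = set(new) - set(old)      # transitions only seen in the new version
    removed = set(old) - set(new)    # transitions that disappeared
    return added, removed

if __name__ == "__main__":
    added, removed = diff_models(behavior_model(V1_LOG), behavior_model(V2_LOG))
    print("transitions only in new version:", sorted(added))
    print("transitions only in old version:", sorted(removed))
```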
{ "cite_N": [ "@cite_8" ], "mid": [ "2899695116", "2054506507", "2134716336", "1886625064" ], "abstract": [ "Continuous deployment techniques support rapid deployment of new software versions. Usually a new version is deployed on a limited scale, its behavior is monitored and compared against the previously deployed version and either the deployment of the new version is broadened, or one reverts to the previous version. The existing monitoring approaches, however, do not capture the differences in the execution behavior between the new and the previously deployed versions. We propose an approach to automatically discover execution behavior models for the deployed and the new version using the execution logs. Differences between the two models are identified and enriched such that spurious differences, e.g., due to logging statement modifications, are mitigated. The remaining differences are visualized as cohesive diff regions within the discovered behavior model, allowing one to effectively analyze them for, e.g., anomaly detection and release decision making. To evaluate the proposed approach, we conducted case study on Nutch, an open source application, and an industrial application. We discovered the execution behavior models for the two versions of applications and identified the diff regions between them. By analyzing the regions, we detected bugs introduced in the new versions of these applications. The bugs have been reported and later fixed by the developers, thus, confirming the effectiveness of our approach.", "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "Continuous availability is a critical requirement for an important class of software systems. 
For these systems, runtime system evolution can mitigate the costs and risks associated with shutting down and restarting the system for an update. We present an architecture-based approach to runtime software evolution and highlight the role of software connectors in supporting runtime change. An initial implementation of a tool suite for supporting the runtime modification of software architectures, called ArchStudio, is presented.", "SUMMARY Substantial research in software engineering focuses on understanding the dynamic nature of software systems in order to improve software maintenance and program comprehension. This research typically makes use of automated instrumentation and profiling techniques after the fact, that is, without considering domain knowledge. In this paper, we examine another source of dynamic information that is generated from statements that have been inserted into the code base during development to draw the system administrators' attention to important run-time phenomena. We call this source communicated information (CI). Examples of CI include execution logs and system events. The availability of CI has sparked the development of an ecosystem of Log Processing Apps (LPAs) that surround the software system under analysis to monitor and document various run-time constraints. The dependence of LPAs on the timeliness, accuracy and granularity of the CI means that it is important to understand the nature of CI and how it evolves over time, both qualitatively and quantitatively. Yet, to our knowledge, little empirical analysis has been performed on CI and its evolution. In a case study on two large open source and one industrial software systems, we explore the evolution of CI by mining the execution logs of these systems and the logging statements in the source code. Our study illustrates the need for better traceability between CI and the LPAs that analyze the CI. In particular, we find that the CI changes at a high rate across versions, which could lead to fragile LPAs. We found that up to 70 of these changes could have been avoided and the impact of 15 to 80 of the changes can be controlled through the use of robust analysis techniques by LPAs. We also found that LPAs that track implementation-level CI (e.g. performance analysis) and the LPAs that monitor error messages (system health monitoring) are more fragile than LPAs that track domain-level CI (e.g. workload modelling), because the latter CI tends to be long-lived. Copyright © 2013 John Wiley & Sons, Ltd." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_14 conducted a survey of 62 multinational companies on public cloud adoption. While the use of public cloud infrastructure is on the rise, barriers such as security, regulatory compliance, and monitoring remain. Regarding monitoring, the survey showed that half of the companies rely solely on their cloud provider's monitoring dashboard. Participants noted a crucial need for quality-of-service monitoring that integrates with their own monitoring tools.
{ "cite_N": [ "@cite_14" ], "mid": [ "1978800709", "2891766162", "154256836", "2182742055" ], "abstract": [ "The group comparison reveals three potential adoption inhibitors, security, data privacy, and portability.Security concern results in an up to 26-fold increase in the non-adoption likelihood.Concerns about availability, integration, and migration complexity are not likely to hold back cloud adoption initiatives. In the context of cloud computing, risks associated with underlying technologies, risks involving service models and outsourcing, and enterprise readiness have been recognized as potential barriers for the adoption. To accelerate cloud adoption, the concrete barriers negatively influencing the adoption decision need to be identified. Our study aims at understanding the impact of technical and security-related barriers on the organizational decision to adopt the cloud. We analyzed data collected through a web survey of 352 individuals working for enterprises consisting of decision makers as well as employees from other levels within an organization. The comparison of adopter and non-adopter sample reveals three potential adoption inhibitor, security, data privacy, and portability. The result from our logistic regression analysis confirms the criticality of the security concern, which results in an up to 26-fold increase in the non-adoption likelihood. Our study underlines the importance of the technical and security perspectives for research investigating the adoption of technology.", "Cloud Computing is radically changing the way of providing and managing IT services. Big enterprises are continuously investing on Cloud technologies to streamline IT processes and substantially reduce the time to market of new services. The current Cloud service model enables companies, with a low initial investment, to easily test new services and technologies, like IoT and Big Data, on a \"ready to go\" virtualized infrastructure. However, large organizations are still facing multiple challenges in migrating business-critical services and sensitive data to Public Cloud environments. To investigate the current adoption of Public Cloud services, we interviewed IT managers and cloud architects of over sixty multinational organizations. The survey assesses both business and technical issues and requirements of current and future Cloud strategies. Our analysis shows that Cloud Service Providers (CSPs) are not yet perceived as fully able to address critical points in security, regulatory constraints and performance management. Hence, to control their public cloud services and to overcome such limitations, multinational organizations must adopt structured SLM approaches.", "Nowadays, Cloud Computing is widely used to deliver services over the Internet for both technical and economical reasons. The number of Cloud-based services has increased rapidly and strongly in the last years, and so is increased the complexity of the infrastructures behind these services. To properly operate and manage such complex infrastructures effective and efficient monitoring is constantly needed. Many works in literature have surveyed Cloud properties, features, underlying technologies (e.g. virtualization), security and privacy. However, to the best of our knowledge, these surveys lack a detailed analysis of monitoring for the Cloud. To fill this gap, in this paper we provide a survey on Cloud monitoring. 
We start analyzing motivations for Cloud monitoring, providing also definitions and background for the following contributions. Then, we carefully analyze and discuss the properties of a monitoring system for the Cloud, the issues arising from such properties and how such issues have been tackled in literature. We also describe current platforms, both commercial and open source, and services for Cloud monitoring, underlining how they relate with the properties and issues identified before. Finally, we identify open issues, main challenges and future directions in the field of Cloud monitoring.", "Research has indicated that cloud computing will become the mainstream in computing technology and an effective tool for businesses. Traditionally, companies build corporate data centers, install applications and are responsible for maintaining their IT infrastructures. However, cloud computing removes the need for organizations to own corporate data centers and install enterprise applications. Instead, cloud provides businesses with the advantage of scalability, ondemand service, flexibility and reduced cost of computing, an increase has been identified in the acceptance and adoption of this new computing model in developed and developing countries. So then this research was carried out to investigate the perception of employees in IT & Telecommunication companies and users of devices that support cloud computing, regarding cloud computing being the next generation of computing technology, the extent of cloud computing adoption and to identify the motivating factors, current issues affecting the adoption of cloud computing in Nigeria. These objectives were achieved through Quantitative and qualitative research methodologies, the basis of the research consists of two separate questionnaires that was designed and administered. The exclusion criteria are Non-IT firms, Telecommunication companies and those who are not aware of cloud computing. While the inclusion criteria are IT & Telecommunication employees, IT managers and people who are aware of cloud computing. Questionnaires were designed and distributed using survey monkey, an online survey application. A number of semi-structured interviews were conducted through Skype with some employees and IT managers to produce a further, in-depth investigation. Analysis of the findings from both interviews and questionnaire served to provide an insight to the objectives of this research. Following the outcome of the research, Proper awareness by the cloud service providers on the risk and benefits of cloud, availability of more cloud service providers and free trial of cloud services to clients for a stipulated period will encourage adoption of cloud computing." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Similarly, Knoche and Hasselbring @cite_1 conducted a survey of German experts on microservice adoption. Drivers for adoption are scalability, maintainability, and development speed, whereas the barriers are mainly operational in nature: operations departments resist microservices because of the change to their tasks. On the technical level, running distributed applications that are prone to partial failures, and monitoring them, is a significant challenge.
{ "cite_N": [ "@cite_1" ], "mid": [ "2907869797", "1978800709", "2736340806", "2082539579" ], "abstract": [ "Microservices are an architectural style for software which currently receives a lot of attention in both industry and academia. Several companies employ microservice architectures with great success, and there is a wealth of blog posts praising their advantages. Especially so-called Internet-scale systems use them to satisfy their enormous scalability requirements and to rapidly deliver new features to their users. But microservices are not only popular with large, Internet-scale systems. Many traditional companies are also considering whether microservices are a viable option for their applications. However, these companies may have other motivations to employ microservices, and see other barriers which could prevent them from adopting microservices. Furthermore, these drivers and barriers possibly differ among industry sectors. In this article, we present the results of a survey on drivers and barriers for microservice adoption among professionals in Germany. In addition to overall drivers and barriers, we particularly focus on the use of microservices to modernize existing software, with special emphasis on implications for runtime performance and transactionality. We observe interesting differences between early adopters who emphasize scalability of their Internet-scale systems, compared to traditional companies which emphasize maintainability of their IT systems.", "The group comparison reveals three potential adoption inhibitors, security, data privacy, and portability.Security concern results in an up to 26-fold increase in the non-adoption likelihood.Concerns about availability, integration, and migration complexity are not likely to hold back cloud adoption initiatives. In the context of cloud computing, risks associated with underlying technologies, risks involving service models and outsourcing, and enterprise readiness have been recognized as potential barriers for the adoption. To accelerate cloud adoption, the concrete barriers negatively influencing the adoption decision need to be identified. Our study aims at understanding the impact of technical and security-related barriers on the organizational decision to adopt the cloud. We analyzed data collected through a web survey of 352 individuals working for enterprises consisting of decision makers as well as employees from other levels within an organization. The comparison of adopter and non-adopter sample reveals three potential adoption inhibitor, security, data privacy, and portability. The result from our logistic regression analysis confirms the criticality of the security concern, which results in an up to 26-fold increase in the non-adoption likelihood. Our study underlines the importance of the technical and security perspectives for research investigating the adoption of technology.", "Many large applications are now built using collections of microservices, each of which is deployed in isolated containers and which interact with each other through the use of remote procedure calls (RPCs). The use of microservices improves scalability -- each component of an application can be scaled independently -- and deployability. However, such applications are inherently distributed and current tools do not provide mechanisms to reason about and ensure their global behavior. 
In this paper we argue that recent advances in formal methods and software packet processing pave the path towards building mechanisms that can ensure correctness for such systems, both when they are being built and at runtime. These techniques impose minimal runtime overheads and are amenable to production deployments.", "Display Omitted ContextNumerous open source software projects are based on volunteers collaboration and require a continuous influx of newcomers for their continuity. Newcomers face barriers that can lead them to give up. These barriers hinder both developers willing to make a single contribution and those willing to become a project member. ObjectiveThis study aims to identify and classify the barriers that newcomers face when contributing to open source software projects. MethodWe conducted a systematic literature review of papers reporting empirical evidence regarding the barriers that newcomers face when contributing to open source software (OSS) projects. We retrieved 291 studies by querying 4 digital libraries. Twenty studies were identified as primary. We performed a backward snowballing approach, and searched for other papers published by the authors of the selected papers to identify potential studies. Then, we used a coding approach inspired by open coding and axial coding procedures from Grounded Theory to categorize the barriers reported by the selected studies. ResultsWe identified 20 studies providing empirical evidence of barriers faced by newcomers to OSS projects while making a contribution. From the analysis, we identified 15 different barriers, which we grouped into five categories: social interaction, newcomers' previous knowledge, finding a way to start, documentation, and technical hurdles. We also classified the problems with regard to their origin: newcomers, community, or product. ConclusionThe results are useful to researchers and OSS practitioners willing to investigate or to implement tools to support newcomers. We mapped technical and non-technical barriers that hinder newcomers' first contributions. The most evidenced barriers are related to socialization, appearing in 75 (15 out of 20) of the studies analyzed, with a high focus on interactions in mailing lists (receiving answers and socialization with other members). There is a lack of in-depth studies on technical issues, such as code issues. We also noticed that the majority of the studies relied on historical data gathered from software repositories and that there was a lack of experiments and qualitative studies in this area." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
Gamez- @cite_9 analyzed the RESTful APIs of cloud providers, identifying requirements for API governance and noting a lack of standardization.
{ "cite_N": [ "@cite_9" ], "mid": [ "2114850763", "2243050748", "2130739136", "2083647394" ], "abstract": [ "Cloud infrastructure providers may form Cloud federations to cope with peaks in resource demand and to make large-scale service management simpler for service providers. To realize Cloud federations, a number of technical and managerial difficulties need to be solved. We present ongoing work addressing three related key management topics, namely, specification, scheduling, and monitoring of services. Service providers need to be able to influence how their resources are placed in Cloud federations, as federations may cross national borders or include companies in direct competition with the service provider. Based on related work in the RESERVOIR project, we propose a way to define service structure and placement restrictions using hierarchical directed acyclic graphs. We define a model for scheduling in Cloud federations that abides by the specified placement constraints and minimizes the risk of violating Service-Level Agreements. We present a heuristic that helps the model determine which virtual machines (VMs) are suitable candidates for migration. To aid the scheduler, and to provide unified data to service providers, we also propose a monitoring data distribution architecture that introduces cross-site compatibility by means of semantic metadata annotations.", "RESTful API documentation is expensive to produce and maintain due to the lack of reusable tools and automated solutions. Most RESTful APIs are documented manually and the API developers are responsible for keeping the documentation up to date as the API evolves making the process both costly and error-prone. In this paper we introduce a novel technique using an HTTP proxy server that can be used to automatically generate RESTful API documentation and demonstrate SpyREST, an example implementation of the proposed technique. SpyREST uses a proxy to intercept example API calls and intelligently produces API documentation for RESTful Web APIs by processing the request and response data. Using the proposed HTTP proxy server based technique, RESTful API developers can significantly reduce the cost of producing and maintaining API documentation by replacing a large manual process with an automated process.", "Representational State Transfer (ReST) architecture provides a set of constraints that drive design decisions towards architectural properties such as interoperability, evolvability and scalability. Designing a ReSTful service API involves finding resources and their relationships, selecting uniform operations for each resource, and defining data formats for them. It is often a non-trivial exercise to refine a functional specification, expressed in terms of arbitrary actions, to a resource-oriented, descriptive state information content. We argue that this process can be described as a series of model transformations, starting from service functionality and gradually refining the phase products until a ReSTful service API is reached. This paper outlines the process phases, transformations and intermediate models based on our experiences in developing ReSTful services and service APIs at Nokia Research Center. The process captures our understanding on how to systematically transform functional specifications into ReSTful Web service interfaces.", "The elasticity promised by cloud computing does not come for free. 
Providers need to reserve resources to allow users to scale on demand, and cope with workload variations, which results in low utilization. The current response to this low utilization is to re-sell unused resources with no Service Level Objectives (SLOs) for availability. In this paper, we show how to make some of these reclaimable resources more valuable by providing strong, long-term availability SLOs for them. These SLOs are based on forecasts of how many resources will remain unused during multi-month periods, so users can do capacity planning for their long-running services. By using confidence levels for the predictions, we give service providers control over the risk of violating the availability SLOs, and allow them trade increased risk for more resources to make available. We evaluated our approach using 45 months of workload data from 6 production clusters at Google, and show that 6--17 of the resources can be re-offered with a long-term availability of 98.9 or better. A conservative analysis shows that doing so may increase the profitability of selling reclaimed resources by 22--60 ." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
While not an empirical study, @cite_16 discuss the monitoring challenges of holistic cloud applications. The scale and complexity of applications are identified as a main challenge. Related to observability, incomplete and inaccurate views of the overall system, as well as fault localization, are further identified challenges.
{ "cite_N": [ "@cite_16" ], "mid": [ "2107557955", "2047659792", "2587422534", "154256836" ], "abstract": [ "Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.", "Although cloud computing has become an important topic over the last couple of years, the development of cloud-specific monitoring systems has been neglected. This is surprising considering their importance for metering services and, thus, being able to charge customers. In this paper we introduce a monitoring architecture that was developed and is currently implemented in the EASI-CLOUDS project. The demands on cloud monitoring systems are manifold. Regular checks of the SLAs and the precise billing of the resource usage, for instance, require the collection and converting of infrastructure readings in short intervals. To ensure the scalability of the whole cloud, the monitoring system must scale well without wasting resources. In our approach, the monitoring data is therefore organized in a distributed and easily scalable tree structure and it is based on the Device Management Specification of the OMA and the DMT Admin Specification of the OSGi. Its core component includes the interface, the root of the tree and extension points for sub trees which are implemented and locally managed by the data suppliers themselves. In spite of the variety and the distribution of the data, their access is generic and location-transparent. Besides simple suppliers of monitoring data, we outline a component that provides the means for storing and preprocessing data. The motivation for this component is that the monitoring system can be adjusted to its subscribers - while it usually is the other way round. In EASI-CLOUDS, the so-called Context Stores aggregate and prepare data for billing and other cloud components.", "Online monitoring, providing the real-time status information of servers, is indispensable for the management of distributed systems, e.g. failure detection and resource scheduling. The main design challenges for distributed monitoring systems include scalability, fine granularity, reliability and low overheads. 
And the challenges are growing with the increase of the scales of the distributed systems. To address the above problems, this paper studies improvements to online distributed monitoring systems (ODMSs) from three aspects: online compression algorithm, online compression reliability, and data representation for information interchanges. We summarize and classify the existing online compression algorithms to identify some research gaps that may represent opportunities for future research. A simple solution is proposed to address the problem that the inaccuracy of compression algorithms may be caused by some failures of distributed systems. A bitmap-like data format is presented to reduce the per-node overheads and the overheads of the management node in ODMSs, and compared with other existing formats used in the monitoring system both in mathematical analysis and practical experiment. The results show that the bitmap-like data format achieves best performance overall.", "Nowadays, Cloud Computing is widely used to deliver services over the Internet for both technical and economical reasons. The number of Cloud-based services has increased rapidly and strongly in the last years, and so is increased the complexity of the infrastructures behind these services. To properly operate and manage such complex infrastructures effective and efficient monitoring is constantly needed. Many works in literature have surveyed Cloud properties, features, underlying technologies (e.g. virtualization), security and privacy. However, to the best of our knowledge, these surveys lack a detailed analysis of monitoring for the Cloud. To fill this gap, in this paper we provide a survey on Cloud monitoring. We start analyzing motivations for Cloud monitoring, providing also definitions and background for the following contributions. Then, we carefully analyze and discuss the properties of a monitoring system for the Cloud, the issues arising from such properties and how such issues have been tackled in literature. We also describe current platforms, both commercial and open source, and services for Cloud monitoring, underlining how they relate with the properties and issues identified before. Finally, we identify open issues, main challenges and future directions in the field of Cloud monitoring." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_10 give an overview of the state of the art in application performance monitoring (APM), describing typical capabilities and available APM software. They find APM to be a solution for monitoring and analyzing cloud environments, but note open challenges in root cause detection, setup effort, and interoperability. APM can no longer be understood as a purely technical topic; it needs to incorporate business and organizational aspects as well.
{ "cite_N": [ "@cite_10" ], "mid": [ "2606883211", "2054506507", "2107557955", "2003529142" ], "abstract": [ "The performance of application systems has a direct impact on business metrics. For example, companies lose customers and revenue in case of poor performance such as high response times. Application performance management (APM) aims to provide the required processes and tools to have a continuous and up-to-date picture of relevant performance measures during operations, as well as to support the detection and resolution of performance-related incidents. In this tutorial paper, we provide an overview of the state of the art in APM in industrial practice and academic research, highlight current challenges, and outline future research directions.", "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). 
Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.", "Performance analysis is a crucial step in HPC architectures including clouds. Traditional performance analysis methodologies that were proposed, implemented, and enacted are functional with the objective of identifying bottlenecks or issues related to memory, programming languages, hardware, and virtualization aspects. However, the need for energy efficient architectures in highly scalable computing environments, such as, Grid or Cloud, has widened the research thrust on developing performance analysis methodologies that analyze the energy inefficiency of HPC applications or their associated hardware. This paper surveys the performance analysis methodologies that investigates into the available energy monitoring and energy awareness mechanisms for HPC architectures. In addition, the paper validates the existing tools in terms of overhead, portability, and user-friendly parameters by conducting experiments at HPCCLoud Research Laboratory at our premise. This research work will promote HPC application developers to select an apt monitoring mechanism and HPC tool developers to augment required energy monitoring mechanisms which fit well with their basic monitoring infrastructures." ] }
1907.12240
2966580058
Business success of companies heavily depends on the availability and performance of their client applications. Due to modern development paradigms such as DevOps and microservice architectural styles, applications are decoupled into services with complex interactions and dependencies. Although these paradigms enable individual development cycles with reduced delivery times, they cause several challenges to manage the services in distributed systems. One major challenge is to observe and monitor such distributed systems. This paper provides a qualitative study to understand the challenges and good practices in the field of observability and monitoring of distributed systems. In 28 semi-structured interviews with software professionals we discovered increasing complexity and dynamics in that field. Especially observability becomes an essential prerequisite to ensure stable services and further development of client applications. However, the participants mentioned a discrepancy in the awareness regarding the importance of the topic, both from the management as well as from the developer perspective. Besides technical challenges, we identified a strong need for an organizational concept including strategy, roles and responsibilities. Our results support practitioners in developing and implementing systematic observability and monitoring for distributed systems.
@cite_3 provide insight into commercial cloud monitoring tools, presenting state-of-the-art features, identifying shortcomings, and deriving future areas of research from them. Information aggregation across different layers of abstraction, a broad range of measurable metrics, and extensibility are seen as critical success factors. The tools were found to lack standardization of monitoring processes and metrics.
{ "cite_N": [ "@cite_3" ], "mid": [ "2107557955", "2028837511", "154256836", "2047659792" ], "abstract": [ "Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VM, storage, network, appliances, etc.), the physical resources they share, the applications running on them and data hosted on them. Applications and resources configuration in cloud computing environment is quite challenging considering a large number of heterogeneous cloud resources. Further, considering the fact that at given point of time, there may be need to change cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) for meet application QoS requirements under uncertainties (resource failure, resource overload, workload spike, etc.). Hence, cloud monitoring tools can assist a cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting the service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.", "In this paper, we address the cloud service trustworthiness evaluation problem, which in essence is a multi-attribute decision-making problem, by proposing a novel evaluation model based on the fuzzy gap measurement and the evidential reasoning approach. There are many sources of uncertainties in the process of cloud service trustworthiness evaluation. In addition to the intrinsic uncertainties, cloud service providers face the problem of discrepant evaluation information given by different users from different perspectives. To address these problems, we develop a novel fuzzy gap evaluation approach to assess cloud service trustworthiness and to provide evaluation values from different perspectives. From the evaluation values, the perception-importance, delivery-importance, and perception-delivery gaps are generated. These three gaps reflect the discrepancy evaluation of cloud service trustworthiness in terms of perception utility, delivery utility, and importance utility, respectively. Finally, the gap measurement of each perspective is represented by a belief structure and aggregated using the evidential reasoning approach to generate final evaluation results for informative and robust decision making. From this hybrid two-stage evaluation process, cloud service providers can get improvement suggestions from intermediate information derived from the gap measurement, which is the main advantage of this evaluation process.", "Nowadays, Cloud Computing is widely used to deliver services over the Internet for both technical and economical reasons. The number of Cloud-based services has increased rapidly and strongly in the last years, and so is increased the complexity of the infrastructures behind these services. To properly operate and manage such complex infrastructures effective and efficient monitoring is constantly needed. Many works in literature have surveyed Cloud properties, features, underlying technologies (e.g. 
virtualization), security and privacy. However, to the best of our knowledge, these surveys lack a detailed analysis of monitoring for the Cloud. To fill this gap, in this paper we provide a survey on Cloud monitoring. We start analyzing motivations for Cloud monitoring, providing also definitions and background for the following contributions. Then, we carefully analyze and discuss the properties of a monitoring system for the Cloud, the issues arising from such properties and how such issues have been tackled in literature. We also describe current platforms, both commercial and open source, and services for Cloud monitoring, underlining how they relate with the properties and issues identified before. Finally, we identify open issues, main challenges and future directions in the field of Cloud monitoring.", "Although cloud computing has become an important topic over the last couple of years, the development of cloud-specific monitoring systems has been neglected. This is surprising considering their importance for metering services and, thus, being able to charge customers. In this paper we introduce a monitoring architecture that was developed and is currently implemented in the EASI-CLOUDS project. The demands on cloud monitoring systems are manifold. Regular checks of the SLAs and the precise billing of the resource usage, for instance, require the collection and converting of infrastructure readings in short intervals. To ensure the scalability of the whole cloud, the monitoring system must scale well without wasting resources. In our approach, the monitoring data is therefore organized in a distributed and easily scalable tree structure and it is based on the Device Management Specification of the OMA and the DMT Admin Specification of the OSGi. Its core component includes the interface, the root of the tree and extension points for sub trees which are implemented and locally managed by the data suppliers themselves. In spite of the variety and the distribution of the data, their access is generic and location-transparent. Besides simple suppliers of monitoring data, we outline a component that provides the means for storing and preprocessing data. The motivation for this component is that the monitoring system can be adjusted to its subscribers - while it usually is the other way round. In EASI-CLOUDS, the so-called Context Stores aggregate and prepare data for billing and other cloud components." ] }
1907.12253
2965663253
One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets.
Most of these approaches are applied to non-occluded objects against clean backgrounds, which may prevent their application to natural images. Sun et al. @cite_41 conduct experiments on real images from Pix3D, a large-scale dataset with aligned ground-truth 3D shapes, but do not consider the problem of occlusion. We are concerned with predicting the shape of objects in natural scenes, which may be partly occluded. Our approach improves the state of the art for object point set generation and is extended to reconstruct beyond occlusion under the guidance of completed silhouettes. Our silhouette guidance is closely related to the human depth estimation of Rematas et al. @cite_29 . However, Rematas et al. use the visible silhouette (semantic segmentation) rather than a complete silhouette, making it hard to predict overlapped (occluded) regions. In contrast, our approach conditions on the predicted complete silhouette to resolve occlusion ambiguity and predicts the complete 3D shape rather than 2.5D depth points.
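To make the silhouette-guidance idea concrete, the following is a minimal sketch (in Python/PyTorch, assuming the library is available) of conditioning a point-set generation network on a predicted complete silhouette simply by stacking it with the RGB image as an extra input channel. The layer sizes, the number of output points, and the class name are illustrative assumptions, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class SilhouetteGuidedPointSetNet(nn.Module):
    """Minimal sketch: the predicted complete silhouette is stacked with the
    RGB image as a fourth input channel; a small encoder then regresses a
    fixed-size point set. Sizes are illustrative placeholders."""
    def __init__(self, n_points=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.point_head = nn.Linear(64, n_points * 3)
        self.n_points = n_points

    def forward(self, image, complete_silhouette):
        x = torch.cat([image, complete_silhouette], dim=1)
        return self.point_head(self.features(x)).view(-1, self.n_points, 3)

# Usage: one 128x128 image and its predicted complete silhouette.
net = SilhouetteGuidedPointSetNet()
pts = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))  # (1, 1024, 3)
```

The conditioning channel lets the decoder attribute occluded pixels to the object even though they carry no foreground appearance.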
{ "cite_N": [ "@cite_41", "@cite_29" ], "mid": [ "2055686029", "2604236302", "2011792403", "2797394534" ], "abstract": [ "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.", "We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.", "We present a novel approach for detecting objects and estimating their 3D pose in single images of cluttered scenes. 
Objects are given in terms of 3D models without accompanying texture cues. A deformable parts-based model is trained on clusters of silhouettes of similar poses and produces hypotheses about possible object locations at test time. Objects are simultaneously segmented and verified inside each hypothesis bounding region by selecting the set of superpixels whose collective shape matches the model silhouette. A final iteration on the 6-DOF object pose minimizes the distance between the selected image contours and the actual projection of the 3D model. We demonstrate successful grasps using our detection and pose estimate with a PR2 robot. Extensive evaluation with a novel ground truth dataset shows the considerable benefit of using shape-driven cues for detecting objects in heavily cluttered scenes.", "We introduce a novel method for robust and accurate 3D object pose estimation from a single color image under large occlusions. Following recent approaches, we first predict the 2D projections of 3D points related to the target object and then compute the 3D pose from these correspondences using a geometric method. Unfortunately, as the results of our experiments show, predicting these 2D projections using a regular CNN or a Convolutional Pose Machine is highly sensitive to partial occlusions, even when these methods are trained with partially occluded examples. Our solution is to predict heatmaps from multiple small patches independently and to accumulate the results to obtain accurate and robust predictions. Training subsequently becomes challenging because patches with similar appearances but different positions on the object correspond to different heatmaps. However, we provide a simple yet effective solution to deal with such ambiguities. We show that our approach outperforms existing methods on two challenging datasets: The Occluded LineMOD dataset and the YCB-Video dataset, both exhibiting cluttered scenes with highly occluded objects." ] }
1907.12253
2965663253
One major challenge in 3D reconstruction is to infer the complete shape geometry from partial foreground occlusions. In this paper, we propose a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion. Given the image and a silhouette of the visible region, our approach completes the silhouette of the occluded region and then generates a point cloud. We show improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance. We also improve state-of-the-art for 3D shape prediction with a 2D reprojection loss from multiple synthetic views and a surface-based smoothing and refinement step. Experiments demonstrate the efficacy of our approach both quantitatively and qualitatively on synthetic and real scene datasets.
Occlusions have long been an obstacle in multi-view reconstruction. Solutions have been proposed to recover portions of surfaces from single views with synthetic apertures @cite_6 @cite_31 , or to otherwise improve the robustness of matching and completion functions across multiple views @cite_38 @cite_23 @cite_13 . Other works decompose a scene into layered depth maps from RGBD images @cite_27 or video @cite_9 and then seek to complete the occluded portions of the maps, but errors in the layered segmentation can severely degrade the recovery of the occluded region. Learning-based approaches @cite_32 @cite_25 @cite_15 have posed recovery from occlusion as a 2D semantic segmentation completion task. Ehsani et al. @cite_10 propose to complete both the silhouette and the texture of an occluded object. Our silhouette completion network is most similar to theirs, but we ease the task by predicting only the complete silhouette rather than the full texture, and we demonstrate better performance with an up-sampling based convolutional decoder instead of the fully connected layers used by Ehsani et al. Moreover, we go further and predict the complete 3D shape of the occluded object.
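The decoder design mentioned above (up-sampling plus convolution instead of fully connected layers) can be sketched as follows. This is a simplified, hypothetical stand-in for the silhouette completion network, with illustrative channel counts rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SilhouetteCompletionNet(nn.Module):
    """Sketch of an encoder-decoder that maps the RGB image plus the visible
    silhouette to a completed silhouette. The decoder uses bilinear
    up-sampling followed by convolutions (no fully connected layers)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),  # logits of the complete silhouette
        )

    def forward(self, image, visible_silhouette):
        x = torch.cat([image, visible_silhouette], dim=1)
        return self.decoder(self.encoder(x))

# Usage: a 128x128 image with its visible-region mask.
net = SilhouetteCompletionNet()
completed_logits = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
```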
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_10", "@cite_9", "@cite_32", "@cite_6", "@cite_27", "@cite_23", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "2055686029", "244217497", "2101092098", "2462462929" ], "abstract": [ "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.", "The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.", "We present an approach for 3D reconstruction from multiple video streams taken by static, synchronized and calibrated cameras that is capable of enforcing temporal consistency on the reconstruction of successive frames. 
Our goal is to improve the quality of the reconstruction by finding corresponding pixels in subsequent frames of the same camera using optical flow, and also to at least maintain the quality of the single time-frame reconstruction when these correspondences are wrong or cannot be found. This allows us to process scenes with fast motion, occlusions and self- occlusions where optical flow fails for large numbers of pixels. To this end, we modify the belief propagation algorithm to operate on a 3D graph that includes both spatial and temporal neighbors and to be able to discard messages from outlying neighbors. We also propose methods for introducing a bias and for suppressing noise typically observed in uniform regions. The bias encapsulates information about the background and aids in achieving a temporally consistent reconstruction and in the mitigation of occlusion related errors. We present results on publicly available real video sequences. We also present quantitative comparisons with results obtained by other researchers.", "In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail." ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
The last couple of decades have seen a plethora of proposals for user authentication. In general, existing schemes suffer from at least one of the following drawbacks: (a) they require a dedicated device, (b) they are proprietary, (c) they involve a shared secret, and/or (d) they still require a traditional password. Herein we discuss only a small subset of schemes and refer to the paper by @cite_9 for an extensive evaluation of related work. Using their framework we evaluated ZeroTwo and present the results in Table .
{ "cite_N": [ "@cite_9" ], "mid": [ "2030112111", "1719934069", "1989085188", "2023924043" ], "abstract": [ "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.", "Can you securely prove your identity to a host computer by using no dedicated software at your terminal and no dedicated token at your hands? Conventional password checking schemes do not need such a software and hardware but have a disadvantage that an attacker who has correctly observed an input password by peeping or wiretapping can perfectly impersonate the corresponding user. Conventional dynamic (one-time) password schemes or zero-knowledge identification schemes can be securely implemented but require special software or hardware or memorandums. This paper develops human-friendly identification schemes such that a human prover knowing a secret key in her or his brain is asked a visual question by a machine verifier, who then checks if an answer sent from the prover matches the question with respect to the key. The novelty of these schemes lies in their ways of displaying questions. This paper also examines an application of the human identification schemes to human-computer cryptographic communication protocols.", "User authentication systems are at an impasse. The most ubiquitous method -- the password -- has numerous problems, including susceptibility to unintentional exposure via phishing and cross-site password reuse. Second-factor authentication schemes have the potential to increase security but face usability and deployability challenges. For example, conventional second-factor schemes change the user authentication experience. Furthermore, while more secure than passwords, second-factor schemes still fail to provide sufficient protection against (single-use) phishing attacks. We present PhoneAuth, a system intended to provide security assurances comparable to or greater than that of conventional two-factor authentication systems while offering the same authentication experience as traditional passwords alone. Our work leverages the following key insights. First, a user's personal device (eg a phone) can communicate directly with the user's computer (and hence the remote web server) without any interaction with the user. 
Second, it is possible to provide a layered approach to security, whereby a web server can enact different policies depending on whether or not the user's personal device is present. We describe and evaluate our server-side, Chromium web browser, and Android phone implementations of PhoneAuth.", "Mutual authentication is important in a mobile pay-TV system. Traditional authentication schemes make use of one-to-one delivery, that is, one authentication message per request is delivered from a head-end system to subscriber. This delivery occupies too much bandwidth and therefore is inefficient and costly. One-to-many authentication scheme for access control in mobile pay-TV systems was proposed by in 2009. In one-to-many authentication scheme, only one authentication message for multiple requests is broadcasted from the head-end system (HES) to subscribers. claimed that their scheme is secure and provides anonymous authentication for protecting user privacy. However, the authors demonstrate that their scheme has a critical weakness. An attacker without any secret information can not only successfully impersonate mobile set (MS) to cheat the HES but also impersonate HES to cheat MS. The authors result is important for security engineers who design and develop user authentication systems. Afterwards, the authors design a novel one-to-many authentication scheme from bilinear pairings. They give the formal security proof in the random oracle model. In addition, they present the performance analysis of our scheme. The analysis results showed that their novel authentication scheme has shorter transmission message and can be applied in the environment which has limited bandwidth. At the same time, their scheme is also the first secure one-to-many authentication scheme for access control in pay-TV systems." ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
Bonneau @cite_17 previously proposed a password-based authentication protocol designed to avoid revealing the password to the server (using JavaScript), which requires neither a software update on the client side nor a separate authentication device. @cite_8 similarly focused on the restrictions imposed by legacy systems to address the issue of weak passwords.
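As a generic illustration of the underlying idea that the plaintext password never leaves the client, the sketch below derives a site-bound verifier before transmission. This is not Bonneau's exact protocol; the salt handling, iteration count, and domain binding shown here are assumptions for illustration only.

```python
import hashlib
import hmac

def derive_verifier(password: str, domain: str, salt: bytes) -> bytes:
    # Key stretching bound to the site's domain, so the raw password never
    # leaves the client and a phished verifier is useless elsewhere.
    # (Illustrative parameters, not any specific deployed scheme.)
    return hashlib.pbkdf2_hmac("sha256",
                               password.encode() + domain.encode(),
                               salt, 100_000)

# Client side (in-page JavaScript in the original proposal; Python here):
salt = b"per-user-salt-from-server"
verifier = derive_verifier("correct horse battery staple", "example.com", salt)

# Server side: compares against the verifier stored at registration and
# never sees the plaintext password.
stored = verifier
print(hmac.compare_digest(verifier, stored))  # True
```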
{ "cite_N": [ "@cite_8", "@cite_17" ], "mid": [ "1542316315", "2058107309", "93892664", "2030112111" ], "abstract": [ "In this paper we address the problem of secure communication and authentication in ad-hoc wireless networks. This is a difficult problem, as it involves bootstrapping trust between strangers. We present a user-friendly solution, which provides secure authentication using almost any established public-key-based key exchange protocol, as well as inexpensive hash-based alternatives. In our approach, devices exchange a limited amount of public information over a privileged side channel, which will then allow them to complete an authenticated key exchange protocol over the wireless link. Our solution does not require a public key infrastructure, is secure against passive attacks on the privileged side channel and all attacks on the wireless link, and directly captures users’ intuitions that they want to talk to a particular previously unknown device in their physical proximity. We have implemented our system in Java for a variety of different devices, communication media, and key", "Generally, if a user wants to use numerous different network services, he she must register himself herself to every service providing server. It is extremely hard for users to remember these different identities and passwords. In order to resolve this problem, various multi-server authentication protocols have been proposed. Recently, analyzed Hsiang and Shih's multi-server authentication protocol and proposed an improved dynamic identity based authentication protocol for multi-server architecture. They claimed that their protocol provides user's anonymity, mutual authentication, the session key agreement and can resist several kinds of attacks. However, through careful analysis, we find that 's protocol is still vulnerable to leak-of-verifier attack, stolen smart card attack and impersonation attack. Besides, since there is no way for the control server CS to know the real identity of the user, the authentication and session key agreement phase of 's protocol is incorrect. We propose an efficient and security dynamic identity based authentication protocol for multi-server architecture that removes the aforementioned weaknesses. The proposed protocol is extremely suitable for use in distributed multi-server architecture since it provides user's anonymity, mutual authentication, efficient, and security.", "Traditional password based authentication scheme is vulnerable to shoulder surfing attack. So if an attacker sees a legitimate user to enter password then it is possible for the attacker to use that credentials later to illegally login into the system and may do some malicious activities. Many methodologies exist to prevent such attack. These methods are either partially observable or fully observable to the attacker. In this paper we have focused on detection of shoulder surfing attack rather than prevention. We have introduced the concept of tag digit to create a trap known as honeypot. Using the proposed methodology if the shoulder surfers try to login using others’ credentials then there is a high chance that they will be caught red handed. Comparative analysis shows that unlike the existing preventive schemes, the proposed methodology does not require much computation from users end. 
Thus from security and usability perspective the proposed scheme is quite robust and powerful.", "We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals." ] }
1907.12398
2965570822
User authentication can rely on various factors (e.g., a password, a cryptographic key, biometric data) but should not reveal any secret or private information. This seemingly paradoxical feat can be achieved through zero-knowledge proofs. Unfortunately, naive password-based approaches still prevail on the web. Multi-factor authentication schemes address some of the weaknesses of the traditional login process, but generally have deployability issues or degrade usability even further as they assume users do not possess adequate hardware. This assumption no longer holds: smartphones with biometric sensors, cameras, short-range communication capabilities, and unlimited data plans have become ubiquitous. In this paper, we show that, assuming the user has such a device, both security and usability can be drastically improved using an augmented password-authenticated key agreement (PAKE) protocol and message authentication codes.
Sound-Proof @cite_10 is a recent system that relies on ambient sound recorded by both the smartphone and the browser to verify their proximity. One of the main goals of Sound-Proof is to provide a seamless experience to users, i.e., the phone need not even be handled for the authentication process to complete. However, the user still has to type a password in the browser, which comes with the issues we discussed previously. Moreover, the complete seamlessness of Sound-Proof is not compatible with our view that certain actions should be explicitly authorized on a trusted device.
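A rough, hypothetical illustration of the proximity check underlying such an approach: two ambient-noise recordings are compared via normalized cross-correlation, and the second factor passes only if they are similar enough. The similarity measure and threshold below are simplifications, not Sound-Proof's actual per-band scoring.

```python
import numpy as np

def similarity(sig_a, sig_b):
    """Peak of the normalized cross-correlation between two recordings;
    a crude stand-in for a proper ambient-audio similarity score."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / len(a)
    return corr.max()

def devices_colocated(phone_audio, browser_audio, threshold=0.3):
    """Second factor passes only if the two recordings are similar enough."""
    return similarity(phone_audio, browser_audio) >= threshold

# Toy usage: identical noise (same room) vs. independent noise (different rooms).
noise = np.random.randn(16000)
print(devices_colocated(noise, noise))                   # True
print(devices_colocated(noise, np.random.randn(16000)))  # almost surely False
```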
{ "cite_N": [ "@cite_10" ], "mid": [ "1955134710", "2953351357", "2087355434", "2400528202" ], "abstract": [ "Two-factor authentication protects online accounts even if passwords are leaked. Most users, however, prefer password-only authentication. One reason why two-factor authentication is so unpopular is the extra steps that the user must complete in order to log in. Currently deployed two-factor authentication mechanisms require the user to interact with his phone to, for example, copy a verification code to the browser. Two-factor authentication schemes that eliminate user-phone interaction exist, but require additional software to be deployed. In this paper we propose Sound-Proof, a usable and deployable two-factor authentication mechanism. Sound-Proof does not require interaction between the user and his phone. In Sound-Proof the second authentication factor is the proximity of the user's phone to the device being used to log in. The proximity of the two devices is verified by comparing the ambient noise recorded by their microphones. Audio recording and comparison are transparent to the user, so that the user experience is similar to the one of password-only authentication. Sound-Proof can be easily deployed as it works with current phones and major browsers without plugins. We build a prototype for both Android and iOS. We provide empirical evidence that ambient noise is a robust discriminant to determine the proximity of two devices both indoors and outdoors, and even if the phone is in a pocket or purse. We conduct a user study designed to compare the perceived usability of Sound-Proof with Google 2-Step Verification. Participants ranked Sound-Proof as more usable and the majority would be willing to use Sound-Proof even for scenarios in which two-factor authentication is optional.", "Two-factor authentication protects online accounts even if passwords are leaked. Most users, however, prefer password-only authentication. One reason why two-factor authentication is so unpopular is the extra steps that the user must complete in order to log in. Currently deployed two-factor authentication mechanisms require the user to interact with his phone to, for example, copy a verification code to the browser. Two-factor authentication schemes that eliminate user-phone interaction exist, but require additional software to be deployed. In this paper we propose Sound-Proof, a usable and deployable two-factor authentication mechanism. Sound-Proof does not require interaction between the user and his phone. In Sound-Proof the second authentication factor is the proximity of the user's phone to the device being used to log in. The proximity of the two devices is verified by comparing the ambient noise recorded by their microphones. Audio recording and comparison are transparent to the user, so that the user experience is similar to the one of password-only authentication. Sound-Proof can be easily deployed as it works with current phones and major browsers without plugins. We build a prototype for both Android and iOS. We provide empirical evidence that ambient noise is a robust discriminant to determine the proximity of two devices both indoors and outdoors, and even if the phone is in a pocket or purse. We conduct a user study designed to compare the perceived usability of Sound-Proof with Google 2-Step Verification. 
Participants ranked Sound-Proof as more usable and the majority would be willing to use Sound-Proof even for scenarios in which two-factor authentication is optional.", "We propose to establish a secure communication channel among devices based on similar audio patterns. Features from ambient audio are used to generate a shared cryptographic key between devices without exchanging information about the ambient audio itself or the features utilized for the key generation process. We explore a common audio-fingerprinting approach and account for the noise in the derived fingerprints by employing error correcting codes. This fuzzy-cryptography scheme enables the adaptation of a specific value for the tolerated noise among fingerprints based on environmental conditions by altering the parameters of the error correction and the length of the audio samples utilized. In this paper, we experimentally verify the feasibility of the protocol in four different realistic settings and a laboratory experiment. The case studies include an office setting, a scenario where an attacker is capable of reproducing parts of the audio context, a setting near a traffic loaded road, and a crowded canteen environment. We apply statistical tests to show that the entropy of fingerprints based on ambient audio is high. The proposed scheme constitutes a totally unobtrusive but cryptographically strong security mechanism based on contextual information.", "We explore the threat of smartphone malware with access to on-board sensors, which opens new avenues for illicit collection of private information. While existing work shows that such “sensory malware” can convey raw sensor data (e.g., video and audio) to a remote server, these approaches lack stealthiness, incur significant communication and computation overhead during data transmission and processing, and can easily be defeated by existing protections like denying installation of applications with access to both sensitive sensors and the network. We present Soundcomber, a Trojan with few and innocuous permissions, that can extract a small amount of targeted private information from the audio sensor of the phone. Using targeted profiles for context-aware analysis, Soundcomber intelligently “pulls out” sensitive data such as credit card and PIN numbers from both toneand speech-based interaction with phone menu systems. Soundcomber performs efficient, stealthy local extraction, thereby greatly reducing the communication cost for delivering stolen data. Soundcomber automatically infers the destination phone number by analyzing audio, circumvents known security defenses, and conveys information remotely without direct network access. We also design and implement a defensive architecture that foils Soundcomber, identify new covert channels specific to smartphones, and provide a video demonstration" ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
Cascade approaches have been employed across a variety of computer vision domains: e.g., cascaded pose regression progressively refines a pose estimate learned from supervised training data @cite_32 , and cascaded classifiers speed up object detection @cite_37 .
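The attentional idea behind cascaded classifiers can be sketched in a few lines: cheap stages reject most candidate windows early, so expensive stages only run on promising ones. The stage scores and thresholds below are placeholders for illustration, not any particular detector.

```python
def make_stage(score_fn, threshold):
    """Build one cascade stage: accept a window iff its score clears the threshold."""
    def stage(window):
        return score_fn(window) >= threshold
    return stage

def cascade_detect(windows, stages):
    """Return windows that survive every stage; all() short-circuits on the
    first rejection, so later (more expensive) stages are rarely evaluated."""
    return [w for w in windows if all(stage(w) for stage in stages)]

# Toy usage: "windows" are just precomputed feature dicts here.
stages = [
    make_stage(lambda w: w["edge_energy"], 0.2),    # very cheap test
    make_stage(lambda w: w["texture_score"], 0.5),  # more discriminative
    make_stage(lambda w: w["cnn_score"], 0.9),      # most expensive, runs last
]
windows = [
    {"edge_energy": 0.1, "texture_score": 0.9, "cnn_score": 0.95},  # rejected early
    {"edge_energy": 0.8, "texture_score": 0.7, "cnn_score": 0.92},  # survives
]
print(cascade_detect(windows, stages))
```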
{ "cite_N": [ "@cite_37", "@cite_32" ], "mid": [ "2147474239", "2164598857", "2100807570", "2473640056" ], "abstract": [ "We describe a cascaded method for object detection. This approach uses a novel organization of the first cascade stage called \"feature-centric\" evaluation which re-uses feature evaluations across multiple candidate windows. We minimize the cost of this evaluation through several simplifications: (1) localized lighting normalization, (2) representation of the classifier as an additive model and (3) discrete-valued features. Such a method also incorporates a unique feature representation. The early stages in the cascade use simple fast feature evaluations and the later stages use more complex discriminative features. In particular, we propose features based on sparse coding and ordinal relationships among filter responses. This combination of cascaded feature-centric evaluation with features of increasing complexity achieves both computational efficiency and accuracy. We describe object detection experiments on ten objects including faces and automobiles. These results include 97 recognition at equal error rate on the UIUC image database for car detection.", "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. 
First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
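A minimal sketch of the recursive cascading described in this abstract, assuming placeholder base_network and warp functions (an illustration of the idea, not the authors' released code): each cascade predicts a progressive deformation for the current warped image, which is warped again before the next cascade.

def recursive_cascade(moving, fixed, base_network, warp, n_cascades):
    # base_network(warped, fixed) -> flow field; warp(image, flow) -> warped image
    warped = moving
    flows = []
    for _ in range(n_cascades):
        flow = base_network(warped, fixed)  # progressive deformation for the current warped image
        warped = warp(warped, flow)         # the moving image is warped successively
        flows.append(flow)
    return warped, flows                    # final aligned image plus per-cascade flows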
Deep learning also benefits from cascade architectures. For example, the deep deformation network @cite_4 cascades two stages and predicts a deformation for landmark localization. Other applications include object detection @cite_9 , semantic segmentation @cite_47 , and image super-resolution @cite_51 . Several works address medical images specifically, e.g., 3D image reconstruction for MRIs @cite_42 @cite_10 , liver segmentation @cite_44 , and mitosis detection @cite_45 . Note that these works usually propose shallow, non-recursive network cascades.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_9", "@cite_42", "@cite_44", "@cite_45", "@cite_47", "@cite_51" ], "mid": [ "2952074561", "2788295695", "2770205545", "2518965973" ], "abstract": [ "We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.", "In this paper, we propose a novel approach for efficient training of deep neural networks in a bottom-up fashion using a layered structure. Our algorithm, which we refer to as deep cascade learning, is motivated by the cascade correlation approach of Fahlman and Lebiere, who introduced it in the context of perceptrons. We demonstrate our algorithm on networks of convolutional layers, though its applicability is more general. Such training of deep networks in a cascade directly circumvents the well-known vanishing gradient problem by ensuring that the output is always adjacent to the layer being trained. We present empirical evaluations comparing our deep cascade training with standard end–end training using back propagation of two convolutional neural network architectures on benchmark image classification tasks (CIFAR-10 and CIFAR-100). We then investigate the features learned by the approach and find that better, domain-specific, representations are learned in early layers when compared to what is learned in end–end training. This is partially attributable to the vanishing gradient problem that inhibits early layer filters to change significantly from their initial settings. While both networks perform similarly overall, recognition accuracy increases progressively with each added layer, with discriminative features learned in every stage of the network, whereas in end–end training, no such systematic feature representation was observed. We also show that such cascade training has significant computational and memory advantages over end–end training, and can be used as a pretraining algorithm to obtain a better performance.", "Recovering images from undersampled linear measurements typically leads to an ill-posed linear inverse problem, that asks for proper statistical priors. 
Building effective priors is however challenged by the low train and test overhead dictated by real-time tasks; and the need for retrieving visually \"plausible\" and physically \"feasible\" images with minimal hallucination. To cope with these challenges, we design a cascaded network architecture that unrolls the proximal gradient iterations by permeating benefits from generative residual networks (ResNet) to modeling the proximal operator. A mixture of pixel-wise and perceptual costs is then deployed to train proximals. The overall architecture resembles back-and-forth projection onto the intersection of feasible and plausible images. Extensive computational experiments are examined for a global task of reconstructing MR images of pediatric patients, and a more local task of superresolving CelebA faces, that are insightful to design efficient architectures. Our observations indicate that for MRI reconstruction, a recurrent ResNet with a single residual block effectively learns the proximal. This simple architecture appears to significantly outperform the alternative deep ResNet architecture by 2dB SNR, and the conventional compressed-sensing MRI by 4dB SNR with 100x faster inference. For image superresolution, our preliminary results indicate that modeling the denoising proximal demands deep ResNets.", "This paper is on human pose estimation using Convolutional Neural Networks. Our main contribution is a CNN cascaded architecture specifically designed for learning part relationships and spatial context, and robustly inferring pose even for the case of severe part occlusions. To this end, we propose a detection-followed-by-regression CNN cascade. The first part of our cascade outputs part detection heatmaps and the second part performs regression on these heatmaps. The benefits of the proposed architecture are multi-fold: It guides the network where to focus in the image and effectively encodes part constraints and context. More importantly, it can effectively cope with occlusions because part detection heatmaps for occluded parts provide low confidence scores which subsequently guide the regression part of our network to rely on contextual information in order to predict the location of these parts. Additionally, we show that the proposed cascade is flexible enough to readily allow the integration of various CNN architectures for both detection and regression, including recent ones based on residual learning. Finally, we illustrate that our cascade achieves top performance on the MPII and LSP data sets. Code can be downloaded from http: www.cs.nott.ac.uk psxab5 ." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
With respect to registration, traditional algorithms commonly optimize an energy function in an iterative manner @cite_39 @cite_53 @cite_23 @cite_2 @cite_33 @cite_13 @cite_54 @cite_56 . These methods are generally recursive as well, i.e., at every iteration a similarly functioning alignment is applied to the current warped image. Iterative Closest Point (ICP) is an iterative, recursive approach for registering point clouds @cite_18 @cite_31 : at each iteration, the closest pairs of points are matched and a rigid transformation that minimizes their differences is solved for. Most traditional deformable image registration algorithms work in a similar, though much more complex, way. Symmetric normalization (SyN) @cite_23 maximizes cross-correlation within the space of diffeomorphic maps over the iterations. Optimizing free-form deformations parameterized by B-splines @cite_17 is another standard approach.
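The matching-then-solving iteration of ICP mentioned above can be sketched in a few lines of Python (numpy/scipy; simplified, with no convergence test or outlier rejection, and not tied to any particular cited implementation):

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(moving, fixed, n_iters=20):
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(n_iters):
        _, idx = tree.query(current)          # match each point to its closest fixed point
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t           # apply the rigid update to the current estimate
    return current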
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_53", "@cite_54", "@cite_39", "@cite_56", "@cite_23", "@cite_2", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2109323956", "2064358676", "2610614679", "2004312117" ], "abstract": [ "We propose a novel algorithm to register multiple 3D point sets within a common reference frame using a manifold optimization approach. The point sets are obtained with multiple laser scanners or a mobile scanner. Unlike most prior algorithms, our approach performs an explicit optimization on the manifold of rotations, allowing us to formulate the registration problem as an unconstrained minimization on a constrained manifold. This approach exploits the Lie group structure of SO 3 and the simple representation of its associated Lie algebra so 3 in terms of R3. Our contributions are threefold. We present a new analytic method based on singular value decompositions that yields a closed-form solution for simultaneous multiview registration in the noise-free scenario. Secondly, we use this method to derive a good initial estimate of a solution in the noise-free case. This initialization step may be of use in any general iterative scheme. Finally, we present an iterative scheme based on Newton's method on SO 3 that has locally quadratic convergence. We demonstrate the efficacy of our scheme on scan data taken both from the Digital Michelangelo project and from scans extracted from models, and compare it to some of the other well known schemes for multiview registration. In all cases, our algorithm converges much faster than the other approaches, (in some cases orders of magnitude faster), and generates consistently higher quality registrations.", "In this paper, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. The main properties of our method are noise robustness, outlier resistance and global optimal alignment. The problem of registering two point clouds is converted to a minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a global optimal registration without the need of any initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex shaped search spaces like, e.g., the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space in disjoint equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets.", "In this paper, we present a robust global approach for point cloud registration from uniformly sampled points. Based on eigenvalues and normals computed from multiple scales, we design fast descriptors to extract local structures of these points. The eigenvalue-based descriptor is effective at finding seed matches with low precision using nearest neighbor search. Generally, recovering the transformation from matches with low precision is rather challenging. 
Therefore, we introduce a mechanism named correspondence propagation to aggregate each seed match into a set of numerous matches. With these sets of matches, multiple transformations between point clouds are computed. A quality function formulated from distance errors is used to identify the best transformation and fulfill a coarse alignment of the point clouds. Finally, we refine the alignment result with the trimmed iterative closest point algorithm. The proposed approach can be applied to register point clouds with significant or limited overlaps and small or large transformations. More encouragingly, it is rather efficient and very robust to noise. A comparison to traditional descriptor-based methods and other global algorithms demonstrates the fine performance of the proposed approach. We also show its promising application in large-scale reconstruction with the scans of two real scenes. In addition, the proposed approach can be used to register low-resolution point clouds captured by Kinect as well.", "Abstract This paper introduces a new method of registering point sets. The registration error is directly minimized using general-purpose non-linear optimization (the Levenberg–Marquardt algorithm). The surprising conclusion of the paper is that this technique is comparable in speed to the special-purpose Iterated Closest Point algorithm, which is most commonly used for this task. Because the routine directly minimizes an energy function, it is easy to extend it to incorporate robust estimation via a Huber kernel, yielding a basin of convergence that is many times wider than existing techniques. Finally, we introduce a data structure for the minimization based on the chamfer distance transform, which yields an algorithm that is both faster and more robust than previously described methods." ] }
1907.12353
2966108228
We present recursive cascaded networks, a general architecture that enables learning deep cascades, for deformable image registration. The proposed architecture is simple in design and can be built on any base network. The moving image is warped successively by each cascade and finally aligned to the fixed image; this procedure is recursive in a way that every cascade learns to perform a progressive deformation for the current warped image. The entire system is end-to-end and jointly trained in an unsupervised manner. In addition, enabled by the recursive architecture, one cascade can be iteratively applied for multiple times during testing, which approaches a better fit between each of the image pairs. We evaluate our method on 3D medical images, where deformable registration is most commonly applied. We demonstrate that recursive cascaded networks achieve consistent, significant gains and outperform state-of-the-art methods. The performance reveals an increasing trend as long as more cascades are trained, while the limit is not observed. Our code will be made publicly available.
Learning-based methods have been presented recently. Supervised methods require labeled data, which demands considerable effort and can hardly meet realistic demands, resulting in limited performance @cite_55 @cite_15 @cite_38 @cite_57 . Unsupervised methods have been proposed to address this problem. Several initial works show the feasibility of unsupervised learning @cite_35 @cite_43 @cite_1 @cite_49 , among which DLIR @cite_43 performs on par with the B-spline method implemented in SimpleElastix @cite_29 (a multi-language extension of Elastix @cite_0 , which is selected as one of our baseline methods). VoxelMorph @cite_3 and VTN @cite_16 achieve better performance by predicting a dense flow field using deconvolutional layers @cite_21 , whereas DLIR only predicts a sparse displacement grid interpolated by a third-order B-spline kernel. VoxelMorph is evaluated only on brain MRI datasets @cite_3 @cite_41 , and later work shows deficiencies on other datasets such as liver CT scans @cite_16 . Additionally, VTN proposes an initial convolutional network that performs an affine transformation before predicting deformation fields, replacing the traditional affine stage and leading to a truly end-to-end framework.
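The unsupervised objective shared by these methods can be summarized with the following simplified numpy sketch (conceptual only; VoxelMorph, VTN, and DLIR differ in the similarity metric, the regularizer, and the transformation model): an image dissimilarity term on the warped moving image plus a smoothness penalty on the predicted flow.

import numpy as np

def unsupervised_registration_loss(warped, fixed, flow, smooth_weight=1.0):
    dissimilarity = np.mean((warped - fixed) ** 2)        # MSE here; (local) correlation is also common
    gradients = np.gradient(flow)                          # finite-difference gradients along each axis
    smoothness = sum(np.mean(g ** 2) for g in gradients)   # discourage non-smooth deformation fields
    return dissimilarity + smooth_weight * smoothness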
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_41", "@cite_55", "@cite_29", "@cite_1", "@cite_21", "@cite_3", "@cite_57", "@cite_43", "@cite_0", "@cite_49", "@cite_15", "@cite_16" ], "mid": [ "2785325870", "2962742544", "2153378748", "2891179298" ], "abstract": [ "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. 
For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification.", "Feedforward multilayer networks trained by supervised learning have recently demonstrated state of the art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks. For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuity from 3d electron microscopy data show that these \"Deep And Wide Multiscale Recursive\" (DAWMR) networks lead to new levels of image labeling performance. The highest performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels ( @math ) to make a prediction at each image location. We present an associated open source software package that enables the simple and flexible creation of DAWMR networks.", "We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using Unet trained with synthesized and limited number of original MRIs. We introduced a novel target specific loss, called tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66 computed using Dice Score Coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74 while the same method trained in semi-supervised setting produced the best accuracy of 0.80 on test. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets." ] }
1907.12212
2965633139
This paper is concerned with voting processes on graphs where each vertex holds one of two different opinions. In particular, we study the Best-of-two and the Best-of-three. Here at each synchronous and discrete time step, each vertex updates its opinion to match the majority among the opinions of two random neighbors and itself (the Best-of-two) or the opinions of three random neighbors (the Best-of-three). Previous studies have explored these processes on complete graphs and expander graphs, but we understand significantly less about their properties on graphs with more complicated structures. In this paper, we study the Best-of-two and the Best-of-three on the stochastic block model @math , which is a random graph consisting of two distinct Erdős–Rényi graphs @math joined by random edges with density @math . We obtain two main results. First, if @math and @math is a constant, we show that there is a phase transition in @math with threshold @math (specifically, @math for the Best-of-two, and @math for the Best-of-three). If @math , the process reaches consensus within @math steps for any initial opinion configuration with a bias of @math . By contrast, if @math , we show that, for any initial opinion configuration, the process reaches consensus within @math steps. To the best of our knowledge, this is the first result concerning multiple-choice voting for arbitrary initial opinion configurations on non-complete graphs.
Other studies have focused on voting processes with more general updating rules. Cooper and Rivera @cite_5 studied the linear voting model, whose updating rule is characterized by a set of @math binary matrices. This model covers the synchronous pull and the asynchronous push-pull voting processes, but it does not cover the Best-of-two and the Best-of-three. Schoenebeck and Yu @cite_0 studied asynchronous voting processes whose updating functions are majority-like (including the asynchronous Best-of- @math voting processes). They gave upper bounds on the consensus times of such models on dense Erdős–Rényi random graphs using a potential function technique.
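For intuition, the synchronous Best-of-three dynamics on an arbitrary graph can be simulated with a toy Python snippet like the following (illustrative only; each vertex samples three random neighbors with replacement and adopts their majority opinion):

import random

def best_of_three_step(adj, opinions):
    # adj: dict vertex -> list of neighbors; opinions: dict vertex -> 0 or 1
    new_opinions = {}
    for v, nbrs in adj.items():
        sampled = [opinions[random.choice(nbrs)] for _ in range(3)]   # three random neighbors
        new_opinions[v] = 1 if sum(sampled) >= 2 else 0               # adopt their majority
    return new_opinions

def run_until_consensus(adj, opinions, max_steps=100_000):
    for step in range(max_steps):
        if len(set(opinions.values())) == 1:
            return step, opinions                                     # consensus reached
        opinions = best_of_three_step(adj, opinions)
    return max_steps, opinions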
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2533034710", "1494224459", "1485127598", "2237112965" ], "abstract": [ "We study voting models on graphs. In the beginning, the vertices of a given graph have some initial opinion. Over time, the opinions on the vertices change by interactions between graph neighbours. Under suitable conditions the system evolves to a state in which all vertices have the same opinion. In this work, we consider a new model of voting, called the Linear Voting Model. This model can be seen as a generalization of several models of voting, including among others, pull voting and push voting. One advantage of our model is that, even though it is very general, it has a rich structure making the analysis tractable. In particular we are able to solve the basic question about voting, the probability that certain opinion wins the poll, and furthermore, given appropriate conditions, we are able to bound the expected time until some opinion wins.", "We introduce and study the reverse voter model, a dynamics for spin variables similar to the well‐known voter dynamics. The difference is in the way neighbors influence each other: once a node is selected and one among its neighbors chosen, the neighbor is made equal to the selected node, while in the usual voter dynamics the update goes in the opposite direction. The reverse voter dynamics is studied analytically, showing that on networks with degree distribution decaying as k−v, the time to reach consensus is linear in the system size N for all v > 2. The consensus time for link‐update voter dynamics is computed as well. We verify the results numerically on a class of uncorrelated scale‐free graphs.", "We study convergence properties of iterative voting procedures. Such procedures are defined by a voting rule and a (restricted) iterative process, where at each step one agent can modify his vote towards a better outcome for himself. It is already known that if the iteration dynamics (the manner in which voters are allowed to modify their votes) are unrestricted, then the voting process may not converge. For most common voting rules this may be observed even under the best response dynamics limitation. It is therefore important to investigate whether and which natural restrictions on the dynamics of iterative voting procedures can guarantee convergence. To this end, we provide two general conditions on the dynamics based on iterative myopic improvements, each of which is sufficient for convergence. We then identify several classes of voting rules (including Positional Scoring Rules, Maximin, Copeland and Bucklin), along with their corresponding iterative processes, for which at least one of these conditions hold.", "Distributed voting is a fundamental topic in distributed computing. In the standard model of pull voting, at each step every vertex chooses a neighbour uniformly at random and adopts its opinion. The voting is completed when all vertices hold the same opinion. In the simplest case, each vertex initially holds one of two different opinions. This partitions the vertices into arbitrary sets A and B. For many graphs, including regular graphs and irrespective of their expansion properties, if both A and B are sufficiently large sets, then pull voting requires @math expected steps, where n is the number of vertices of the graph. In this paper we consider a related class of voting processes based on sampling two opinions. In the simplest case, every vertex v chooses two random neighbours at each step. 
If both these neighbours have the same opinion, then v adopts this opinion. Otherwise, v keeps its own opinion. Let G be a connected graph with n vertices and m edges. Let P be the transition matrix of a simple random walk on G with second largest eigenvalue @math . We show that if the initial imbalance in degree between the two opinions satisfies @math , then with high probability voting completes in @math steps, and the opinion with the larger initial degree wins. The condition that @math , or only a bound on the conductance of the graph is known, the sampling process can be modified so that voting still provably completes in @math steps with high probability. The modification uses two sampling based on probing to a fixed depth @math from any vertex. In its most general form our voting process allows vertices to bias their sampling of opinions among their neighbours to achieve a desired outcome. This is done by allocating weights to edges." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Various information visualization techniques have been applied to improve user interfaces for search. Some systems augment search result lists with additional small visualizations. For example, TileBars @cite_26 , INSYDER @cite_5 , and HotMap @cite_25 visualize query-document relationships as icons or glyphs alongside search results. Another approach is to visualize search results in a spatial layout where proximity represents similarity; InfoSky @cite_20 and IN-SPIRE @cite_19 are examples. FacetAtlas @cite_3 overlays additional heatmaps to visualize density. ProjSnippet @cite_34 visualizes text snippets in a 2-D layout. Many others cluster the search results and offer faceted navigation; FacetMap @cite_4 and ResultMap @cite_27 utilize treemap-style visualizations to represent facets. These systems may guide users well in exploring search results, but they are mostly based on static search queries. Our system goes beyond search-result exploration and offers interactive target (query) building.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "137863291", "2663075978", "2158943277", "2105397135" ], "abstract": [ "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.", "In this paper, we present the results of a user study on exploratory search activities in a social science digital library. We conducted a user study with 32 participants with a social sciences background—16 postdoctoral researchers and 16 students—who were asked to solve a task on searching related work to a given topic. The exploratory search task was performed in a 10-min time slot. The use of certain search activities is measured and compared to gaze data recorded with an eye tracking device. We use a novel tree graph representation to visualise the users’ search patterns and introduce a way to combine multiple search session trees. The tree graph representation is capable of creating one single tree for multiple users and identifying common search patterns. In addition, the information behaviour of students and postdoctoral researchers is being compared. 
The results show that search activities on the stratagem level are frequently utilised by both user groups. The most heavily used search activities were keyword search, followed by browsing through references and citations, and author searching. The eye tracking results showed an intense examination of documents metadata, especially on the level of citations and references. When comparing the group of students and postdoctoral researchers, we found significant differences regarding gaze data on the area of the journal name of the seed document. In general, we found a tendency of the postdoctoral researchers to examine the metadata records more intensively with regard to dwell time and the number of fixations. By creating combined session trees and deriving subtrees from those, we were able to identify common patterns like economic (explorative) and exhaustive (navigational) behaviour. Our results show that participants utilised multiple search strategies starting from the seed document, which means that they examined different paths to find related publications.", "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.", "In this paper, we have developed a novel framework called JustClick to enable personalized image recommendation via exploratory search from large-scale collections of Flickr images. First, a topic network is automatically generated to summarize large-scale collections of Flickr images at a semantic level. Hyperbolic visualization is further used to enable interactive navigation and exploration of the topic network, so that users can gain insights of large-scale image collections at the first glance, build up their mental query models interactively and specify their queries (i.e., image needs) more precisely by selecting the image topics on the topic network directly. Thus, our personalized query recommendation framework can effectively address both the problem of query formulation and the problem of vocabulary discrepancy and null returns. Second, a small set of most representative images are recommended for the given image topic according to their representativeness scores. Kernel principal component analysis and hyperbolic visualization are seamlessly integrated to organize and layout the recommended images (i.e., most representative images) according to their nonlinear visual similarity contexts, so that users can assess the relevance between the recommended images and their real query intentions interactively. 
An interactive interface is implemented to allow users to express their time-varying query intentions precisely and to direct our JustClick system to more relevant images according to their personal preferences. Our experiments on large-scale collections of Flickr images show very positive results." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Although topic summarization has been studied for a long time, discovering a topic summary of a specific aspect (or target) is a relatively new research problem. TTM @cite_12 is the first work to propose the term 'targeted topic modeling'. This work proposes a probabilistic model that is a variation of latent Dirichlet allocation (LDA) @cite_10 . Given a static keyword list defining a particular aspect, the model identifies topic keywords related to this aspect. @cite_23 identifies a list of target words from review data and disentangles aspect words and opinion words in the list. APSUM @cite_24 assigns aspects to each word in a generative process. Since the aforementioned models generate topic keywords based on a static keyword list, a dynamic model is desirable. An automatic method to generate keywords dynamically has been proposed @cite_41 . This method focuses on the online environment of Twitter and automatically generates keywords based on a time-evolving word graph.
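One simple way to make keyword-seeded targeting concrete is the following sklearn-based sketch (an assumption-laden illustration, not the implementation of TTM or of the system described in this paper): documents are scored against a seed keyword list, the relevant subset is retained, and NMF extracts topics from that subset only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def targeted_topics(docs, seed_keywords, n_topics=5, min_score=0.0):
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)                       # document-term matrix
    vocab = vectorizer.vocabulary_
    seed_idx = [vocab[w] for w in seed_keywords if w in vocab]
    scores = X[:, seed_idx].sum(axis=1).A.ravel()            # relevance to the seed keywords
    keep = [i for i, s in enumerate(scores) if s > min_score]
    W = NMF(n_components=n_topics, init="nndsvda").fit_transform(X[keep])
    return keep, W                                           # indices of kept docs and their topic weights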
{ "cite_N": [ "@cite_41", "@cite_24", "@cite_23", "@cite_10", "@cite_12" ], "mid": [ "2251734881", "1714665356", "2104483432", "2611254175" ], "abstract": [ "Topic Model such as Latent Dirichlet Allocation(LDA) makes assumption that topic assignment of different words are conditionally independent. In this paper, we propose a new model Extended Global Topic Random Field (EGTRF) to model non-linear dependencies between words. Specifically, we parse sentences into dependency trees and represent them as a graph, and assume the topic assignment of a word is influenced by its adjacent words and distance-2 words. Word similarity information learned from large corpus is incorporated to enhance word topic assignment. Parameters are estimated efficiently by variational inference and experimental results on two datasets show EGTRF achieves lower perplexity and higher log predictive probability.", "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.", "Statistical approaches to automatic text summarization based on term frequency continue to perform on par with more complex summarization methods. To compute useful frequency statistics, however, the semantically important words must be separated from the low-content function words. The standard approach of using an a priori stopword list tends to result in both undercoverage, where syntactical words are seen as semantically relevant, and overcoverage, where words related to content are ignored. We present a generative probabilistic modeling approach to building content distributions for use with statistical multi-document summarization where the syntax words are learned directly from the data with a Hidden Markov Model and are thereby deemphasized in the term frequency statistics. This approach is compared to both a stopword-list and POS-tagging approach and our method demonstrates improved coverage on the DUC 2006 and TAC 2010 datasets using the ROUGE metric.", "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. 
The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28 (absolute) in ROUGE-L scores." ] }
1907.12079
2964825438
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Interactive topic models allow users to steer the topics to improve the topic modeling results. Various topic steering interactions such as adding, editing, deleting, splitting, and merging topics have been introduced @cite_32 @cite_18 @cite_16 @cite_42 @cite_11 @cite_28 @cite_37 @cite_33 @cite_15 . These interactions can be applied to refine relevant topics and remove irrelevant topics to identify targeted topics when most of the data items are relevant and only a small portion is irrelevant. However, in our large-scale search space reduction setting, a more tailored approach is needed. In this paper, we propose interactive targeted topic modeling to steer the topics to discover the target-relevant topics and documents.
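As one example of such a steering interaction, merging two topics in an NMF-style factorization X ≈ WH can be approximated by summing the corresponding factors, typically followed by re-running the factorization with the merged topic as initialization (a heuristic sketch, not any specific system's implementation):

import numpy as np

def merge_topics(W, H, i, j):
    # Combine topic j into topic i, then drop topic j; a refit usually follows.
    W, H = W.copy(), H.copy()
    W[:, i] += W[:, j]          # document-topic weights of the merged topic
    H[i, :] += H[j, :]          # word distribution of the merged topic
    return np.delete(W, j, axis=1), np.delete(H, j, axis=0)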
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_33", "@cite_28", "@cite_42", "@cite_32", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2171319841", "2106035193", "2169681319", "1995866178" ], "abstract": [ "Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.", "Topic modeling has been commonly used to discover topics from document collections. However, unsupervised models can generate many incoherent topics. To address this problem, several knowledge-based topic models have been proposed to incorporate prior domain knowledge from the user. This work advances this research much further and shows that without any user input, we can mine the prior knowledge automatically and dynamically from topics already found from a large number of domains. This paper first proposes a novel method to mine such prior knowledge dynamically in the modeling process, and then a new topic model to use the knowledge to guide the model inference. What is also interesting is that this approach offers a novel lifelong learning algorithm for topic discovery, which exploits the big (past) data and knowledge gained from such data for subsequent modeling. Our experimental results using product reviews from 50 domains demonstrate the effectiveness of the proposed approach.", "Topic modeling provides a powerful way to analyze the content of a collection of documents. It has become a popular tool in many research areas, such as text mining, information retrieval, natural language processing, and other related fields. In real-world applications, however, the usefulness of topic modeling is limited due to scalability issues. Scaling to larger document collections via parallelization is an active area of research, but most solutions require drastic steps, such as vastly reducing input vocabulary. In this article we introduce Regularized Latent Semantic Indexing (RLSI)---including a batch version and an online version, referred to as batch RLSI and online RLSI, respectively---to scale up topic modeling. Batch RLSI and online RLSI are as effective as existing topic modeling techniques and can scale to larger datasets without reducing input vocabulary. Moreover, online RLSI can be applied to stream data and can capture the dynamic evolution of topics. Both versions of RLSI formalize topic modeling as a problem of minimizing a quadratic loss function regularized by e1 and or e2 norm. This formulation allows the learning process to be decomposed into multiple suboptimization problems which can be optimized in parallel, for example, via MapReduce. 
We particularly propose adopting e1 norm on topics and e2 norm on document representations to create a model with compact and readable topics and which is useful for retrieval. In learning, batch RLSI processes all the documents in the collection as a whole, while online RLSI processes the documents in the collection one by one. We also prove the convergence of the learning of online RLSI. Relevance ranking experiments on three TREC datasets show that batch RLSI and online RLSI perform better than LSI, PLSI, LDA, and NMF, and the improvements are sometimes statistically significant. Experiments on a Web dataset containing about 1.6 million documents and 7 million terms, demonstrate a similar boost in performance.", "Topic modeling has been widely used to mine topics from documents. However, a key weakness of topic modeling is that it needs a large amount of data (e.g., thousands of documents) to provide reliable statistics to generate coherent topics. However, in practice, many document collections do not have so many documents. Given a small number of documents, the classic topic model LDA generates very poor topics. Even with a large volume of data, unsupervised learning of topic models can still produce unsatisfactory results. In recently years, knowledge-based topic models have been proposed, which ask human users to provide some prior domain knowledge to guide the model to produce better topics. Our research takes a radically different approach. We propose to learn as humans do, i.e., retaining the results learned in the past and using them to help future learning. When faced with a new task, we first mine some reliable (prior) knowledge from the past learning modeling results and then use it to guide the model inference to generate more coherent topics. This approach is possible because of the big data readily available on the Web. The proposed algorithm mines two forms of knowledge: must-link (meaning that two words should be in the same topic) and cannot-link (meaning that two words should not be in the same topic). It also deals with two problems of the automatically mined knowledge, i.e., wrong knowledge and knowledge transitivity. Experimental results using review documents from 100 product domains show that the proposed approach makes dramatic improvements over state-of-the-art baselines." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Our work relates to several areas of research in student modeling. Several approaches within the educational data mining community have used computational methods for sequencing students' learning items. Pardos and Heffernan @cite_39 infer an order over questions by predicting students' skill levels over action pairs using Bayesian knowledge tracing. They show the efficacy of this approach on a test set comprising random sequences of three questions as well as on simulated data. This approach explicitly considers each possible order sequence and therefore does not scale to a large number of sequences, as in the student ranking problem we consider in this paper.
{ "cite_N": [ "@cite_39" ], "mid": [ "2129373702", "1597703949", "160448112", "2339576062" ], "abstract": [ "Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items produce the most learning we propose a Bayesian method using similar permutation analysis techniques to determine if item learning is context sensitive and if so which orderings of questions produce the most learning. We confine our analysis to random sequences with three questions. The method identifies question ordering rules such as, question A should go before B, which are statistically reliably beneficial to learning. Real tutor data from five random sequence problem sets were analyzed. Statistically reliable orderings of questions were found in two of the five real data problem sets. A simulation consisting of 140 experiments was run to validate the method's accuracy and test its reliability. The method succeeded in finding 43 of the underlying item order effects with a 6 false positive rate using a p value threshold of <= 0.05. Using this method, ITS researchers can gain valuable knowledge about their problem sets and feasibly let the ITS automatically identify item order effects and optimize student learning by restricting assigned sequences to those prescribed as most beneficial to learning.", "Modeling students' knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students' knowledge is Corbett and Anderson's [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem:the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model's conceptual meaning (such as a student being more likely to get a correct answer if he she does not know a skill than if he she does).We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck's solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.", "Student modeling is an important component of ITS research because it can help guide the behavior of a running tutor and help researchers understand how students learn. Due to its predictive accuracy, interpretability and ability to infer student knowledge, Corbett & Anderson’s Bayesian Knowledge Tracing is one of the most popular student models. However, researchers have discovered problems with some of the most popular methods of fitting it. 
These problems include: multiple sets of highly dissimilar parameters predicting the data equally well (identifiability), local minima, degenerate parameters, and computational cost during fitting. Some researchers have proposed new fitting procedures to combat these problems, but are more complex and not completely successful at eliminating the problems they set out to prevent. We instead fit parameters by estimating the mostly likely point that each student learned the skill, developing a new method that avoids the above problems while achieving similar predictive accuracy.", "Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational data to the individual needs of each student. This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It is based on using knowledge tracing to model students' skill acquisition over time, and to select questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasoning about multiple attempts to solve problems, and integrating item difficulty. This model is shown to outperform other BKT models that do not reason about (or reason about some but not all) of these features. The model was incorporated into a sequencing algorithm and deployed in two classes in different schools where it was compared to a baseline sequencing algorithm that was designed by pedagogical experts. In both classes, students using the BKT sequencing approach solved more difficult questions and attributed higher performance than did students who used the expert-based approach. Students were also more engaged using the BKT approach, as determined by their interaction time and number of log-ins to the system, as well as their reported opinion. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to students that will meet their individual learning needs." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Multiple researchers have used Bayesian knowledge tracing as a way to infer students' skill acquisition (i.e., mastery level) over time given their performance levels on different question sequences @cite_25. These researchers reason about students' prior knowledge of skills and also account for slips and guessing on test problems. The models are trained on large data sets from multiple students using machine learning algorithms that account for latent variables @cite_18 @cite_17. We solve a different problem --- using other students' performance to personalize the ranking over test questions. In addition, these methods measure students' performance dichotomously (i.e., success or failure), whereas we reason about additional features such as students' grades and the number of attempts to solve the question. We intend to infer students' skill levels to improve the ranking prediction in future work.
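To make the knowledge-tracing machinery referenced above concrete, here is a minimal sketch of the standard Bayesian Knowledge Tracing update for a single skill; the parameter values (prior, slip, guess, and learn rates) are illustrative assumptions and are not taken from any of the cited papers.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayes update on mastery, then a learning transition."""
    if correct:
        posterior = p_know * (1 - p_slip) / (p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # After this opportunity the student may acquire the skill with probability p_learn.
    return posterior + (1 - posterior) * p_learn

# Trace mastery over a dichotomous (correct/incorrect) answer sequence, starting from P(L0) = 0.3.
p = 0.3
for observed_correct in [True, False, True, True]:
    p = bkt_update(p, observed_correct)
    print(round(p, 3))

The dichotomous observation model in this sketch is exactly the limitation noted above: it has no natural place for partial grades or retry counts.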
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_17" ], "mid": [ "1737233464", "1597703949", "160448112", "1897162008" ], "abstract": [ "Bayesian knowledge tracing has been used widely to model student learning. However, the name “Bayesian knowledge tracing” has been applied to two related, but distinct, models: The first is the Bayesian knowledge tracing Markov chain which predicts the student-averaged probability of a correct application of a skill. We present an analytical solution to this model and show that it is a function of three parameters and has the functional form of an exponential. The second form is the Bayesian knowledge tracing hidden Markov model which can use the individual student's performance at each opportunity to apply a skill to update the conditional probability that the student has learned that skill. We use a fixed point analysis to study solutions of this model and find a range of parameters where it has the desired behavior. *Revised version, Feb 2017", "Modeling students' knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students' knowledge is Corbett and Anderson's [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem:the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model's conceptual meaning (such as a student being more likely to get a correct answer if he she does not know a skill than if he she does).We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck's solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.", "Student modeling is an important component of ITS research because it can help guide the behavior of a running tutor and help researchers understand how students learn. Due to its predictive accuracy, interpretability and ability to infer student knowledge, Corbett & Anderson’s Bayesian Knowledge Tracing is one of the most popular student models. However, researchers have discovered problems with some of the most popular methods of fitting it. These problems include: multiple sets of highly dissimilar parameters predicting the data equally well (identifiability), local minima, degenerate parameters, and computational cost during fitting. Some researchers have proposed new fitting procedures to combat these problems, but are more complex and not completely successful at eliminating the problems they set out to prevent. We instead fit parameters by estimating the mostly likely point that each student learned the skill, developing a new method that avoids the above problems while achieving similar predictive accuracy.", "Knowledge Tracing is the de-facto standard for inferring student knowledge from performance data. 
Unfortunately, it does not allow modeling the feature-rich data that is now possible to collect in modern digital learning environments. Because of this, many ad hoc Knowledge Tracing variants have been proposed to model a specific feature of interest. For example, variants have studied the effect of students’ individual characteristics, the effect of help in a tutor, and subskills. These ad hoc models are successful for their own specific purpose, but are specified to only model a single specific feature. We present FAST (Feature Aware Student knowledge Tracing), an efficient, novel method that allows integrating general features into Knowledge Tracing. We demonstrate FAST’s flexibility with three examples of feature sets that are relevant to a wide audience. We use features in FAST to model (i) multiple subskill tracing, (ii) a temporal Item Response Model implementation, and (iii) expert knowledge. We present empirical results using data collected from an Intelligent Tutoring System. We report that using features can improve up to 25 in classification performance of the task of predicting student performance. Moreover, for fitting and inferencing, FAST can be 300 times faster than models created in BNT-SM, a toolkit that facilitates the creation of ad hoc Knowledge Tracing variants." ] }
1907.12047
2956646654
Abstract The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student’s predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
Approaches based on recommendation systems are increasingly being used in e-learning to predict students' scores and to personalize educational content. We mention a few examples below and refer the reader to the surveys by @cite_30 and @cite_37 for more details. Collaborative filtering (CF) was previously used in the educational domain for predicting students' performance. Toscher and Jahrer @cite_26 use an ensemble of CF algorithms to predict performance for items in the KDD 2010 educational challenge. Berger et al. @cite_3 use a model-based approach for predicting accuracy levels of students' performance and skill levels on real and simulated data sets. They also formalize a relationship between CF and Item Response Theory methods and demonstrate this relationship empirically. @cite_28 use matrix factorization for task sequencing in a large commercial Intelligent Tutoring System, showing improved adaptivity compared to a baseline sequencer. Finally, Loll and Pinkwart @cite_1 use CF as a diagnostic tool for knowledge test questions as well as for more exploratory, ill-defined tasks. None of these approaches ranked questions according to their personal difficulty level for specific students.
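For readers unfamiliar with the collaborative-filtering baselines surveyed above, the following is a minimal, generic matrix-factorization sketch that predicts a student's grade on an unseen question from (student, question, grade) triples. It illustrates the general technique only, not the model used by any of the cited systems, and all sizes and hyperparameters are arbitrary.

import numpy as np

def factorize(triples, n_students, n_questions, k=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Learn latent student and question factors from observed grades via SGD."""
    rng = np.random.default_rng(seed)
    S = rng.normal(scale=0.1, size=(n_students, k))   # student factors
    Q = rng.normal(scale=0.1, size=(n_questions, k))  # question factors
    for _ in range(epochs):
        for s, q, grade in triples:
            su, qu = S[s].copy(), Q[q].copy()
            err = grade - su @ qu                      # prediction error on this observation
            S[s] += lr * (err * qu - reg * su)
            Q[q] += lr * (err * su - reg * qu)
    return S, Q

# Toy usage: predict a grade (in [0, 1]) for student 0 on a question she has not answered yet.
observed = [(0, 0, 1.0), (0, 1, 0.4), (1, 0, 0.9), (1, 2, 0.8)]
S, Q = factorize(observed, n_students=2, n_questions=3)
print(S[0] @ Q[2])

EduRank itself aggregates difficulty rankings of similar students rather than predicting scores, so a score-prediction sketch like this corresponds to the baselines it is compared against, not to EduRank's own ranking step.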
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_28", "@cite_1", "@cite_3" ], "mid": [ "1569739730", "1994576156", "2116413942", "2038585576" ], "abstract": [ "We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a widely used subset of Item Response Theory (IRT) models which obtain as specific instances of the CF: the one-parameter logistic (Rasch) model, Birnbaum’s 2PL model, and Reckase’s multidimensional generalization M2PL. We find that IRT models perform well relative to generalized alternatives, and thus this method offers a fast and stable alternate approach to IRT parameter estimation. Using both real and simulated data we examine cases where oneor two-dimensional IRT models prevail and are not improved by increasing the number of features. Model selection is based on prediction accuracy of the CF, though it is shown to be consistent with factor analysis. In multidimensional cases the item parameterizations can be used in conjunction with cluster analysis to identify groups of items which measure different ability dimensions.", "A major challenge for collaborative filtering (CF) techniques in recommender systems is the data sparsity that is caused by missing and noisy ratings. This problem is even more serious for CF domains where the ratings are expressed numerically, e.g. as 5-star grades. We assume the 5-star ratings are unordered bins instead of ordinal relative preferences. We observe that, while we may lack the information in numerical ratings, we sometimes have additional auxiliary data in the form of binary ratings. This is especially true given that users can easily express themselves with their preferences expressed as likes or dislikes for items. In this paper, we explore how to use these binary auxiliary preference data to help reduce the impact of data sparsity for CF domains expressed in numerical ratings. We solve this problem by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes), to a target numerical rating matrix. In particular, our solution is to model both the numerical ratings and ratings expressed as like or dislike in a principled way. We present a novel framework of Transfer by Collective Factorization (TCF), in which we construct a shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over the previous bilinear method of collective matrix factorization is that we are able to capture the data-dependent effect when sharing the data-independent knowledge. This allows us to increase the overall quality of knowledge transfer. We present extensive experimental results to demonstrate the effectiveness of TCF at various sparsity levels, and show improvements of our approach as compared to several state-of-the-art methods.", "We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators mapping a set of \"users\" to a set of possibly desired \"objects\". In particular, several recent low-rank type matrix-completion methods for CF are shown to be special cases of our proposed framework. 
Unlike existing regularization-based CF, our approach can be used to incorporate additional information such as attributes of the users objects---a feature currently lacking in existing regularization-based CF approaches---using popular and well-known kernel methods. We provide novel representer theorems that we use to develop new estimation methods. We then provide learning algorithms based on low-rank decompositions and test them on a standard CF data set. The experiments indicate the advantages of generalizing the existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can be also seen as special cases of our proposed approach.", "Collaborative filtering (CF) has been widely employed within recommender systems to solve many real-world problems. Learning effective latent factors plays the most important role in collaborative filtering. Traditional CF methods based upon matrix factorization techniques learn the latent factors from the user-item ratings and suffer from the cold start problem as well as the sparsity problem. Some improved CF methods enrich the priors on the latent factors by incorporating side information as regularization. However, the learned latent factors may not be very effective due to the sparse nature of the ratings and the side information. To tackle this problem, we learn effective latent representations via deep learning. Deep learning models have emerged as very appealing in learning effective representations in many applications. In particular, we propose a general deep architecture for CF by integrating matrix factorization with deep feature learning. We provide a natural instantiations of our architecture by combining probabilistic matrix factorization with marginalized denoising stacked auto-encoders. The combined framework leads to a parsimonious fit over the latent features as indicated by its improved performance in comparison to prior state-of-art models over four large datasets for the tasks of movie book recommendation and response prediction." ] }
1907.12209
2966634313
Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance.
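A rough sketch of the virtual-normal construction described in this abstract is given below: depth maps are back-projected to 3D with a pinhole camera model, random point triplets are sampled, and the loss compares the unit normals of the triangles they span. The intrinsics (fx, fy, cx, cy) are placeholder values, degenerate (near-colinear) triplets are not filtered out, and this is an illustration of the idea rather than the authors' implementation.

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map of shape (H, W) to an (H*W, 3) point cloud with a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def virtual_normal_loss(d_pred, d_gt, fx=500.0, fy=500.0, cx=160.0, cy=120.0, n_triplets=1000, seed=0):
    """Mean L1 difference between unit normals of planes spanned by the same random point triplets."""
    rng = np.random.default_rng(seed)
    pts_pred = backproject(d_pred, fx, fy, cx, cy)
    pts_gt = backproject(d_gt, fx, fy, cx, cy)
    idx = rng.integers(0, pts_pred.shape[0], size=(n_triplets, 3))

    def normals(pts):
        a, b, c = pts[idx[:, 0]], pts[idx[:, 1]], pts[idx[:, 2]]
        n = np.cross(b - a, c - a)
        return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-8)

    return np.abs(normals(pts_pred) - normals(pts_gt)).sum(axis=1).mean()

Because each normal depends on three widely separated points, the penalty couples distant regions of the depth map, which is the "high-order geometric constraint" the abstract contrasts with purely pixel-wise losses.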
Depth prediction from images is a long-standing problem. Previous work can be divided into active methods and passive methods. The former use assistive optical information for prediction, such as coded patterns @cite_30, while the latter rely solely on the image content. Monocular depth prediction @cite_5 @cite_50 @cite_22 @cite_13 @cite_14 has been extensively studied recently. As only limited geometric information can be extracted directly from a monocular image, it is essentially an ill-posed problem. Recently, owing to the structural features learned by very deep convolutional neural networks such as ResNet @cite_12, various DCNN-based methods learn to predict depth from deep CNN features. Fu et al. @cite_16 proposed an encoder-decoder network, which extracts multi-scale features from the encoder and is trained in an end-to-end manner without iterative refinement. They achieved state-of-the-art performance on several datasets. Jiao et al. @cite_17 proposed an attention-driven loss, which incorporates semantic priors to improve prediction precision on datasets with unbalanced depth distributions.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_22", "@cite_50", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "1803059841", "2950619061", "2125416623", "2124907686" ], "abstract": [ "In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.", "We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions, etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. 
Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.", "We consider the problem of depth estimation from a single monocular image in this work. It is a challenging task as no reliable depth cues are available, e.g., stereo correspondences, motions etc. Previous efforts have been focusing on exploiting geometric priors or additional sources of information, with all using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) are setting new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimations can be naturally formulated into a continuous conditional random field (CRF) learning problem. Therefore, we in this paper present a deep convolutional neural field model for estimating depths from a single image, aiming to jointly explore the capacity of deep CNN and continuous CRF. Specifically, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. The proposed method can be used for depth estimations of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be analytically calculated, thus we can exactly solve the log-likelihood optimization. Moreover, solving the MAP problem for predicting depths of a new image is highly efficient as closed-form solutions exist. We experimentally demonstrate that the proposed method outperforms state-of-the-art depth estimation methods on both indoor and outdoor scene datasets.", "Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods." ] }
1907.12209
2966634313
Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance.
Most previous methods only adopted pixel-wise depth supervision to train a network. By contrast, Liu et al. @cite_19 combined a DCNN with a continuous conditional random field (CRF) to exploit the consistency of neighbouring pixels. The CRF establishes pair-wise constraints over local regions. Furthermore, several higher-order constraints have been investigated. Chen et al. @cite_33 applied generative adversarial training to lead the network to learn a context-aware, patch-level loss automatically. Note that most of these methods work directly on the depth map rather than in 3D space.
{ "cite_N": [ "@cite_19", "@cite_33" ], "mid": [ "2964288706", "1923697677", "2254177447", "2962872526" ], "abstract": [ "Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. 
This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.", "Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality." ] }
1907.12237
2964535354
This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art.
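The soft-argmax layer mentioned in this abstract converts a predicted heatmap into differentiable, normalized coordinates. A minimal sketch is shown below; the temperature beta and the heatmap size are illustrative assumptions, not values taken from the paper.

import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Expected (x, y) pixel coordinates under a softmax over the heatmap, normalized to [0, 1]."""
    h, w = heatmap.shape
    probs = np.exp(beta * (heatmap - heatmap.max()))
    probs /= probs.sum()
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return float((probs * xs).sum() / (w - 1)), float((probs * ys).sum() / (h - 1))

# A sharply peaked heatmap yields coordinates close to the location of its maximum.
hm = np.zeros((64, 64))
hm[40, 20] = 5.0
print(soft_argmax(hm))  # roughly (20/63, 40/63)

Because the output is an expectation rather than a hard index, gradients flow through it, which is what allows coordinates to be regressed directly instead of storing full-resolution heatmaps at the output.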
There are several methods focusing solely on ROI localization. Tiulpin et al. @cite_16 proposed a novel anatomical proposal method to localize the knee joint area. Antony et al. @cite_19 used fully convolutional networks for the same problem. Recently, Chen et al. @cite_14 proposed to use object detection methods to measure knee OA severity.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_16" ], "mid": [ "2604759322", "2521048164", "2159988341", "2951269226" ], "abstract": [ "This paper introduces a new approach to automatically quantify the severity of knee OA using X-ray images. Automatically quantifying knee OA severity involves two steps: first, automatically localizing the knee joints; next, classifying the localized knee joint images. We introduce a new approach to automatically detect the knee joints using a fully convolutional neural network (FCN). We train convolutional neural networks (CNN) from scratch to automatically quantify the knee OA severity optimizing a weighted ratio of two loss functions: categorical cross-entropy and mean-squared loss. This joint training further improves the overall quantification of knee OA severity, with the added benefit of naturally producing simultaneous multi-class classification and regression outputs. Two public datasets are used to evaluate our approach, the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST), with extremely promising results that outperform existing approaches.", "This paper proposes a new approach to automatically quantify the severity of knee osteoarthritis (OA) from radiographs using deep convolutional neural networks (CNN). Clinically, knee OA severity is assessed using Kellgren & Lawrence (KL) grades, a five point scale. Previous work on automatically predicting KL grades from radiograph images were based on training shallow classifiers using a variety of hand engineered features. We demonstrate that classification accuracy can be significantly improved using deep convolutional neural network models pre-trained on ImageNet and fine-tuned on knee OA images. Furthermore, we argue that it is more appropriate to assess the accuracy of automatic knee OA severity predictions using a continuous distance-based evaluation metric like mean squared error than it is to use classification accuracy. This leads to the formulation of the prediction of KL grades as a regression problem and further improves accuracy. Results on a dataset of X-ray images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable improvement over the current state-of-the-art.", "Summary Objective To determine whether computer-based analysis can detect features predictive of osteoarthritis (OA) development in radiographically normal knees. Method A systematic computer-aided image analysis method weighted neighbor distances using a compound hierarchy of algorithms representing morphology (WND-CHARM) was used to analyze pairs of weight-bearing knee X-rays. Initial X-rays were all scored as normal Kellgren–Lawrence (KL) grade 0, and on follow-up approximately 20 years later either developed OA (defined as KL grade=2) or remained normal. Results The computer-aided method predicted whether a knee would change from KL grade 0 to grade 3 with 72 accuracy ( P P Conclusion Radiographic features detectable using a computer-aided image analysis method can predict the future development of radiographic knee OA.", "Abstract Knee osteoarthritis (OA) is one major cause of activity limitation and physical disability in older adults. Early detection and intervention can help slow down the OA degeneration. Physicians’ grading based on visual inspection is subjective, varied across interpreters, and highly relied on their experience. 
In this paper, we successively apply two deep convolutional neural networks (CNN) to automatically measure the knee OA severity, as assessed by the Kellgren-Lawrence (KL) grading system. Firstly, considering the size of knee joints distributed in X-ray images with small variability, we detect knee joints using a customized one-stage YOLOv2 network. Secondly, we fine-tune the most popular CNN models, including variants of ResNet, VGG, and DenseNet as well as InceptionV3, to classify the detected knee joint images with a novel adjustable ordinal loss. To be specific, motivated by the ordinal nature of the knee KL grading task, we assign higher penalty to misclassification with larger distance between the predicted KL grade and the real KL grade. The baseline X-ray images from the Osteoarthritis Initiative (OAI) dataset are used for evaluation. On the knee joint detection, we achieve mean Jaccard index of 0.858 and recall of 92.2 under the Jaccard index threshold of 0.75. On the knee KL grading task, the fine-tuned VGG-19 model with the proposed ordinal loss obtains the best classification accuracy of 69.7 and mean absolute error (MAE) of 0.344. Both knee joint detection and knee KL grading achieve state-of-the-art performance. The code, dataset, and models are released at https: github.com PingjunChen KneeAnalysis ." ] }
1907.12237
2964535354
This paper addresses the challenge of localization of anatomical landmarks in knee X-ray images at different stages of osteoarthritis (OA). Landmark localization can be viewed as regression problem, where the landmark position is directly predicted by using the region of interest or even full-size images leading to large memory footprint, especially in case of high resolution medical images. In this work, we propose an efficient deep neural networks framework with an hourglass architecture utilizing a soft-argmax layer to directly predict normalized coordinates of the landmark points. We provide an extensive evaluation of different regularization techniques and various loss functions to understand their influence on the localization performance. Furthermore, we introduce the concept of transfer learning from low-budget annotations, and experimentally demonstrate that such approach is improving the accuracy of landmark localization. Compared to the prior methods, we validate our model on two datasets that are independent from the train data and assess the performance of the method for different stages of OA severity. The proposed approach demonstrates better generalization performance compared to the current state-of-the-art.
The proposed approach is related to regression-based methods for keypoint localization @cite_7. We utilize an hourglass network, an encoder-decoder model initially introduced for human pose estimation @cite_34, and address both the ROI and the landmark localization tasks. Several other studies in the medical imaging domain leveraged a similar approach by applying U-Net @cite_38 to the landmark localization problem @cite_37 @cite_0. However, encoder-decoder networks are computationally heavy during training because they regress a tensor of high-resolution heatmaps, which is challenging for medical images that are typically large. Notably, decreasing the image resolution can negatively impact the accuracy of landmark localization. In addition, most of the existing approaches use a refinement step, which further increases the computational burden. Nevertheless, hourglass CNNs are widely used in human pose estimation @cite_34 thanks to the possibility of lowering the resolution and the absence of precise ground truth in that setting.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_0", "@cite_34" ], "mid": [ "2925288829", "2520402508", "2773656608", "2739492061" ], "abstract": [ "Abstract In many medical image analysis applications, only a limited amount of training data is available due to the costs of image acquisition and the large manual annotation effort required from experts. Training recent state-of-the-art machine learning methods like convolutional neural networks (CNNs) from small datasets is a challenging task. In this work on anatomical landmark localization, we propose a CNN architecture that learns to split the localization task into two simpler sub-problems, reducing the overall need for large training datasets. Our fully convolutional SpatialConfiguration-Net (SCN) learns this simplification due to multiplying the heatmap predictions of its two components and by training the network in an end-to-end manner. Thus, the SCN dedicates one component to locally accurate but ambiguous candidate predictions, while the other component improves robustness to ambiguities by incorporating the spatial configuration of landmarks. In our extensive experimental evaluation, we show that the proposed SCN outperforms related methods in terms of landmark localization error on a variety of size-limited 2D and 3D landmark localization datasets, i.e., hand radiographs, lateral cephalograms, hand MRIs, and spine CTs.", "Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning based approaches, such as classifier and or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient specific models, we propose a novel stratification based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.", "Image based localization is one of the important problems in computer vision due to its wide applicability in robotics, augmented reality, and autonomous systems. There is a rich set of methods described in the literature how to geometrically register a 2D image w.r.t. a 3D model. Recently, methods based on deep (and convolutional) feedforward networks (CNNs) became popular for pose regression. However, these CNN-based methods are still less accurate than geometry based methods despite being fast and memory efficient. 
In this work we design a deep neural network architecture based on sparse feature descriptors to estimate the absolute pose of an image. Our choice of using sparse feature descriptors has two major advantages: first, our network is significantly smaller than the CNNs proposed in the literature for this task---thereby making our approach more efficient and scalable. Second---and more importantly---, usage of sparse features allows to augment the training data with synthetic viewpoints, which leads to substantial improvements in the generalization performance to unseen poses. Thus, our proposed method aims to combine the best of the two worlds---feature-based localization and CNN-based pose regression--to achieve state-of-the-art performance in the absolute pose estimation. A detailed analysis of the proposed architecture and a rigorous evaluation on the existing datasets are provided to support our method.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connect local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Although convolutional neural networks (CNNs) have achieved great success in analyzing 2D images, they cannot be directly applied to point clouds because of their unorganized nature. Without a pixel-based neighborhood defined, vanilla CNNs cannot extract local information and gradually expand receptive field sizes in a meaningful manner. Thus, segmentation tasks were first performed in a way that simulates 2D scenarios, by fusing together partial views represented as RGB-D images @cite_4 @cite_16 @cite_25 @cite_12. Other work transforms point clouds into cost-inefficient voxel representations on which CNNs can be applied directly @cite_5 @cite_8 @cite_11.
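As a concrete illustration of the voxel representations mentioned above, the following sketch maps an unordered point cloud to the dense binary occupancy grid that a 3D CNN consumes; the grid resolution is an arbitrary example and the sketch is not tied to any of the cited systems.

import numpy as np

def voxelize(points, resolution=32):
    """Map an (N, 3) point cloud to a dense binary occupancy grid of shape (R, R, R)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (maxs - mins).max() + 1e-8                 # isotropic scaling keeps the scene's aspect ratio
    idx = ((points - mins) / scale * (resolution - 1)).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Even a small 32^3 grid allocates 32768 cells for a scene that may contain only a few
# thousand points, which is the memory inefficiency the text refers to.
print(voxelize(np.random.rand(1000, 3)).sum())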
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_5", "@cite_16", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2768623474", "2258484932", "2951309005", "2432481613" ], "abstract": [ "The need to model visual information with compact representations has existed since the early days of computer vision. We implemented in the past a segmentation and model recovery method for range images which is unfortunately too slow for current size of 3D point clouds and type of applications. Recently, neural networks have become the popular choice for quick and effective processing of visual data. In this article we demonstrate that with a convolutional neural network we could achieve comparable results, that is to determine and model all objects in a given 3D point cloud scene. We started off with a simple architecture that could predict the parameters of a single object in a scene. Then we expanded it with an architecture similar to Faster R-CNN, that could predict the parameters for any number of objects in a scene. The results of the initial neural network were satisfactory. The second network, that performed also segmentation, still gave decent results comparable to the original method, but compared to the initial one, performed somewhat worse. Results, however, are encouraging but further experiments are needed to build CNNs that will be able to replace the state-of-the-art method.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly.", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. 
In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Although these methods did benefit from mature 2D image-processing network structures, inefficient 3D data representations prevented them from performing well on scene segmentation, where it is necessary to deal with large, dense 3D scenes as a whole. Therefore, recent research gradually turned to networks that operate directly on point clouds when dealing with semantic segmentation of complex indoor/outdoor scenes @cite_6 @cite_7 @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_6" ], "mid": [ "2777686015", "2796040722", "2771796597", "2963053547" ], "abstract": [ "Fully convolutional network (FCN) has been successfully applied in semantic segmentation of scenes represented with RGB images. Images augmented with depth channel provide more understanding of the geometric information of the scene in the image. The question is how to best exploit this additional information to improve the segmentation performance.,,In this paper, we present a neural network with multiple branches for segmenting RGB-D images. Our approach is to use the available depth to split the image into layers with common visual characteristic of objects scenes, or common “scene-resolution”. We introduce context-aware receptive field (CaRF) which provides a better control on the relevant contextual information of the learned features. Equipped with CaRF, each branch of the network semantically segments relevant similar scene-resolution, leading to a more focused domain which is easier to learn. Furthermore, our network is cascaded with features from one branch augmenting the features of adjacent branch. We show that such cascading of features enriches the contextual information of each branch and enhances the overall performance. The accuracy that our network achieves outperforms the stateof-the-art methods on two public datasets.", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL", "3D data such as point clouds and meshes are becoming more and more available. The goal of this paper is to obtain 3D object and scene classification and semantic segmentation. Because point clouds have irregular formats, most of the existing methods convert the 3D data into multiple 2D projection images or 3D voxel grids. These representations are suited as input of conventional CNNs but they either ignore the underlying 3D geometrical structure or are constrained by data sparsity and computational complexity. Therefore, recent methods encode the coordinates of each point cloud to certain high dimensional features to cover the 3D space. However, by their design, these models are not able to sufficiently capture the local patterns. 
In this paper, we propose a method that directly uses point clouds as input and exploits the implicit space partition of k-d tree structure to learn the local contextual information and aggregate features at different scales hierarchically. Extensive experiments on challenging benchmarks show that our proposed model properly captures the local patterns to provide discriminative point set features. For the task of 3D scene semantic segmentation, our method outperforms the state-of-the-art on the challenging Stanford Large-Scale 3D Indoor Spaces Dataset(S3DIS) by a large margin.", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet" ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
As introduced, PointNet used multi-layer perceptrons (which process each point independently) to fit the unordered nature of point clouds @cite_8 . Furthermore, similar approaches such as using @math convolutional kernels @cite_10 , radius querying @cite_1 or nearest-neighbor searching @cite_0 were also adopted. Because local dependencies were not effectively modeled, overfitting constantly occurred when these networks were used to perform large-scale scene segmentation. In addition, work like R-Conv @cite_9 tried to avoid time-consuming neighbor searching with a global recurrent transformation prior to convolutional analysis. However, scalability problems still occurred, as the global RNN cannot directly operate on a point cloud representing an entire dense scene, which often contains several million points.
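As a concrete illustration of the per-point processing style discussed above, the following minimal Python/numpy sketch applies a shared two-layer perceptron to every point independently and then gathers k-nearest-neighbor indices. The layer sizes, the toy point cloud and the helper names are illustrative assumptions rather than any published architecture, and the brute-force neighbor search is the step whose cost motivates avoiding neighbor queries on dense scenes.

import numpy as np

def shared_mlp(points, w1, w2):
    # The same two-layer perceptron is applied to every point independently,
    # mirroring the per-point, order-invariant processing style of PointNet.
    h = np.maximum(points @ w1, 0.0)           # (N, hidden), ReLU
    return np.maximum(h @ w2, 0.0)             # (N, feat)

def knn_indices(points, k):
    # Brute-force k-nearest-neighbor search; O(N^2) in time and memory,
    # which is why neighbor searching becomes expensive on dense scenes.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # skip the point itself

rng = np.random.default_rng(0)
pts = rng.random((1024, 3))                    # toy scene with 1024 points
w1 = rng.normal(scale=0.1, size=(3, 64))
w2 = rng.normal(scale=0.1, size=(64, 128))
feats = shared_mlp(pts, w1, w2)                # (1024, 128) per-point features
nbrs = knn_indices(pts, k=16)                  # (1024, 16) neighbor indices
print(feats.shape, nbrs.shape)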
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_10" ], "mid": [ "2900731076", "2963053547", "2796040722", "2963336905" ], "abstract": [ "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at http: www.merl.com research license#KCNet", "Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with a more efficient exploitation of local structures. 
The first one focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network can efficiently capture local information and robustly achieve better performances on major datasets. Our code is available at this http URL", "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Tangent Convolution @cite_7 proposed a way to efficiently model local dependencies and align convolutional filters at different scales. Their work is based on local covariance analysis and down-sampled neighborhood reconstruction with raw data points. Although tangent convolution itself functioned well at extracting local features, their network architecture was limited by static, uniform intermediate feature aggregation and a complete lack of global integration.
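A rough numpy sketch of the local covariance analysis step referred to above (not the authors' implementation; the toy patch and function names are assumptions): eigen-decomposing the covariance matrix of a point's neighborhood yields the surface normal as the smallest-eigenvalue direction, and the remaining two eigenvectors span a tangent plane onto which neighbors can be projected for image-like filtering.

import numpy as np

def tangent_frame(neighborhood):
    # Local covariance analysis: the eigenvector with the smallest eigenvalue
    # approximates the surface normal; the other two span the tangent plane.
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, 0], eigvecs[:, 1:]       # normal (3,), tangent basis (3, 2)

def project_to_tangent(neighborhood, center, tangent_basis):
    # Project neighbors onto the tangent plane to obtain 2D coordinates
    # on which grid-like convolution filters can be aligned.
    return (neighborhood - center) @ tangent_basis

rng = np.random.default_rng(1)
patch = rng.random((32, 3)) * [1.0, 1.0, 0.05]  # roughly planar neighborhood
normal, basis = tangent_frame(patch)
uv = project_to_tangent(patch, patch.mean(axis=0), basis)
print(normal, uv.shape)                         # normal close to +/- z, (32, 2)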
{ "cite_N": [ "@cite_7" ], "mid": [ "2949360407", "2793035069", "2169488311", "2412782625" ], "abstract": [ "In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.", "While the research on convolutional neural networks (CNNs) is progressing quickly, the real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our work is based on the linear relationship identified in different feature map subspaces via visualization of feature maps. Such linear relationship implies that the information in CNNs is redundant. Our method eliminates the redundancy in convolutional filters by applying subspace clustering to feature maps. In this way, most of the representative information in the network can be retained in each cluster. Therefore, our method provides an effective solution to filter pruning for which most existing methods directly remove filters based on simple heuristics. The proposed method is independent of the network structure, thus it can be adopted by any off-the-shelf deep learning libraries. Experiments on different networks and tasks show that our method outperforms existing techniques before fine-tuning, and achieves the state-of-the-art results after fine-tuning.", "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an increasingly popular method for learning visual features, it is most often trained at the patch level. Applying the resulting filters convolutionally results in highly redundant codes because overlapping patches are encoded in isolation. By training convolutionally over large image windows, our method reduces the redudancy between feature vectors at neighboring locations and improves the efficiency of the overall representation. In addition to a linear decoder that reconstructs the image from sparse features, our method trains an efficient feed-forward encoder that predicts quasi-sparse features from the input. While patch-based training rarely produces anything but oriented edge detectors, we show that convolutional training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. 
We show that using these filters in multistage convolutional network architecture improves performance on a number of visual recognition and detection tasks.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Several works turned to the global scale for permutation robustness. The simplest form, global max pooling, only handled light-weight tasks such as object classification or part segmentation @cite_21 . Moreover, RNNs constructed with advanced cells such as Long Short-Term Memory @cite_28 or Gated Recurrent Units @cite_19 offered promising results on scene segmentation @cite_26 , even in architectures with little consideration for local feature extraction @cite_15 @cite_14 . However, in those cases the global RNNs were built deep, bidirectional, or densely packed with hidden units, imposing a strict limitation on the size of the direct input. As a result, the original point cloud was often down-sampled to an extreme extent, or the network was only capable of operating on sections of the original point cloud @cite_15 .
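To make the contrast concrete, the numpy sketch below (toy sizes and random weights; not any published model) computes an order-invariant global max-pooled feature and then feeds a heavily down-sampled point stream through a hand-written GRU step, illustrating why a global recurrent integrator limits how many points can be consumed directly.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_max_pool(point_feats):
    # Order-invariant aggregation: one feature vector per point cloud.
    return point_feats.max(axis=0)

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # One Gated Recurrent Unit update (biases omitted for brevity).
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(2)
feats = rng.random((2048, 64))                 # per-point features (toy sizes)
pooled = global_max_pool(feats)                # (64,) order-invariant summary

hidden = 32
weights = [rng.normal(scale=0.1, size=shape)
           for shape in [(64, hidden), (hidden, hidden)] * 3]  # Wz,Uz,Wr,Ur,Wh,Uh
h = np.zeros(hidden)
for x in feats[::16]:                          # heavy down-sampling: the RNN
    h = gru_step(x, h, *weights)               # cannot consume every point
print(pooled.shape, h.shape)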
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_28", "@cite_21", "@cite_19", "@cite_15" ], "mid": [ "2895472109", "2963336905", "2962883174", "2754526845" ], "abstract": [ "Semantic segmentation of 3D unstructured point clouds remains an open research problem. Recent works predict semantic labels of 3D points by virtue of neural networks but take limited context knowledge into consideration. In this paper, a novel end-to-end approach for unstructured point cloud semantic segmentation, named 3P-RNN, is proposed to exploit the inherent contextual features. First the efficient pointwise pyramid pooling module is investigated to capture local structures at various densities by taking multi-scale neighborhood into account. Then the two-direction hierarchical recurrent neural networks (RNNs) are utilized to explore long-range spatial dependencies. Each recurrent layer takes as input the local features derived from unrolled cells and sweeps the 3D space along two directions successively to integrate structure knowledge. On challenging indoor and outdoor 3D datasets, the proposed framework demonstrates robust performance superior to state-of-the-arts.", "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.", "Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. 
By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59x speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to non- LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available.", "Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59x speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to non- LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is publicly available at this https URL" ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Various works in this area aimed to extend existing supervised-learning networks into auto-encoders. For example, FoldingNet @cite_23 managed to learn global features of a 3D object by deforming a 2D grid onto the object surface; PointWise @cite_20 considered the theoretical smoothness of the object surface; and MortonNet @cite_17 learned compact local features by generating fractal space-filling curves and predicting their endpoints. Although the features provided by these auto-encoders are reported to be beneficial, we do not adopt them into our network, so as to allow a fair evaluation of the aggregation method we propose.
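The folding idea attributed to FoldingNet above can be sketched schematically as follows; the weights are random placeholders and the layer sizes are arbitrary, so the output is only a structural illustration of concatenating a global codeword with a fixed 2D grid and mapping the result to 3D points, not a trained reconstruction.

import numpy as np

def folding_decoder(codeword, grid, w1, w2):
    # Concatenate the global codeword with every 2D grid point and map the
    # result to 3D coordinates, "folding" the flat grid toward a surface.
    tiled = np.repeat(codeword[None, :], len(grid), axis=0)
    x = np.concatenate([grid, tiled], axis=1)          # (M, 2 + C)
    h = np.maximum(x @ w1, 0.0)
    return h @ w2                                      # (M, 3) folded points

rng = np.random.default_rng(3)
u, v = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
grid = np.stack([u.ravel(), v.ravel()], axis=1)        # 225 canonical grid points
codeword = rng.random(128)                             # placeholder global feature
w1 = rng.normal(scale=0.1, size=(130, 64))
w2 = rng.normal(scale=0.1, size=(64, 3))
points = folding_decoder(codeword, grid, w1, w2)
print(points.shape)                                    # (225, 3)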
{ "cite_N": [ "@cite_20", "@cite_23", "@cite_17" ], "mid": [ "2796426482", "2778361827", "2890018557", "2785325870" ], "abstract": [ "Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http: www.merl.com research license#FoldingNet", "Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised semantic learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based approach is proposed in the decoder, which folds a 2D grid onto the underlying 3D object surface of a point cloud. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Finally, this folding-based decoder is interpretable since the reconstruction could be viewed as a fine granular warping from the 2D grid to the point cloud surface.", "Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. 
With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
1907.12022
2965190089
Traditional grid neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.
Different from the common approach of learning a rich, concise feature embedding, SO-Net @cite_10 learned a self-organizing map (SOM) in an unsupervised manner for feature extraction and aggregation. Despite its novelty, few performance improvements were observed even when compared to PointNet or OctNet @cite_24 . Possible reasons include its usage of the SOM and a lack of deep local and global analysis.
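For readers unfamiliar with self-organizing maps, the compact numpy training loop below shows the classic SOM update over raw point coordinates; the node count, learning-rate schedule and simplified 1D node topology are arbitrary choices for illustration, not SO-Net's settings. The resulting nodes act as data-adaptive aggregation centers.

import numpy as np

def train_som(points, n_nodes=64, iters=2000, lr0=0.5, sigma0=4.0, seed=4):
    # Classic SOM: for each sample, find the best-matching node and pull it
    # and its neighbors (on a simplified 1D node index) toward the sample.
    rng = np.random.default_rng(seed)
    nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()
    idx = np.arange(n_nodes)
    for t in range(iters):
        p = points[rng.integers(len(points))]
        bmu = np.argmin(((nodes - p) ** 2).sum(axis=1))
        lr = lr0 * (1.0 - t / iters)
        sigma = sigma0 * (1.0 - t / iters) + 1e-3
        influence = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
        nodes += lr * influence[:, None] * (p - nodes)
    return nodes

rng = np.random.default_rng(4)
cloud = rng.random((4096, 3))
som_nodes = train_som(cloud)      # 64 nodes spread over the toy cloud
print(som_nodes.shape)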
{ "cite_N": [ "@cite_24", "@cite_10" ], "mid": [ "2790466413", "2963719584", "2785325870", "2962742544" ], "abstract": [ "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website. this https URL", "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. 
For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similar striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification." ] }
1907.11941
2966630200
Abstract There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunnelled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
This paper focuses on decrypting network traffic encrypted with the ChaCha20-Poly1305 cipher. Prior studies have investigated potential vulnerabilities in cipher design and in cipher implementation. Researchers have found no vulnerabilities in the ChaCha20 design. For example, differential attacks using techniques such as identifying significant key bits only succeeded against reduced-round variants and required significant volumes of plaintext-ciphertext pairs @cite_8 @cite_34 . Combined linear and differential analysis improves performance, but is similarly restricted @cite_21 .
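For context on what these reduced-round differential attacks target, ChaCha is built from an add-rotate-xor quarter-round applied to a 16-word state for 20 rounds. The Python sketch below follows the quarter-round and the single-quarter-round test vector given in RFC 7539; it is only an illustration of the round primitive, not of any attack.

MASK = 0xffffffff

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    # Add-rotate-xor core of ChaCha (RFC 7539); the full cipher applies this
    # to the columns and diagonals of a 16-word state for 20 rounds.
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 7539, section 2.1.1.
print([hex(w) for w in quarter_round(0x11111111, 0x01020304,
                                     0x9b8d6f43, 0x01234567)])
# expected: 0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb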
{ "cite_N": [ "@cite_34", "@cite_21", "@cite_8" ], "mid": [ "2612613213", "2768792654", "1577801461", "2336016133" ], "abstract": [ "The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function.", "ChaCha is a family of stream ciphers that are very efficient on constrainted platforms. In this paper, we present electromagnetic side-channel analyses for two different software implementations of ChaCha20 on a 32-bit architecture: one compiled and another one directly written in assembly. On the device under test, practical experiments show that they have different levels of resistance to side-channel attacks. For the most leakage-resilient implementation, an analysis of the whole quarter round is required. To overcome this complication, we introduce an optimized attack based on a divide-and-conquer strategy named bricklayer attack.", "The stream cipher Salsa20 was introduced by Bernstein in 2005 as a candidate in the eSTREAM project, accompanied by the reduced versions Salsa20 8 and Salsa20 12. ChaCha is a variant of Salsa20 aiming at bringing better diffusion for similar performance. Variants of Salsa20 with up to 7 rounds (instead of 20) have been broken by differential cryptanalysis, while ChaCha has not been analyzed yet. We introduce a novel method for differential cryptanalysis of Salsa20 and ChaCha, inspired by correlation attacks and related to the notion of neutral bits. This is the first application of neutral bits in stream cipher cryptanalysis. It allows us to break the 256-bit version of Salsa20 8, to bring faster attacks on the 7-round variant, and to break 6- and 7-round ChaCha. In a second part, we analyze the compression function Rumba, built as the XOR of four Salsa20 instances and returning a 512-bit output. We find collision and preimage attacks for two simplified variants, then we discuss differential attacks on the original version, and exploit a high-probability differential to reduce complexity of collision search from 2256to 279for 3-round Rumba. To prove the correctness of our approach we provide examples of collisions and near-collisions on simplified versions.", "Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) is in the process of being a standardized and thus it attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was explained by Aumasson et?al. long back (FSE 2008) and further, only minor improvements could be achieved. In this paper, first we revisit the work of Aumasson et?al. to provide a clearer insight of the existing attack (2248 complexity for ChaCha7, i.e.,?7 rounds) and show certain improvements (complexity around 2243) by exploiting additional Probabilistic Neutral Bits. 
More importantly, we describe a novel idea that explores proper choice of IVs corresponding to the keys, for which the complexity can be improved further (2239). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20 8, i.e.,?8 rounds), we get improvement in complexity, reducing it to 2 245.5 from 2 247.2 reported by Aumasson et?al." ] }
1907.11941
2966630200
Abstract There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunnelled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
ChaCha20 implementations may be vulnerable to side-channel attacks. While the cipher design may prevent timing attacks @cite_19 , correlating power or electromagnetic measurements taken while specific cryptographic operations are performed may leak key stream information @cite_7 @cite_10 . Inducing instruction skips, for example with a laser or an electromagnetic pulse, could potentially expose the key stream, but timing such a fault would be challenging @cite_28 . Furthermore, these approaches may be impractical in real-world scenarios.
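The cited power/EM analyses broadly follow a correlation-style recipe; the numpy sketch below uses synthetic traces with a Hamming-weight leakage model (the noise level, trace count and target intermediate are illustrative assumptions, not the published attacks) to show the core step of ranking key-byte guesses by Pearson correlation between a hypothesis and the measurements.

import numpy as np

def hamming_weight(x):
    bits = np.unpackbits(np.asarray(x, dtype=np.uint8)[..., None], axis=-1)
    return bits.sum(-1)

rng = np.random.default_rng(5)
true_key = 0x3C
known = rng.integers(0, 256, size=2000, dtype=np.uint8)   # known public inputs
leak = hamming_weight(known ^ true_key)                   # modeled leakage
traces = leak + rng.normal(scale=2.0, size=leak.shape)    # noisy measurements

# Rank every key-byte guess by correlation between hypothesis and traces.
scores = [abs(np.corrcoef(hamming_weight(known ^ guess), traces)[0, 1])
          for guess in range(256)]
print(hex(int(np.argmax(scores))))                        # expected to print 0x3c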
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_10", "@cite_7" ], "mid": [ "2768792654", "2612613213", "2336016133", "2604967423" ], "abstract": [ "ChaCha is a family of stream ciphers that are very efficient on constrainted platforms. In this paper, we present electromagnetic side-channel analyses for two different software implementations of ChaCha20 on a 32-bit architecture: one compiled and another one directly written in assembly. On the device under test, practical experiments show that they have different levels of resistance to side-channel attacks. For the most leakage-resilient implementation, an analysis of the whole quarter round is required. To overcome this complication, we introduce an optimized attack based on a divide-and-conquer strategy named bricklayer attack.", "The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function.", "Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) is in the process of being a standardized and thus it attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was explained by Aumasson et?al. long back (FSE 2008) and further, only minor improvements could be achieved. In this paper, first we revisit the work of Aumasson et?al. to provide a clearer insight of the existing attack (2248 complexity for ChaCha7, i.e.,?7 rounds) and show certain improvements (complexity around 2243) by exploiting additional Probabilistic Neutral Bits. More importantly, we describe a novel idea that explores proper choice of IVs corresponding to the keys, for which the complexity can be improved further (2239). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20 8, i.e.,?8 rounds), we get improvement in complexity, reducing it to 2 245.5 from 2 247.2 reported by Aumasson et?al.", "Time variation during program execution can leak sensitive information. Time variations due to program control flow and hardware resource contention have been used to steal encryption keys in cipher implementations such as AES and RSA. A number of approaches to mitigate timing-based side-channel attacks have been proposed including cache partitioning, control-flow obfuscation and injecting timing noise into the outputs of code. While these techniques make timing-based side-channel attacks more difficult, they do not eliminate the risks. Prior techniques are either too specific or too expensive, and all leave remnants of the original timing side channel for later attackers to attempt to exploit. 
In this work, we show that the state-of-the-art techniques in timing side-channel protection, which limit timing leakage but do not eliminate it, still have significant vulnerabilities to timing-based side-channel attacks. To provide a means for total protection from timing-based side-channel attacks, we develop Ozone, the first zero timing leakage execution resource for a modern microarchitecture. Code in Ozone execute under a special hardware thread that gains exclusive access to a single core's resources for a fixed (and limited) number of cycles during which it cannot be interrupted. Memory access under Ozone thread execution is limited to a fixed size uncached scratchpad memory, and all Ozone threads begin execution with a known fixed microarchitectural state. We evaluate Ozone using a number of security sensitive kernels that have previously been targets of timing side-channel attacks, and show that Ozone eliminates timing leakage with minimal performance overhead." ] }
1907.11941
2966630200
Abstract There can be performance and vulnerability concerns with block ciphers, thus stream ciphers can be used as an alternative. Although many symmetric key stream ciphers are fairly resistant to side-channel attacks, cryptographic artefacts may exist in memory. This paper identifies a significant vulnerability within OpenSSH and OpenSSL which involves the discovery of cryptographic artefacts used within the ChaCha20 cipher. This can allow for the cracking of tunnelled data using a single targeted memory extraction. With this, law enforcement agencies and/or malicious agents could use the vulnerability to take copies of the encryption keys used for each tunnelled connection. The user of a virtual machine would not be alerted to the capturing of the encryption key, as the method runs from an extraction of the running memory. Methods of mitigation include making cryptographic artefacts difficult to discover and limiting memory access.
Cryptographic artefacts have been found in device memory. For instance, RSA keys may be discovered in virtual machine images @cite_32 @cite_1 . Studies have also discovered DES and AES cipher keys through cold-boot attacks @cite_17 , Skipjack and Twofish key blocks in virtual memory @cite_22 , and AES session keys in virtual memory @cite_3 . Although these approaches use entropy measures to determine possible keys, they do not decrypt ciphertext produced by ciphers such as AES in Counter mode and ChaCha20, which require nonces or initialization vectors. This study builds on the TLSkex @cite_3 and MemDecrypt @cite_38 studies, which used privileged monitors to extract targeted virtual machine process memory and identify TLS 1.2 AES keys, and SSH AES keys and initialization vectors, respectively. Instead, this study uses a different algorithm to find ChaCha20 cipher keys and nonces in device memory, enabling SSH and TLS sessions to be decrypted in a non-invasive manner. The approach may also enable decryption of data encrypted with Adiantum @cite_12 , the Google disk-encryption algorithm based on XChaCha20, an extension of ChaCha20 and Salsa20 with a longer nonce @cite_33 .
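A minimal sketch of an entropy-based candidate-key search of the kind referred to above (window size, stride and threshold are illustrative guesses, not the MemDecrypt parameters): slide a window across an extracted memory buffer, compute the Shannon entropy of its bytes, and flag high-entropy windows as possible key or nonce material for later trial decryption.

import math
import os
from collections import Counter

def shannon_entropy(window):
    # Entropy in bits per byte; a 32-byte window of random key material
    # scores close to 5 bits (log2 of 32 mostly distinct byte values).
    counts = Counter(window)
    total = len(window)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def candidate_key_offsets(memory, window=32, stride=4, threshold=4.5):
    # Slide over the memory extract and report offsets of high-entropy
    # windows as candidate symmetric keys or nonces.
    hits = []
    for off in range(0, len(memory) - window + 1, stride):
        if shannon_entropy(memory[off:off + window]) >= threshold:
            hits.append(off)
    return hits

dump = b"\x00" * 4096 + os.urandom(32) + b"A" * 4096   # toy "memory extract"
print(candidate_key_offsets(dump))                     # offsets near 4096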
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_33", "@cite_1", "@cite_32", "@cite_3", "@cite_12", "@cite_17" ], "mid": [ "2144007792", "2935536571", "2900861686", "2172060328" ], "abstract": [ "Cryptanalysis of ciphers usually involves massive computations. The security parameters of cryptographic algorithms are commonly chosen so that attacks are infeasible with available computing resources. Thus, in the absence of mathematical breakthroughs to a cryptanalytical problem, a promising way for tackling the computations involved is to build special-purpose hardware exhibiting a (much) better performance-cost ratio than off-the-shelf computers. This contribution presents a variety of cryptanalytical applications utilizing the cost-optimized parallel code breaker (COPACOBANA) machine, which is a high-performance low-cost cluster consisting of 120 field-programmable gate arrays (FPGAs). COPACOBANA appears to be the only such reconfigurable parallel FPGA machine optimized for code breaking tasks reported in the open literature. Depending on the actual algorithm, the parallel hardware architecture can outperform conventional computers by several orders of magnitude. In this work, we focus on novel implementations of cryptanalytical algorithms, utilizing the impressive computational power of COPACOBANA. We describe various exhaustive key search attacks on symmetric ciphers and demonstrate an attack on a security mechanism employed in the electronic passport (e-passport). Furthermore, we describe time-memory trade-off techniques that can, e.g., be used for attacking the popular A5 1 algorithm used in GSM voice encryption. In addition, we introduce efficient implementations of more complex cryptanalysis on asymmetric cryptosystems, e.g., elliptic curve cryptosystems (ECCs) and number cofactorization for RSA. Even though breaking RSA or elliptic curves with parameter lengths used in most practical applications is out of reach with COPACOBANA, our attacks on algorithms with artificially short bit lengths allow us to extrapolate more reliable security estimates for real-world bit lengths. This is particularly useful for deriving estimates about the longevity of asymmetric key lengths.", "Abstract Decrypting and inspecting encrypted malicious communications may assist crime detection and prevention. Access to client or server memory enables the discovery of artefacts required for decrypting secure communications. This paper develops the MemDecrypt framework to investigate the discovery of encrypted artefacts in memory and applies the methodology to decrypting the secure communications of virtual machines. For Secure Shell, used for secure remote server management, file transfer, and tunnelling inter alia, MemDecrypt experiments rapidly yield AES-encrypted details for a live secure file transfer including remote user credentials, transmitted file name and file contents. Thus, MemDecrypt discovers cryptographic artefacts and quickly decrypts live SSH malicious communications including the detection and interception of data exfiltration of confidential data.", "This paper demonstrates the improved power and electromagnetic (EM) side-channel attack (SCA) resistance of 128-bit Advanced Encryption Standard (AES) engines in 130-nm CMOS using random fast voltage dithering (RFVD) enabled by integrated voltage regulator (IVR) with the bond-wire inductors and an on-chip all-digital clock modulation (ADCM) circuit. 
RFVD scheme transforms the current signatures with random variations in AES input supply while adding random shifts in the clock edges in the presence of global and local supply noises. The measured power signatures at the supply node of the AES engines show upto 37 @math reduction in peak for higher order test vector leakage assessment (TVLA) metric and upto 692 @math increase in minimum traces required to disclose (MTD) the secret encryption key with correlation power analysis (CPA). Similarly, SCA on the measured EM signatures from the chip demonstrates a reduction of upto 11.3 @math in TVLA peak and upto 37 @math increase in correlation EM analysis (CEMA) MTD.", "Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the cipher text at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds is known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenS SL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process." ] }
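As a minimal sketch of the entropy-based key search idea mentioned above — scanning extracted memory for high-entropy regions that may hold symmetric keys such as 256-bit ChaCha20 keys — the following Python snippet slides a key-sized window over a dump and flags candidate offsets. The window size, entropy threshold, and toy memory dump are illustrative assumptions and do not reproduce the cited systems.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy of a memory block, in bits per byte."""
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def candidate_key_offsets(memory: bytes, key_len: int = 32, threshold: float = 4.5):
    """Slide a key-sized window over a memory dump and flag high-entropy
    regions as candidate symmetric keys (e.g. 256-bit ChaCha20 keys).
    The window size and entropy threshold are illustrative assumptions."""
    offsets = []
    for off in range(0, len(memory) - key_len + 1):
        if shannon_entropy(memory[off:off + key_len]) >= threshold:
            offsets.append(off)
    return offsets

if __name__ == "__main__":
    # Toy dump: mostly zero bytes with one random-looking 32-byte region.
    dump = bytes(64) + os.urandom(32) + bytes(64)
    print(candidate_key_offsets(dump))   # typically reports offsets near 64
```

A real monitor would additionally validate each candidate by attempting decryption of captured ciphertext, since high entropy alone cannot distinguish a key from compressed or already-encrypted data.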
1907.11914
2965936015
We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach addresses a key problem this detector suffers from: performance increases at high IoU thresholds but decreases at low IoU thresholds. Feature sharing is extremely helpful: our results show that, with this mechanism embedded into all stages, we can easily narrow the gap between the last stage and preceding stages at low IoU thresholds without resorting to the commonly used testing ensemble, relying instead on the network itself. We also observe clear improvements at all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, with only negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles, including testing ensemble, surpassing the previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research.
Multi-stage object detectors have been very popular in recent years. Following the main idea of 'divide and conquer', these detectors optimize a simpler problem first and then progressively refine the more difficult one. In the field of object detection, a cascade can be introduced into two components, namely the proposal generation process, usually called the 'RPN', and the classification and localization process, usually called the 'R-CNN' @cite_12 . For the former, @cite_17 @cite_22 @cite_25 propose a multi-stage procedure to generate accurate proposals, which are then refined with a single Fast R-CNN @cite_18 . For the latter, Cascade R-CNN @cite_8 is the most famous detector, with increasingly strict foreground/background thresholds selected to refine the RoIs progressively. HTC @cite_7 follows this line of thought and proposes to refine the features in an interleaved manner, resulting in state-of-the-art performance on the instance segmentation task. Other works such as @cite_2 @cite_19 also apply the R-CNN stage several times, but their performance is far behind that of Cascade R-CNN @cite_8 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_8", "@cite_19", "@cite_2", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2589615404", "2743620784", "2963418361", "2610420510" ], "abstract": [ "Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark.", "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. 
Codes will be made publicly available.", "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07 12 and ILSVRC.", "Many modern approaches for object detection are two-staged pipelines. The first stage identifies regions of interest which are then classified in the second stage. Faster R-CNN is such an approach for object detection which combines both stages into a single pipeline. In this paper we apply Faster R-CNN to the task of company logo detection. Motivated by its weak performance on small object instances, we examine in detail both the proposal and the classification stage with respect to a wide range of object sizes. We investigate the influence of feature map resolution on the performance of those stages. Based on theoretical considerations, we introduce an improved scheme for generating anchor proposals and propose a modification to Faster R-CNN which leverages higher-resolution feature maps for small objects. We evaluate our approach on the FlickrLogos dataset improving the RPN performance from 0.52 to 0.71 (MABO) and the detection performance from 0.52 to @math (mAP)." ] }
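As a toy illustration of the cascade idea described above — reassigning the same proposals with increasingly strict foreground IoU thresholds at each stage — the following self-contained Python snippet computes IoU against a ground-truth box and shows how the foreground/background split tightens from stage to stage. The boxes and thresholds are invented for illustration only.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def split_positives(proposals, gt, threshold):
    """Label proposals as foreground/background at one cascade stage."""
    fg = [p for p in proposals if iou(p, gt) >= threshold]
    bg = [p for p in proposals if iou(p, gt) < threshold]
    return fg, bg

gt = (10, 10, 50, 50)
proposals = [(12, 12, 52, 52), (15, 15, 55, 55), (16, 16, 58, 58), (20, 20, 60, 60)]
for u in (0.5, 0.6, 0.7):   # increasingly strict per-stage thresholds
    fg, bg = split_positives(proposals, gt, u)
    print(f"IoU threshold {u}: {len(fg)} foreground, {len(bg)} background")
```

In a real cascade the boxes are also regressed between stages, so later stages see progressively better-localized proposals; this toy example only shows the threshold side of that loop.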
1907.11914
2965936015
We extend the state-of-the-art Cascade R-CNN with a simple feature sharing mechanism. Our approach addresses a key problem this detector suffers from: performance increases at high IoU thresholds but decreases at low IoU thresholds. Feature sharing is extremely helpful: our results show that, with this mechanism embedded into all stages, we can easily narrow the gap between the last stage and preceding stages at low IoU thresholds without resorting to the commonly used testing ensemble, relying instead on the network itself. We also observe clear improvements at all IoU thresholds benefiting from feature sharing, and the resulting cascade structure can easily match or exceed its counterparts, with only negligible extra parameters introduced. To push the envelope, we demonstrate 43.2 AP on COCO object detection without any bells and whistles, including testing ensemble, surpassing the previous Cascade R-CNN by a large margin. Our framework is easy to implement and we hope it can serve as a general and strong baseline for future research.
Feature sharing has also been adopted in many approaches. In @cite_14 , sharing features in the RPN stage improves performance, and similar results can be found in @cite_6 @cite_7 across different tasks. Different from these methods, our approach focuses not only on overall improvements but also on narrowing the gap between stages, relying on feature sharing within the network itself rather than on the testing ensemble commonly used in cascaded approaches.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_6" ], "mid": [ "2500719210", "2798791651", "2495387757", "2360967250" ], "abstract": [ "The focus of our work is speeding up evaluation of deep neural networks in retrieval scenarios, where conventional architectures may spend too much time on negative examples. We propose to replace a monolithic network with our novel cascade of feature-sharing deep classifiers, called OnionNet, where subsequent stages may add both new layers as well as new feature channels to the previous ones. Importantly, intermediate feature maps are shared among classifiers, preventing them from the necessity of being recomputed. To accomplish this, the model is trained end-to-end in a principled way under a joint loss. We validate our approach in theory and on a synthetic benchmark. As a result demonstrated in three applications (patch matching, object detection, and image retrieval), our cascade can operate significantly faster than both monolithic networks and traditional cascades without sharing at the cost of marginal decrease in precision.", "Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.", "Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. 
For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.", "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models. To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs). Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7 in precision, 11.5 in recall, and 14.2 in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9 in F1." ] }
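The following PyTorch-style sketch illustrates the general idea of sharing box-head features across cascade stages: each later stage consumes the pooled RoI feature concatenated with the previous stage's hidden feature. The layer sizes, the concatenation-based fusion, and the re-use of a single pooled feature are simplifying assumptions for illustration, not the exact architecture of the cited work (which re-pools features from the refined boxes at each stage).

```python
import torch
import torch.nn as nn

class SharedCascadeHeads(nn.Module):
    """Three cascade box heads whose intermediate features are shared:
    each later stage sees the pooled RoI feature concatenated with the
    previous stage's hidden feature. All sizes are illustrative."""
    def __init__(self, roi_dim=256, hidden=1024, num_classes=81, stages=3):
        super().__init__()
        self.fcs = nn.ModuleList()
        self.cls = nn.ModuleList()
        self.reg = nn.ModuleList()
        for s in range(stages):
            in_dim = roi_dim if s == 0 else roi_dim + hidden
            self.fcs.append(nn.Linear(in_dim, hidden))
            self.cls.append(nn.Linear(hidden, num_classes))
            self.reg.append(nn.Linear(hidden, 4))

    def forward(self, roi_feat):
        outputs, shared = [], None
        for fc, cls, reg in zip(self.fcs, self.cls, self.reg):
            x = roi_feat if shared is None else torch.cat([roi_feat, shared], dim=1)
            shared = torch.relu(fc(x))           # hidden feature passed to the next stage
            outputs.append((cls(shared), reg(shared)))
        return outputs

if __name__ == "__main__":
    heads = SharedCascadeHeads()
    rois = torch.randn(8, 256)                   # 8 pooled RoI feature vectors
    for i, (scores, deltas) in enumerate(heads(rois)):
        print(f"stage {i}: scores {tuple(scores.shape)}, deltas {tuple(deltas.shape)}")
```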
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, the literature provides no analytical expressions for the probability of decoding a fraction of the source messages after the transmission of some coded packets. In this work, we study the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois Field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
The probability of decoding a fraction of the source packets in the RLNC scheme has been a major research topic, in the context of both performance @cite_22 and security @cite_2 . The authors of @cite_2 derived an upper bound on the probability of decoding a fraction of the source packets, while in @cite_22 , the authors derived exact expressions for the probability of partial decoding in RLNC. Unfortunately, none of these works can be extended to the SNC scheme. In @cite_5 , these expressions were used to study the security of RLNC in a multi-relay network. The authors of @cite_10 also found an exact expression for the probability of partial decoding in systematic RLNC. However, their analysis is only valid for the binary Galois Field and also cannot be extended to SNC.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_22", "@cite_2" ], "mid": [ "2766982895", "2951615571", "2158058261", "2017096524" ], "abstract": [ "Opportunistic relaying has the potential to achieve full diversity gain, while random linear network coding (RLNC) can reduce latency and energy consumption. In recent years, there has been a growing interest in the integration of both schemes into wireless networks in order to reap their benefits, while considering security concerns. This paper considers a multi-relay network, where relay nodes employ RLNC to encode confidential data and transmit coded packets to a destination in the presence of an eavesdropper. Four relay selection protocols are studied covering a range of network capabilities, such as the availability of the eavesdropper’s channel state information or the possibility to pair the selected relay with a node that intentionally generates interference. For each case, expressions for the probability that a coded packet will not be recovered by a receiver, which can be either the destination or the eavesdropper, are derived. Based on those expressions, a framework is developed that characterizes the probability of the eavesdropper intercepting a sufficient number of coded packets and partially or fully recovering the confidential data. Simulation results confirm the validity and accuracy of the theoretical framework and unveil the security-reliability trade-offs attained by each RLNC-enabled relay selection protocol.", "In an unreliable single-hop broadcast network setting, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size. Our model consists of a source transmitting packets of a single flow to a set of @math users over independent erasure channels. The source performs random linear network coding (RLNC) over @math (coding window size) packets and broadcasts them to the users. We note that the broadcast throughput of RLNC must vanish with increasing @math , for any fixed @math Hence, in contrast to other works in the literature, we investigate how the coding window size @math must scale for increasing @math . Our analysis reveals that the coding window size of @math represents a phase transition rate, below which the throughput converges to zero, and above which it converges to the broadcast capacity. Further, we characterize the asymptotic distribution of decoding delay and provide approximate expressions for the mean and variance of decoding delay for the scaling regime of @math These asymptotic expressions reveal the impact of channel correlations on the throughput and delay performance of RLNC. We also show how our analysis can be extended to other rateless block coding schemes such as the LT codes. Finally, we comment on the extension of our results to the cases of dependent channels across users and asymmetric channel model.", "We study the application of convolutional codes to two-way relay networks (TWRNs) with physical-layer network coding (PNC). When a relay node decodes coded signals transmitted by two source nodes simultaneously, we show that the Viterbi algorithm (VA) can be used by approximating the maximum likelihood (ML) decoding for XORed messages as two-user decoding. In this setup, for given memory length constraint, the two source nodes can choose the same convolutional code that has the largest free distance in order to maximize the performance. 
Motivated from the fact that the relay node only needs to decode XORed messages, a low complexity decoding scheme is proposed using a reduced-state trellis. We show that the reduced-state decoding can achieve the same diversity gain as the full-state decoding for fading channels.", "We investigate a channel-coded physical-layer network coding (CPNC) scheme for binary-input Gaussian two-way relay channels. In this scheme, the codewords of the two users are transmitted simultaneously. The relay computes and forwards a network-coded (NC) codeword without complete decoding of the two users' individual messages. We propose a new punctured codebook method to explicitly find the distance spectrum of the CPNC scheme. Based on that, we derive an asymptotically tight performance bound for the error probability. Our analysis shows that, compared to the single-user scenario, the CPNC scheme exhibits the same minimum Euclidean distance but an increased multiplicity of error events with minimum distance. At a high SNR, this leads to an SNR penalty of at most ln2 (in linear scale), for long channel codes of various rates. Our analytical results match well with the simulated performance." ] }
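A Monte Carlo sketch of the quantity discussed above — the fraction of source packets that can be partially decoded from sparse coded packets — is given below, restricted to GF(2) for simplicity. It does not reproduce the closed-form inclusion–exclusion expressions; the generation size, sparsity, and trial count are illustrative parameters only.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask of coding coefficients."""
    pivots = {}                      # highest set bit -> basis row
    rank = 0
    for row in rows:
        cur = row
        while cur:
            hb = cur.bit_length() - 1
            if hb in pivots:
                cur ^= pivots[hb]    # reduce by the existing pivot row
            else:
                pivots[hb] = cur
                rank += 1
                break
    return rank

def decodable_fraction(coding_rows, g):
    """Fraction of the g source packets recoverable from the received rows:
    packet i is decodable iff the unit vector e_i lies in their row space."""
    base = gf2_rank(coding_rows)
    hits = sum(1 for i in range(g)
               if gf2_rank(coding_rows + [1 << i]) == base)
    return hits / g

def simulate(g=16, received=12, sparsity=0.2, trials=2000):
    """Monte Carlo estimate of the expected partially decoded fraction for
    sparse coding vectors over GF(2); all parameters are illustrative."""
    total = 0.0
    for _ in range(trials):
        rows = [sum((1 << i) for i in range(g) if random.random() < sparsity)
                for _ in range(received)]
        total += decodable_fraction(rows, g)
    return total / trials

if __name__ == "__main__":
    print(f"expected decoded fraction: {simulate():.3f}")
```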
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, the literature provides no analytical expressions for the probability of decoding a fraction of the source messages after the transmission of some coded packets. In this work, we study the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois Field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
Rateless codes, such as LT and Raptor codes, can be considered a binary implementation of SNC @cite_18 @cite_0 , and partial decoding probability has been a major research topic in the rateless codes literature. To mention a few, the authors of @cite_14 designed an algorithm for an optimal recovery rate, i.e., the partial decoding probability, in LT codes. However, the results of this work are only asymptotically optimal and can only be employed for an infinite number of source packets. In @cite_17 , the authors provided a probability analysis for decoding a fraction of the source packets based on the structure of the received coded packets. However, their analysis cannot provide any partial decoding probability when the exact structure of the received coded packets is unknown. The authors of @cite_9 and @cite_28 proposed algorithms to increase the probability of partial decoding for rateless codes. However, these works only increase the probability of partial decoding in specific stages of the whole transmission and also cannot be extended to non-binary Galois Fields. Moreover, these algorithms operate on the coded packets currently received by the decoder and require a large computational overhead while coded packets are being transmitted.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_28", "@cite_9", "@cite_0", "@cite_17" ], "mid": [ "2134499105", "2110464368", "2963279554", "2949343062" ], "abstract": [ "In this correspondence, a generalization of rateless codes is proposed. The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under the iterative decoding are investigated. Moreover, upper and lower bounds on maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes for both equal and unequal error protection schemes are derived. Further, our work is verified with simulations. Simulation results indicate that the proposed codes provide desirable UEP. We also note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss that the proposed codes can provide unequal recovery time (URT). This means that given a target bit error rate, different parts of information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video audio streaming", "The performance of a novel fountain coding scheme based on maximum distance separable (MDS) codes constructed over Galois fields of order q>=2 is investigated. Upper and lower bounds on the decoding failure probability under maximum likelihood decoding are developed. Differently from Raptor codes (which are based on a serial concatenation of a high-rate outer block code, and an inner Luby-transform code), the proposed coding scheme can be seen as a parallel concatenation of an outer MDS code and an inner random linear fountain code, both operating on the same Galois field. A performance assessment is performed on the gain provided by MDS based fountain coding over linear random fountain coding in terms of decoding failure probability vs. overhead. It is shown how, for example, the concatenation of a (15,10) Reed-Solomon code and a linear random fountain code over F16 brings to a decoding failure probability 4 orders of magnitude lower than the linear random fountain code for the same overhead in a channel with a packet loss probability of epsilon=0.05. Moreover, it is illustrated how the performance of the concatenated fountain code approaches that of an idealized fountain code for higher-order Galois fields and moderate packet loss probabilities. The scheme introduced is of special interest for the distribution of data using small block sizes.", "We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process that adds the errors can be described by a sufficiently “simple” circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel. 
Unique decoding for additive errors: We give the first construction of a polynomial-time encodable decodable code for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1 − H(p). These are channels that add an arbitrary error vector e ∈ 0, 1 N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the randomness of the encoder or the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases. List decoding for polynomial-time channels: For every constant c > 0, we construct codes with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recover a short list containing the correct message with high probability for channels describable by circuits of size at most Nc. Our construction is not fully explicit but rather Monte Carlo (we give an algorithm that, with high probability, produces an encoder decoder pair that works for all time Nc channels). We are not aware of any channel models considered in the information theory literature other than purely adversarial channels, which require more than linear-size circuits to implement. We justify the relaxation to list decoding with an impossibility result showing that, in a large range of parameters (p > 1 4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.", "In this paper, collocated and distributed space-time block codes (DSTBCs) which admit multi-group maximum likelihood (ML) decoding are studied. First the collocated case is considered and the problem of constructing space-time block codes (STBCs) which optimally tradeoff rate and ML decoding complexity is posed. Recently, sufficient conditions for multi-group ML decodability have been provided in the literature and codes meeting these sufficient conditions were called Clifford Unitary Weight (CUW) STBCs. An algebraic framework based on extended Clifford algebras is proposed to study CUW STBCs and using this framework, the optimal tradeoff between rate and ML decoding complexity of CUW STBCs is obtained for few specific cases. Code constructions meeting this tradeoff optimally are also provided. The paper then focuses on multi-group ML decodable DSTBCs for application in synchronous wireless relay networks and three constructions of four-group ML decodable DSTBCs are provided. Finally, the OFDM based Alamouti space-time coded scheme proposed by Li-Xia for a 2 relay asynchronous relay network is extended to a more general transmission scheme that can achieve full asynchronous cooperative diversity for arbitrary number of relays. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and also no timing error knowledge at the destination node. Four-group decodable DSTBCs applicable in the proposed OFDM based transmission scheme are also given." ] }
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, the literature provides no analytical expressions for the probability of decoding a fraction of the source messages after the transmission of some coded packets. In this work, we study the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois Field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
Another interesting line of research is Instantly Decodable Network Coding (IDNC) @cite_12 . In this scheme, the packets are sent in such a way that instant decoding of at least one of the source packets is guaranteed. However, the analyses of the partial decoding probability of this family of codes, such as @cite_16 and @cite_24 , are only valid for the binary Galois Field and require the presence of feedback in the system. Moreover, the IDNC family of codes relies heavily on feedback, whereas in our work the system is considered feedback-free.
{ "cite_N": [ "@cite_24", "@cite_16", "@cite_12" ], "mid": [ "2030775967", "1485005937", "1972328812", "1975409994" ], "abstract": [ "This paper investigates the use of instantly decodable network coding (IDNC) for minimizing the mean decoding delay in multicast cooperative data exchange systems, where the clients cooperate with each other to obtain their missing packets. Here, IDNC is used to reduce the decoding delay of each transmission across all clients. We first introduce a new framework to find the optimum client and coded packet that result in the minimum mean decoding delay. However, since finding the optimum solution of the proposed framework is NP-hard, we further propose a heuristic algorithm that aims to minimize the lower bound on the expected decoding delay in each transmission. The effectiveness of the proposed algorithm is assessed through simulations.", "In this paper, we consider the problem of min- imizing the multicast decoding delay of generalized instantly decodable network coding (G-IDNC) over persistent forward and feedback erasure channels with feedback intermittence. In such an environment, the sender does not always receive acknowledgement from the receivers after each transmission. Moreover, both the forward and feedback channels are subject to persistent erasures, which can be modelled by a two state (good and bad states) Markov chain known as Gilbert-Elliott channel (GEC). Due to such feedback imperfections, the sender is unable to determine subsequent instantly decodable packets combination for all receivers. Given this harsh channel and feedback model, we first derive expressions for the probability distributions of decoding delay increments and then employ these expressions in formulating the minimum decoding problem in such environment as a maximum weight clique problem in the G-IDNC graph. We also show that the problem formulations in simpler channel and feedback models are special cases of our generalized formulation. Since this problem is NP-hard, we design a greedy algorithm to solve it and compare it to blind approaches proposed in literature. Through extensive simulations, our adaptive algorithm is shown to outperform the blind approaches in all situations and to achieve significant improvement in the decoding delay, especially when the channel is highly persistent. Index Terms—Multicast Channels, Persistent Erasure Chan- nels, G-IDNC, Decoding Delay, Lossy Intermittent Feedback, Maximum Weight Clique Problem.", "This work aims at introducing two novel packet retransmission techniques for reliable multicast in the framework of Instantly Decodable Network Coding (IDNC). These methods are suitable for order- and delay-sensitive applications, where some information is of high importance for an earlier gain at the receiver's side. We introduce hence an Unequal Error Protection (UEP) scheme, showing by simulations that the Quality of Experience (QoE) for the end-users is improved even without complex encoding and decoding.", "In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC) in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities, and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenarios. 
We then formulate the minimum completion delay problem for IDNC as a stochastic shortest path (SSP) problem. Although finding the optimal policy using SSP is intractable, we use this formulation to draw the theoretical guidelines for the policies that can minimize the completion delay in IDNC. Based on these guidelines, we design a maximum weight clique selection algorithm, which can efficiently reduce the IDNC completion delay in polynomial time. We also design a quadratic-time heuristic clique selection algorithm, which can operate in real-time applications. Simulation results show that our proposed algorithms significantly reduce the IDNC completion delay compared to the random and maximum-rate algorithms, and almost achieve the global optimal completion delay performance over all network codes in broadcast scenarios." ] }
1907.12051
2965243075
Sparse Network Coding (SNC) has been a promising network coding scheme, improving on Random Linear Network Coding (RLNC) in terms of computational complexity. However, the literature provides no analytical expressions for the probability of decoding a fraction of the source messages after the transmission of some coded packets. In this work, we study the probability of decoding a fraction of the source messages, i.e., partial decoding, at the decoder of a system that uses SNC. We exploit the Principle of Inclusion and Exclusion to derive expressions for the partial decoding probability. The presented model predicts the probability of partial decoding with an average deviation of 6%. Our results show that SNC has great potential for recovering a fraction of the source message, especially at higher sparsity and lower Galois Field sizes. Moreover, to achieve a better probability of partial decoding throughout the transmission, we define a sparsity tuning scheme that significantly increases the probability of partial decoding. Our results show that this tuning scheme achieves a 16% improvement in the probability of decoding a fraction of the source packets with respect to traditional SNC.
The authors of @cite_13 and @cite_19 introduced and analyzed an improvement to the SNC scheme called perpetual coding. However, this coding scheme is not completely random and uses a structured type of coding to send packets, so its analysis cannot be extended to the random SNC scheme, which is the scope of this paper.
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2017096524", "1575671689", "2564688069", "1993626808" ], "abstract": [ "We investigate a channel-coded physical-layer network coding (CPNC) scheme for binary-input Gaussian two-way relay channels. In this scheme, the codewords of the two users are transmitted simultaneously. The relay computes and forwards a network-coded (NC) codeword without complete decoding of the two users' individual messages. We propose a new punctured codebook method to explicitly find the distance spectrum of the CPNC scheme. Based on that, we derive an asymptotically tight performance bound for the error probability. Our analysis shows that, compared to the single-user scenario, the CPNC scheme exhibits the same minimum Euclidean distance but an increased multiplicity of error events with minimum distance. At a high SNR, this leads to an SNR penalty of at most ln2 (in linear scale), for long channel codes of various rates. Our analytical results match well with the simulated performance.", "We propose and design a practical modulation-coded (MC) physical-layer network coding (PNC) scheme to approach the capacity limits of Gaussian and fading two-way relay channels (TWRCs). In the proposed scheme, an irregular repeat–accumulate (IRA) MC over @math with the same random coset is employed at two users, which directly maps the message sequences into coded PAM or QAM symbol sequences. The relay chooses appropriate network coding coefficients and computes the associated finite-field linear combinations of the two users' message sequences using an iterative belief propagation algorithm. For a symmetric Gaussian TWRC, we show that, by introducing the same random coset vector at the two users and a time-varying accumulator in the IRA code, the MC-PNC scheme exhibits symmetry and permutation-invariant properties for the soft information distribution of the network-coded message sequence (NCMS). We explore these properties in analyzing the convergence behavior of the scheme and optimizing the MC to approach the capacity limit of a TWRC. For a block fading TWRC, we present a new MC linear PNC scheme and an algorithm used at the relay for computing the NCMS. We demonstrate that our developed schemes achieve near-capacity performance in both Gaussian and Rayleigh fading TWRCs. For example, our designed codes over GF(7) and GF(3) with a code rate of 3 4 are within 1 and 1.2 dB of the TWRC capacity, respectively. Our method can be regarded as a practical embodiment of the notion of compute-and-forward with a good nested lattice code, and it can be applied to a wide range of network configurations.", "Perpetual codes provide a sparse, but structured coding for fast encoding and decoding. In this work, we illustrate that perpetual codes introduce linear dependent packet transmissions in the presence of an erasure channel. We demonstrate that the number of linear dependent packet transmissions is highly dependent on a parameter called the width ( ( )), which represents the number of consecutive non-zero coding coefficient present in each coded packet after a pivot element. We provide a mathematical analysis based on the width of the coding vector for the number of transmitted packets and validate it with simulation results. The simulations show that for ( = 5 ), generation size (g = 256 ), and low erasure probability on the link, a destination can receive up to (70 ) overhead in average. 
Moreover, increasing the width, the overhead contracts, and for ( 60 ) it becomes negligible.", "We study a new linear physical-layer network coding (LPNC) scheme for fading two-way relay channels. In the uplink phase, two users transmit simultaneously. The relay selects some integer coefficients and computes a linear combination (in a size-q finite set) of the two users' messages, which is broadcast in the downlink phase. We develop a design criterion for choosing the integer coefficients that minimizes the error probability. Based on that, we derive an asymptotically tight bound, in a closed-form, for the error probability of the LPNC scheme over Rayleigh fading channels. Our analysis shows that the error-rate performance of the LPNC scheme becomes asymptotically optimal at a high SNR, and our designed LPNC scheme significantly outperforms existing schemes in the literature." ] }
1907.11907
2966776769
Lemmatization, finding the basic morphological form of a word in a corpus, is an important step in many natural language processing tasks when working with morphologically rich languages. We describe and evaluate Nefnir, a new open source lemmatizer for Icelandic. Nefnir uses suffix substitution rules, derived from a large morphological database, to lemmatize tagged text. Evaluation shows that for correctly tagged text, Nefnir obtains an accuracy of 99.55%, and for text tagged with a PoS tagger, the accuracy obtained is 96.88%.
Machine learning methods emerged to make the rule-learning process more effective, and various algorithms have been developed. These methods rely on training data, which can be a corpus of words and their lemmas or a large morphological lexicon @cite_6 . By analyzing the training data, transformation rules are formed, which can subsequently be used to find lemmas in new texts, given the word forms.
{ "cite_N": [ "@cite_6" ], "mid": [ "2120861206", "2950797609", "185399533", "2107772017" ], "abstract": [ "In spite of their superior performance, neural probabilistic language models (NPLMs) remain far less widely used than n-gram models due to their notoriously long training times, which are measured in weeks even for moderately-sized datasets. Training NPLMs is computationally expensive because they are explicitly normalized, which leads to having to consider all words in the vocabulary when computing the log-likelihood gradients. We propose a fast and simple algorithm for training NPLMs based on noise-contrastive estimation, a newly introduced procedure for estimating unnormalized continuous distributions. We investigate the behaviour of the algorithm on the Penn Treebank corpus and show that it reduces the training times by more than an order of magnitude without affecting the quality of the resulting models. The algorithm is also more efficient and much more stable than importance sampling because it requires far fewer noise samples to perform well. We demonstrate the scalability of the proposed approach by training several neural language models on a 47M-word corpus with a 80K-word vocabulary, obtaining state-of-the-art results on the Microsoft Research Sentence Completion Challenge dataset.", "In spite of their superior performance, neural probabilistic language models (NPLMs) remain far less widely used than n-gram models due to their notoriously long training times, which are measured in weeks even for moderately-sized datasets. Training NPLMs is computationally expensive because they are explicitly normalized, which leads to having to consider all words in the vocabulary when computing the log-likelihood gradients. We propose a fast and simple algorithm for training NPLMs based on noise-contrastive estimation, a newly introduced procedure for estimating unnormalized continuous distributions. We investigate the behaviour of the algorithm on the Penn Treebank corpus and show that it reduces the training times by more than an order of magnitude without affecting the quality of the resulting models. The algorithm is also more efficient and much more stable than importance sampling because it requires far fewer noise samples to perform well. We demonstrate the scalability of the proposed approach by training several neural language models on a 47M-word corpus with a 80K-word vocabulary, obtaining state-of-the-art results on the Microsoft Research Sentence Completion Challenge dataset.", "We apply machine learning techniques to the problem of separating multiple speech sources from a single microphone recording. The method of choice is a sparse non-negative matrix factorization algorithm, which in an unsupervised manner can learn sparse representations of the data. This is applied to the learning of personalized dictionaries from a speech corpus, which in turn are used to separate the audio stream into its components. We show that computational savings can be achieved by segmenting the training data on a phoneme level. To split the data, a conventional speech recognizer is used. The performance of the unsupervised and supervised adaptation schemes result in significant improvements in terms of the target-to-masker ratio.", "This paper investigates a novel approach to unsupervised morphology induction relying on community detection in networks. In a first step, morphological transformation rules are automatically acquired based on graphical similarities between words. 
These rules encode substring substitutions for transforming one word form into another. The transformation rules are then applied to the construction of a lexical network. The nodes of the network stand for words while edges represent transformation rules. In the next step, a clustering algorithm is applied to the network to detect families of morphologically related words. Finally, morpheme analyses are produced based on the transformation rules and the word families obtained after clustering. While still in its preliminary development stages, this method obtained encouraging results at Morpho Challenge 2009, which demonstrate the viability of the approach." ] }
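A minimal sketch of suffix-substitution lemmatization of the kind described above is given below: rules map a (PoS tag, word suffix) pair to a lemma suffix, and the longest matching suffix wins. The rules use toy English-like examples purely for readability; Nefnir's actual rules are induced from a large Icelandic morphological database and are not reproduced here.

```python
# Toy suffix-substitution rules: (PoS tag, word suffix) -> lemma suffix.
# These rules and words are invented for illustration only.
RULES = {
    ("verb", "running"): "run",   # whole-word exception rule
    ("verb", "ied"): "y",
    ("verb", "ed"): "",
    ("noun", "ies"): "y",
    ("noun", "s"): "",
}

def lemmatize(word: str, tag: str) -> str:
    """Apply the longest matching suffix rule for the given tag;
    fall back to the word form itself if no rule matches."""
    for start in range(len(word)):            # longest suffix first
        suffix = word[start:]
        if (tag, suffix) in RULES:
            return word[:start] + RULES[(tag, suffix)]
    return word

if __name__ == "__main__":
    for word, tag in [("running", "verb"), ("carried", "verb"), ("parties", "noun")]:
        print(word, "->", lemmatize(word, tag))   # run, carry, party
```

Keying the rules on the PoS tag is what makes the approach sensitive to tagging errors, which is why accuracy drops when the input is tagged automatically rather than manually.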
1907.12108
2965617855
In this paper, we present CAiRE, an end-to-end empathetic conversation agent. Our system adapts the TransferTransfo (, 2019) learning approach, which fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction and dialogue emotion detection. We evaluate our model on the recently proposed empathetic-dialogues dataset (, 2019); the experimental results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection and empathetic response generation.
Previous work @cite_12 @cite_5 @cite_0 showed that leveraging a large amount of data to learn context-sensitive features from a language model can produce state-of-the-art models for a wide range of tasks. Taking this further, later work deployed higher-capacity models and improved the state-of-the-art results. In this paper, we build an empathetic chatbot based on a pre-trained language model and achieve state-of-the-art results on dialogue emotion detection and empathetic response generation.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_12" ], "mid": [ "2950813464", "2773498419", "2896457183", "2768195931" ], "abstract": [ "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.", "Probabilistic graphical models, such as partially observable Markov decision processes (POMDPs), have been used in stochastic spoken dialog systems to handle the inherent uncertainty in speech recognition and language understanding. Such dialog systems suffer from the fact that only a relatively small number of domain variables are allowed in the model, so as to ensure the generation of good-quality dialog policies. At the same time, the non-language perception modalities on robots, such as vision-based facial expression recognition and Lidar-based distance detection, can hardly be integrated into this process. In this paper, we use a probabilistic commonsense reasoner to “guide” our POMDP-based dialog manager, and present a principled, multimodal dialog management (MDM) framework that allows the robot's dialog belief state to be seamlessly updated by both observations of human spoken language, and exogenous events such as the change of human facial expressions. The MDM approach has been implemented and evaluated both in simulation and on a real mobile robot using guidance tasks.", "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "Generating emotional language is a key step towards building empathetic natural language processing agents. 
However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. More specifically, we collect a large corpus of Twitter conversations that include emojis in the response, and assume the emojis convey the underlying emotions of the sentence. We then introduce a reinforced conditional variational encoder approach to train a deep generative model on these conversations, which allows us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate high-quality abstractive conversation responses in accordance with designated emotions." ] }
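A minimal PyTorch sketch of the multi-task fine-tuning objective described above — combining response language modeling, response prediction, and dialogue emotion detection as a weighted sum of losses over a shared encoder — is given below. The toy GRU encoder, head sizes, and loss weights are placeholders and do not reflect the actual large-scale pre-trained transformer used by CAiRE.

```python
import torch
import torch.nn as nn

class ToyMultiTaskModel(nn.Module):
    """Stand-in for a pre-trained model: a shared encoder feeding three heads
    for response language modeling, next-response prediction, and dialogue
    emotion detection. All sizes are illustrative."""
    def __init__(self, vocab=1000, hidden=128, emotions=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab)      # token-level LM logits
        self.next_head = nn.Linear(hidden, 2)        # correct vs. distractor response
        self.emo_head = nn.Linear(hidden, emotions)  # dialogue emotion label

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        pooled = states[:, -1]                       # last hidden state as summary
        return self.lm_head(states), self.next_head(pooled), self.emo_head(pooled)

def multitask_loss(model, tokens, lm_targets, next_labels, emo_labels,
                   w_lm=1.0, w_next=1.0, w_emo=1.0):
    lm_logits, next_logits, emo_logits = model(tokens)
    ce = nn.CrossEntropyLoss()
    loss_lm = ce(lm_logits.reshape(-1, lm_logits.size(-1)), lm_targets.reshape(-1))
    loss_next = ce(next_logits, next_labels)
    loss_emo = ce(emo_logits, emo_labels)
    return w_lm * loss_lm + w_next * loss_next + w_emo * loss_emo

if __name__ == "__main__":
    model = ToyMultiTaskModel()
    tokens = torch.randint(0, 1000, (4, 12))
    loss = multitask_loss(model, tokens, torch.randint(0, 1000, (4, 12)),
                          torch.randint(0, 2, (4,)), torch.randint(0, 32, (4,)))
    loss.backward()
    print(float(loss))
```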
1907.11866
2966500445
Wirelessly powered backscatter communication (WPBC) has been identified as a promising technology for low-power communication systems, which can reap the benefits of energy beamforming to improve energy transfer efficiency. Existing studies on energy beamforming fail to simultaneously take energy supply and information transfer in WPBC into account. This paper takes the first step to fill this gap, by considering the trade-off between the energy harvesting rate and achievable rate using estimated backscatter channel state information (BS-CSI). To ensure reliable communication and user fairness, we formulate the energy beamforming design as a max-min optimization problem by maximizing the minimum achievable rate for all tags subject to the energy constraint. We derive the closed-form expression of the energy harvesting rate, as well as the lower bound of the ergodic achievable rate. Our numerical results indicate that our scheme can significantly outperform state-of-the-art energy beamforming schemes. Additionally, the proposed scheme achieves performance comparable to that obtained via beamforming with perfect CSI.
Significant progress has recently been made on energy beamforming in wireless powered communication @cite_5 @cite_1 @cite_2 @cite_9 . To harness its benefits, many efforts have been made to enable energy beamforming in WPBC. The authors of @cite_0 focus on optimizing the transmit beamforming to maximize the sum rate of a cooperative WPBC system, while @cite_8 investigate energy beamforming in a relay WPBC system. Both of these studies assume that F-CSI is known in order to achieve energy beamforming. However, the closed-loop propagation and power-limited tags make it hard to obtain F-CSI. Instead, @cite_10 performs energy beamforming via the estimated BS-CSI to improve WET efficiency. In departure from these studies, our work considers the beamforming design for both energy supply and data transfer to ensure communication performance.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2964056649", "2002932684", "2584574674", "2760327456" ], "abstract": [ "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10 , compared to that obtained using the perfect F-CSI.", "In this letter, we study the robust beamforming problem for the multi-antenna wireless broadcasting system with simultaneous information and power transmission, under the assumption of imperfect channel state information (CSI) at the transmitter. Following the worst-case deterministic model, our objective is to maximize the worst-case harvested energy for the energy receiver while guaranteeing that the rate for the information receiver is above a threshold for all possible channel realizations. Such problem is nonconvex with infinite number of constraints. Using certain transformation techniques, we convert this problem into a relaxed semidefinite programming problem (SDP) which can be solved efficiently. We further show that the solution of the relaxed SDP problem is always rank-one. This indicates that the relaxation is tight and we can get the optimal solution for the original problem. Simulation results are presented to validate the effectiveness of the proposed algorithm.", "This paper analyzes the performance of information and energy beamforming in multiple-input multiple- output (MIMO) wireless communications systems, where a self-powered multi-antenna hybrid access point (AP) coordinates wireless information and power transfer (WIPT) with an energy-constrained multi-antenna user terminal (UT). The wirelessly powered UT scavenge energy from the hybrid AP radio-frequency (RF) signal in the downlink (DL) using the harvest-then-transmit protocol, then uses the harvested energy to send its information to the hybrid AP in the uplink (UL). 
To maximize the overall signal-to-noise ratio (SNR) as well as the harvested energy so as to mitigate the severe effects of fading and enable long-distance wireless power transfer, information and energy beamforming is investigated by steering the transmitted information and energy signals along the strongest eigenmode. To this end, exact and lower-bound expressions for the outage probability and ergodic capacity are presented in closed-form, through which the throughput of the delay- constrained and delay-tolerant transmission modes are analyzed, respectively. Numerical results sustained by Monte Carlo simulations show the exactness and tightness of the proposed analytical expressions. The impact of various parameters such as energy harvesting time, hybrid AP transmit power and the number of antennas on the system throughput is also considered.", "This paper investigates the optimal resource allocation in wireless powered communication network with user cooperation, where two single-antenna users first harvest energy from the signals transmitted by a multi-antenna hybrid access point (H-AP) and then cooperatively send information to the H-AP using their harvested energy. To explore the system information transmission performance limit, an optimization problem is formulated to maximize the weighted sum-rate (WSR) by jointly optimizing energy beamforming vector, time assignment, and power allocation. Besides, another optimization problem is also formulated to minimize the total transmission time for given amount of data required to be transmitted at the two sources. Because both problems are non-convex, we first transform them to be convex by using proper variable substitutions and then apply semi-definite relaxation to solve them. We theoretically prove that our proposed methods guarantee the global optimum of both problems. Simulation results show that system WSR and transmission time can be significantly enhanced by using energy beamforming and user cooperation. It is observed that when the total amount of information of two users is fixed, with the increase of the information amount of the user relatively farther away from the H-AP, the transmission time of the user cooperation scheme decreases while that of the direct transmission increases. Besides, the effects of user position on the system performances are also discussed, which provides some useful insights." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Clones can be broadly categorized into four types based on the nature of their similarity @cite_40 @cite_42 @cite_5 @cite_48 . Type-1 clones are clone pairs that are identical to each other with no modification to the source code. Type-2 clones are clone pairs that differ only in literals and variable types. Type-3 clones are renamed clone pairs with some structural modifications such as additions, deletions, and rearrangement of statements. Type-4 clones are clone pairs that have different syntax but perform the same functionality (i.e., they are semantically equivalent). These are typically the most challenging to find and identify, yet they are the most relevant in the context of ERP systems @cite_12 @cite_23 .
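For concreteness, the hypothetical Python fragments below illustrate the four clone types; they are illustrative examples only and do not come from any of the cited studies.

```python
# Original fragment
def total(prices):
    s = 0
    for p in prices:
        s += p
    return s

# Type-1: an exact copy of total() (identical apart from whitespace/comments)

# Type-2: identifiers and literals renamed, structure unchanged
def sum_costs(costs):
    acc = 0
    for c in costs:
        acc += c
    return acc

# Type-3: renamed copy with statements added/removed/rearranged
def total_with_tax(prices, rate=0.1):
    s = 0
    for p in prices:
        s += p
    return s * (1 + rate)

# Type-4: different syntax, same functionality (semantically equivalent)
def total_builtin(prices):
    return sum(prices)
```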
{ "cite_N": [ "@cite_48", "@cite_42", "@cite_40", "@cite_23", "@cite_5", "@cite_12" ], "mid": [ "2171368158", "1508590353", "2114056383", "1698439592" ], "abstract": [ "Code clones are defined to be the exactly or nearly similar code fragments in a software system's code-base. The existing clone related studies reveal that code clones are likely to introduce bugs and inconsistencies in the code-base. However, although there are different types of clones, it is still unknown which types of clones have a higher likeliness of introducing bugs to the software systems and so, should be considered more important for managing with techniques such as refactoring or tracking. With this focus, we performed a study that compared the bug-proneness of the major clone-types: Type 1, Type 2, and Type 3. According to our experimental results on thousands of revisions of seven diverse subject systems, Type 3 clones exhibit the highest bug-proneness among the three clone-types. The bug-proneness of Type 1 clones is the lowest. Also, Type 3 clones have the highest likeliness of being co-changed consistently while experiencing bug-fixing changes. Moreover, the Type 3 clones that experience bug-fixes have a higher possibility of evolving following a Similarity Preserving Change Pattern (SPCP) compared to the bug-fix clones of the other two clone-types. From the experimental results it is clear that Type 3 clones should be given a higher priority than the other two clone-types when making clone management decisions. We believe that our study provides useful implications for ranking clones for refactoring and tracking.", "SUMMARY Two similar code segments, or clones, form a clone pair within a software system. The changes to the clones over time create a clone evolution history. In this work, we study late propagation, a specific pattern of clone evolution. In late propagation, one clone in a clone pair is modified, causing the clone pair to diverge. The code segments are then reconciled in a later commit. Existing work has established late propagation as a clone evolution pattern and suggested that the pattern is related to a high number of faults. In this study, we examine the characteristics of late propagation in three long-lived software systems using the Simian ( Simon Harris, Victoria, Australia, http: www.harukizaemon.com simian), CCFinder, and NiCad (Software Technology Laboratory, Queen's University, Kingston, ON, Canada) clone detection tools. We define eight types of late propagation and compare them to other forms of clone evolution. Our results not only verify that late propagation is more harmful to software systems but also establish that some specific types of late propagations are more harmful than others. Specifically, two types are most risky: (1) when a clone experiences diverging changes and then a reconciling change without any modification to the other clone in a clone pair; and (2) when two clones undergo a diverging modification followed by a reconciling change that modifies both the clones in a clone pair. We also observe that the reconciliation in the former case is more prone to faults than in the latter case. We determine that the size of the clones experiencing late propagation has an effect on the fault proneness of specific types of late propagation genealogies. Lastly, we cannot report a correlation between the delay of the propagation of changes and its faults, as the fault proneness of each delay period is system dependent. 
Copyright © 2013 John Wiley & Sons, Ltd.", "Code Clones - duplicated source fragments - are said to increase maintenance effort and to facilitate problems caused by inconsistent changes to identical parts. While this is certainly true for some clones and certainly not true for others, it is unclear how many clones are real threats to the system's quality and need to be taken care of. Our analysis of clone evolution in mature software projects shows that most clones are rarely changed and the number of unintentional inconsistent changes to clones is small. We thus have to carefully select the clones to be managed to avoid unnecessary effort managing clones with no risk potential.", "This paper presents a technique to automatically identify duplicate and near duplicate functions in a large software system. The identification technique is based on metrics extracted from the source code using the tool Datrix sup TM . This clone identification technique uses 21 function metrics grouped into four points of comparison. Each point of comparison is used to compare functions and determine their cloning level. An ordinal scale of eight cloning levels is defined. The levels range from an exact copy to distinct functions. The metrics, the thresholds and the process used are fully described. The results of applying the clone detection technique to two telecommunication monitoring systems totaling one million lines of source code are provided as examples. The information provided by this study is useful in monitoring the maintainability of large software systems." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Several approaches have been proposed in the literature to identify similar source code, ranging from textual to semantic similarity identification. Generally, they are classified based on the source representations they work with. In text-based detection, the raw source code, with minimal transformation, is used to perform a pairwise comparison to identify similar source code @cite_27 . Token-based detection, on the other hand, extracts a sequence of tokens using compiler-style source code transformation @cite_25 . The sequence is then used to match tokens and identify duplicates in the repository, and the corresponding original code is returned as clones. In tree-based detection, the code is transformed into Abstract Syntax Trees (ASTs) that are then used in tree sub-matching algorithms to identify similar subtrees @cite_16 . Similarly, clone detection is expressed as a graph matching problem over Program Dependence Graphs (PDGs) in @cite_36 . Metric-based detection extracts a number of metrics from the source code fragments and then compares metrics rather than code or trees to identify similar code @cite_37 .
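As a minimal sketch of the token-based flavor of detection described above, the snippet below normalizes identifiers and literals in two Python fragments using the standard tokenize module and compares the resulting token sequences. The fragments and the crude position-wise similarity measure are assumptions for illustration, not the algorithms of the cited tools.

```python
import io
import tokenize

def norm_tokens(src):
    """Compiler-style token stream with identifiers and literals normalized."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.NAME:
            out.append("ID")
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append("LIT")
        elif tok.type == tokenize.OP:
            out.append(tok.string)
    return out

def similarity(a, b):
    ta, tb = norm_tokens(a), norm_tokens(b)
    same = sum(x == y for x, y in zip(ta, tb))
    return same / max(len(ta), len(tb))

frag1 = "s = 0\nfor p in prices:\n    s += p\n"
frag2 = "acc = 0\nfor c in costs:\n    acc += c\n"
print(similarity(frag1, frag2))   # 1.0: a Type-2 clone pair under this normalization
```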
{ "cite_N": [ "@cite_37", "@cite_36", "@cite_27", "@cite_16", "@cite_25" ], "mid": [ "2125260159", "2584966780", "2104609444", "2157532207" ], "abstract": [ "Several techniques have been developed for identifying similar code fragments in programs. These similar fragments, referred to as code clones, can be used to identify redundant code, locate bugs, or gain insight into program design. Existing scalable approaches to clone detection are limited to finding program fragments that are similar only in their contiguous syntax. Other, semantics-based approaches are more resilient to differences in syntax, such as reordered statements, related statements interleaved with other unrelated statements, or the use of semantically equivalent control structures. However, none of these techniques have scaled to real world code bases. These approaches capture semantic information from Program Dependence Graphs (PDGs), program representations that encode data and control dependencies between statements and predicates. Our definition of a code clone is also based on this representation: we consider program fragments with isomorphic PDGs to be clones. In this paper, we present the first scalable clone detection algorithm based on this definition of semantic clones. Our insight is the reduction of the difficult graph similarity problem to a simpler tree similarity problem by mapping carefully selected PDG subgraphs to their related structured syntax. We efficiently solve the tree similarity problem to create a scalable analysis. We have implemented this algorithm in a practical tool and performed evaluations on several million-line open source projects, including the Linux kernel. Compared with previous approaches, our tool locates significantly more clones, which are often more semantically interesting than simple copied and pasted code fragments.", "If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. In this paper, we present a machine learning framework to automatically detect clones in software, which is able to detect Types-3 and the most complicated kind of clones, Type-4 clones. Previously used traditional features are often weak in detecting the semantic clones The novel aspects of our approach are the extraction of features from abstract syntax trees (AST) and program dependency graphs (PDG), representation of a pair of code fragments as a vector and the use of classification algorithms. The key benefit of this approach is that our approach can find both syntactic and semantic clones extremely well. Our evaluation indicates that using our new AST and PDG features is a viable methodology, since they improve detecting clones on the IJaDataset 2.0.", "While finding clones in source code has drawn considerable attention, there has been only very little work in finding similar fragments in binary code and intermediate languages, such as Java bytecode. Some recent studies showed that it is possible to find distinct sets of clone pairs in bytecode representation of source code, which are not always detectable at source code-level. In this paper, we present a bytecode clone detection approach, called SeByte, which exploits the benefits of compilers (the bytecode representation) for detecting a specific type of semantic clones in Java bytecode. SeByte is a hybrid metric-based approach that takes advantage of both, Semantic Web technologies and Set theory. 
We use a two-step analysis process: (1) Pattern matching via Semantic Web querying and reasoning, and (2) Content matching, using Jaccard coefficient for set similarity measurement. Semantic Web-based pattern matching helps us to find method blocks which share similar patterns even in case of extreme dissimilarity (e.g., numerous repetitions or large gaps). Although it leads to high recall, it gives high false positive rate. We thus use the content matching (via Jaccard) to reduce false positive rate by focusing on content semantic resemblance. Our evaluation of four Java systems and five other tools shows that SeByte can detect a large number of semantic clones that are either not detected or supported by source code based clone detectors.", "Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Generally, similar code identification techniques work at varying levels of granularity. Fine-grained detection leverages tokens, statements, and lines as the basis for detection and comparison @cite_32 . Coarse-grained detection, on the other hand, uses functions, methods, classes, or program files as the basic units of detection @cite_40 . Naturally, the finer the granularity of the tool, the longer it takes to find clone candidates. Equally, the coarser the granularity of the tool, the faster the detection, albeit with fewer detected clones @cite_4 . Detection tools therefore have to make design trade-offs between accuracy and performance depending on the code base being examined.
{ "cite_N": [ "@cite_40", "@cite_4", "@cite_32" ], "mid": [ "2511803001", "2157532207", "2547865220", "2286236884" ], "abstract": [ "Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.", "Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.", "If two fragments of source code are identical to each other, they are called code clones. Code clones introduce difficulties in software maintenance and cause bug propagation. Coarse-grained clone detectors have higher precision than fine-grained, but fine-grained detectors have higher recall than coarse-grained. 
In this paper, we present a hybrid clone detection technique that first uses a coarse-grained technique to analyze clones effectively to improve precision. Subsequently, we use a fine-grained detector to obtain additional information about the clones and to improve recall. Our method detects Type-1 and Type-2 clones using hash values for blocks, and gapped code clones (Type-3) using block detection and subsequent comparison between them using Levenshtein distance and Cosine measures with varying thresholds.", "Despite a decade of active research, there has been a marked lack in clone detection techniques that scale to large repositories for detecting near-miss clones. In this paper, we present a token-based clone detector, SourcererCC, that can detect both exact and near-miss clones from large inter-project repositories using a standard workstation. It exploits an optimized inverted-index to quickly query the potential clones of a given code block. Filtering heuristics based on token ordering are used to significantly reduce the size of the index, the number of code-block comparisons needed to detect the clones, as well as the number of required token-comparisons needed to judge a potential clone. We evaluate the scalability, execution time, recall and precision of SourcererCC, and compare it to four publicly available and state-of-the-art tools. To measure recall, we use two recent benchmarks: (1) a big benchmark of real clones, BigCloneBench, and (2) a Mutation Injection-based framework of thousands of fine-grained artificial clones. We find SourcererCC has both high recall and precision, and is able to scale to a large inter-project repository (25K projects, 250MLOC) using a standard workstation." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
Another challenge in finding duplicate code is the performance of querying and retrieving possible matches from a large code base. Fingerprinting and hashing have been used to improve search efficiency @cite_44 . Hashing maps variable-size source code to a fixed-size fingerprint that can later be used to query and search for clones in linear time @cite_14 . However, a simple exact match does not work well for inexact clones. Others @cite_12 @cite_38 use hashing techniques to group similar source code fragments together, thus enhancing the accuracy and performance of clone detection techniques. However, this is less effective in detecting Type-4 clones, as hashing and fingerprints are based on the source code and not its semantics. Machine learning approaches have been proposed @cite_31 to link lexical-level features with syntactic-level features using semantic encoding techniques @cite_28 to improve Type-4 clone detection. However, in order for them to be effective, human experts need to analyze source code repositories to define the features that are most relevant for clone detection.
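The sketch below shows the basic fingerprint-index idea in a few lines: each (already normalized) token sequence is hashed to a fixed-size digest and stored in a dictionary, so candidate clones can be retrieved in constant time per query. The fragment identifiers and token vocabulary are hypothetical, and, as noted above, such an exact-hash index only finds fragments whose normalized form matches exactly.

```python
import hashlib

def fingerprint(norm_tokens):
    """Map a variable-length normalized token sequence to a fixed-size digest."""
    return hashlib.sha1(" ".join(norm_tokens).encode("utf-8")).hexdigest()

index = {}  # fingerprint -> list of fragment ids

def add_fragment(frag_id, norm_tokens):
    index.setdefault(fingerprint(norm_tokens), []).append(frag_id)

def query(norm_tokens):
    return index.get(fingerprint(norm_tokens), [])

add_fragment("a.py:10-14", ["ID", "=", "LIT", "for", "ID", "in", "ID", ":", "ID", "+=", "ID"])
add_fragment("b.py:40-44", ["ID", "=", "LIT", "for", "ID", "in", "ID", ":", "ID", "+=", "ID"])
print(query(["ID", "=", "LIT", "for", "ID", "in", "ID", ":", "ID", "+=", "ID"]))
# ['a.py:10-14', 'b.py:40-44'] -- both normalized fragments share one fingerprint
```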
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_28", "@cite_44", "@cite_31", "@cite_12" ], "mid": [ "2104609444", "2157532207", "2511803001", "2298313545" ], "abstract": [ "While finding clones in source code has drawn considerable attention, there has been only very little work in finding similar fragments in binary code and intermediate languages, such as Java bytecode. Some recent studies showed that it is possible to find distinct sets of clone pairs in bytecode representation of source code, which are not always detectable at source code-level. In this paper, we present a bytecode clone detection approach, called SeByte, which exploits the benefits of compilers (the bytecode representation) for detecting a specific type of semantic clones in Java bytecode. SeByte is a hybrid metric-based approach that takes advantage of both, Semantic Web technologies and Set theory. We use a two-step analysis process: (1) Pattern matching via Semantic Web querying and reasoning, and (2) Content matching, using Jaccard coefficient for set similarity measurement. Semantic Web-based pattern matching helps us to find method blocks which share similar patterns even in case of extreme dissimilarity (e.g., numerous repetitions or large gaps). Although it leads to high recall, it gives high false positive rate. We thus use the content matching (via Jaccard) to reduce false positive rate by focusing on content semantic resemblance. Our evaluation of four Java systems and five other tools shows that SeByte can detect a large number of semantic clones that are either not detected or supported by source code based clone detectors.", "Existing research suggests that a considerable fraction (5-10 ) of the source code of large scale computer programs is duplicate code (\"clones\"). Detection and removal of such clones promises decreased software maintenance costs of possibly the same magnitude. Previous work was limited to detection of either near misses differing only in single lexems, or near misses only between complete functions. The paper presents simple and practical methods for detecting exact and near miss clones over arbitrary program fragments in program source code by using abstract syntax trees. Previous work also did not suggest practical means for removing detected clones. Since our methods operate in terms of the program structure, clones could be removed by mechanical methods producing in-lined procedures or standard preprocessor macros. A tool using these techniques is applied to a C production software system of some 400 K source lines, and the results confirm detected levels of duplication found by previous work. The tool produces macro bodies needed for clone removal, and macro invocations to replace the clones. The tool uses a variation of the well known compiler method for detecting common sub expressions. This method determines exact tree matches; a number of adjustments are needed to detect equivalent statement sequences, commutative operands, and nearly exact matches. We additionally suggest that clone detection could also be useful in producing more structured code, and in reverse engineering to discover domain concepts and their implementations.", "Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. 
We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93 of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.", "Code duplication or copying a code fragment and then reuse by pasting with or without any modiflcations is a well known code smell in software maintenance. Several studies show that about 5 to 20 of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modiflcations. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research." ] }
1907.11817
2954117579
Source code similarity is increasingly used in application development to identify clones, isolate bugs, and find copyright violations. Similar code fragments can be very problematic because errors in the original code must be fixed in every copy. Other maintenance changes, such as extensions or patches, must be applied multiple times. Furthermore, the diversity of coding styles and the flexibility of modern languages make it difficult and cost-ineffective to manually inspect large code repositories. Therefore, detection is only feasible with automatic techniques. We present an efficient and scalable approach for similar code fragment identification based on fingerprinting of source code control flow graphs. The source code is processed to generate control flow graphs that are then hashed to create a unique fingerprint of the code, capturing semantic as well as syntactic similarity. The fingerprints can then be efficiently stored and retrieved to perform similarity search between code fragments. Experimental results from our prototype implementation support the validity of our approach and show its effectiveness and efficiency in comparison with other solutions.
One way to capture program semantics is through Control Flow Graphs (CFGs). CFGs are an intermediate code representation that describes, in graph notation, all paths that might be followed through a piece of code during its execution @cite_34 . In CFGs, vertices represent basic blocks and edges (i.e., arcs) represent execution flow. Since CFGs capture syntactic and semantic features of the code, they are better at resisting minor source code changes that do not affect the functionality of the program. For this reason, control flow graphs have been used in static analysis @cite_30 , fuzzing and test coverage tools @cite_8 , execution profiling @cite_33 @cite_24 , binary code analysis @cite_7 , malware analysis @cite_26 , and anomaly analysis @cite_41 .
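The following sketch, under the assumption that basic blocks have already been labeled with a coarse kind, hashes a toy CFG with networkx's Weisfeiler-Lehman graph hash. It illustrates the general idea of turning control-flow structure into a fixed-size fingerprint, not the specific fingerprinting scheme proposed in this paper.

```python
import networkx as nx

def build_cfg(edges, block_kinds):
    """Toy CFG: nodes are basic blocks labeled with a coarse block kind."""
    g = nx.DiGraph()
    for node, kind in block_kinds.items():
        g.add_node(node, kind=kind)
    g.add_edges_from(edges)
    return g

# two fragments whose variables differ but whose control flow is the same simple loop
kinds = {0: "entry", 1: "loop-head", 2: "loop-body", 3: "exit"}
edges = [(0, 1), (1, 2), (2, 1), (1, 3)]
g1 = build_cfg(edges, kinds)
g2 = build_cfg(edges, kinds)

h1 = nx.weisfeiler_lehman_graph_hash(g1, node_attr="kind")
h2 = nx.weisfeiler_lehman_graph_hash(g2, node_attr="kind")
print(h1 == h2)   # True: identical control-flow structure -> identical fingerprint
```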
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_7", "@cite_8", "@cite_41", "@cite_24", "@cite_34" ], "mid": [ "1600009974", "2125260159", "95993446", "50815725" ], "abstract": [ "Existing program analysis tools that implement abstraction rely on saturating procedures to compute over-approximations of fixpoints. As an alternative, we propose a new algorithm to compute an over-approximation of the set of reachable states of a program by replacing loops in the control flow graph by their abstract transformer. Our technique is able to generate diagnostic information in case of property violations, which we call leaping counterexamples. We have implemented this technique and report experimental results on a set of large ANSI-C programs using abstract domains that focus on properties related to string-buffers.", "Several techniques have been developed for identifying similar code fragments in programs. These similar fragments, referred to as code clones, can be used to identify redundant code, locate bugs, or gain insight into program design. Existing scalable approaches to clone detection are limited to finding program fragments that are similar only in their contiguous syntax. Other, semantics-based approaches are more resilient to differences in syntax, such as reordered statements, related statements interleaved with other unrelated statements, or the use of semantically equivalent control structures. However, none of these techniques have scaled to real world code bases. These approaches capture semantic information from Program Dependence Graphs (PDGs), program representations that encode data and control dependencies between statements and predicates. Our definition of a code clone is also based on this representation: we consider program fragments with isomorphic PDGs to be clones. In this paper, we present the first scalable clone detection algorithm based on this definition of semantic clones. Our insight is the reduction of the difficult graph similarity problem to a simpler tree similarity problem by mapping carefully selected PDG subgraphs to their related structured syntax. We efficiently solve the tree similarity problem to create a scalable analysis. We have implemented this algorithm in a practical tool and performed evaluations on several million-line open source projects, including the Linux kernel. Compared with previous approaches, our tool locates significantly more clones, which are often more semantically interesting than simple copied and pasted code fragments.", "characterized in terms of properties of Rule Graphs. We show that, unfortunately, also the RG is ambiguous with respect to the answer set semantics, while the EDG is isomorphic to the program it represents. We argue that the reason of this drawback of the RG as a software engineering tool relies in the absence of a distinction between the different kinds of connections between cycles. Finally, we suggest that properties of a program might be characterized(andchecked)intermsofadmissiblecolorings of the EDG.", "Verification using static analysis often hinges on precise numeric invariants. Numeric domains of infinite height can infer these invariants, but require widening narrowing which complicates the fixpoint computation and is often too imprecise. As a consequence, several strategies have been proposed to prevent a precision loss during widening or to narrow in a smarter way. 
Most of these strategies are difficult to retrofit into an existing analysis as they either require a pre-analysis, an on-the-fly modification of the CFG, or modifications to the fixpoint algorithm. We propose to encode widening and its various refinements from the literature as cofibered abstract domains that wrap standard numeric domains, thereby providing a modular way to add numeric analysis to any static analysis, that is, without modifying the fixpoint engine. Since these domains cannot make any assumptions about the structure of the program, our approach is suitable to the analysis of executables, where the (potentially irreducible) CFG is re-constructed on-the-fly. Moreover, our domain-based approach not only mirrors the precision of more intrusive approaches in the literature but also requires fewer iterations to find a fixpoint of loops than many heuristics that merely aim for precision." ] }
1907.11845
2964526184
Generative adversarial networks (GANs) have proven hugely successful in a variety of image processing applications. However, generative adversarial networks for handwriting remain relatively rare, largely because of the difficulty of handling sequential handwriting data with a Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) a discriminator consisting of an integrated CNN-Long Short-Term Memory (LSTM) based feature extractor with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) a recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectiveness of the new model. Moreover, compared with a standalone handwriting generator, the HWGANs synthesize more natural and realistic handwritten text.
The GAN proposed in 2019 @cite_1 aims at generating realistic images of handwritten text, which is naturally a fit for Optical Character Recognition (OCR). The authors use bidirectional LSTM recurrent layers to obtain an embedding of the word to be rendered, and then feed it to the generator network. They also modify the standard GAN by adding an auxiliary network for text recognition. However, although its generated images are realistic, this approach cannot directly synthesize handwritten text as digital ink, so an additional effective Ink Grab algorithm is further required for the conversion from image to digital stroke.
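A heavily simplified PyTorch sketch of the conditioning idea is given below: a bidirectional LSTM embeds the character sequence of the word, and a small generator maps noise plus that embedding to a word image. The layer sizes, image resolution, and the omission of the discriminator and of the auxiliary text-recognition network (trained with a CTC loss in the cited work) are all simplifying assumptions; this is not the cited architecture.

```python
import torch
import torch.nn as nn

class WordEmbedder(nn.Module):
    """Bidirectional LSTM that turns a character sequence into a fixed-size conditioning vector."""
    def __init__(self, vocab=30, emb=32, hidden=64):
        super().__init__()
        self.chars = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                       # (batch, word_len)
        out, _ = self.lstm(self.chars(char_ids))
        return out.mean(dim=1)                         # (batch, 2*hidden)

class Generator(nn.Module):
    """Maps noise plus word embedding to a small grayscale word image."""
    def __init__(self, noise=64, cond=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise + cond, 512), nn.ReLU(),
            nn.Linear(512, 32 * 96), nn.Tanh(),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1)).view(-1, 1, 32, 96)

emb = WordEmbedder()
gen = Generator()
word = torch.randint(0, 30, (4, 7))                    # 4 hypothetical 7-character words
img = gen(torch.randn(4, 64), emb(word))
print(img.shape)                                       # torch.Size([4, 1, 32, 96])
```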
{ "cite_N": [ "@cite_1" ], "mid": [ "2920553990", "2009444210", "2752225195", "2573871018" ], "abstract": [ "State-of-the-art offline handwriting text recognition systems tend to use neural networks and therefore require a large amount of annotated data to be trained. In order to partially satisfy this requirement, we propose a system based on Generative Adversarial Networks (GAN) to produce synthetic images of handwritten words. We use bidirectional LSTM recurrent layers to get an embedding of the word to be rendered, and we feed it to the generator network. We also modify the standard GAN by adding an auxiliary network for text recognition. The system is then trained with a balanced combination of an adversarial loss and a CTC loss. Together, these extensions to GAN enable to control the textual content of the generated word images. We obtain realistic images on both French and Arabic datasets, and we show that integrating these synthetic images into the existing training data of a text recognition system can slightly enhance its performance.", "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15 and for the second is 13.6 . These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.", "Optical Character Recognition (OCR) aims to recognize text in natural images. Inspired by a recently proposed model for general image classification, Recurrent Convolution Neural Network (RCNN), we propose a new architecture named Gated RCNN (GRCNN) for solving this problem. Its critical component, Gated Recurrent Convolution Layer (GRCL), is constructed by adding a gate to the Recurrent Convolution Layer (RCL), the critical component of RCNN. The gate controls the context modulation in RCL and balances the feed-forward information and the recurrent information. In addition, an efficient Bidirectional Long Short-Term Memory (BLSTM) is built for sequence modeling. The GRCNN is combined with BLSTM to recognize text in natural images. The entire GRCNN-BLSTM model can be trained end-to-end. Experiments show that the proposed model outperforms existing methods on several benchmark datasets including the IIIT-5K, Street View Text (SVT) and ICDAR.", "Recently, we propose deep neural network based hidden Markov models (DNN-HMMs) for offline handwritten Chinese text recognition. In this study, we design a novel writer code based adaptation on top of the DNN-HMM to further improve the accuracy via a customized recognizer. The writer adaptation is implemented by incorporating the new layers with the original input or hidden layers of the writer-independent DNN. 
These new layers are driven by the so-called writer code, which guides and adapts the DNN-based recognizer with the writer information. In the training stage, the writer-aware layers are jointly learned with the conventional DNN layers in an alternative manner. In the recognition stage, with the initial recognition results from the first-pass decoding with the writer-independent DNN, an unsupervised adaptation is performed to generate the writer code via the cross-entropy criterion for the subsequent second-pass decoding. The experiments on the most challenging task of ICDAR 2013 Chinese handwriting competition show that our proposed adaptation approach can achieve consistent and significant improvements of recognition accuracy over a highperformance writer-independent DNN-HMM based recognizer across all 60 writers, yielding a relative character error rate reduction of 23.62 in average." ] }
1907.11845
2964526184
Generative adversarial networks (GANs) have proven hugely successful in a variety of image processing applications. However, generative adversarial networks for handwriting remain relatively rare, largely because of the difficulty of handling sequential handwriting data with a Convolutional Neural Network (CNN). In this paper, we propose a handwriting generative adversarial network framework (HWGANs) for synthesizing handwritten stroke data. The main features of the new framework include: (i) a discriminator consisting of an integrated CNN-Long Short-Term Memory (LSTM) based feature extractor with Path Signature Features (PSF) as input and a Feedforward Neural Network (FNN) based binary classifier; (ii) a recurrent latent variable model as generator for synthesizing sequential handwritten data. The numerical experiments show the effectiveness of the new model. Moreover, compared with a standalone handwriting generator, the HWGANs synthesize more natural and realistic handwritten text.
Alex Graves proposed an RNN-based generator model to mimic handwriting data @cite_12 , referred as throughout the whole paper. At each timestamp, the model encodes the prefix of the sampled path to produce a set of parameters of a probability distribution over the next stroke point, and then samples the next stroke point from this distribution. There are two variants of the model, i.e., a handwriting predictor and a handwriting synthesizer, where the latter has the capability to synthesize handwritten strokes for a given text.
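To make the sampling step concrete, the toy numpy function below draws the next pen offset and an end-of-stroke bit from mixture-density parameters of the kind a Graves-style RNN emits at each timestep (component weights, bivariate Gaussian means, scales, correlations, and a Bernoulli probability). The parameter values are made up, and the RNN that would produce them is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_next_point(pi, mu, sigma, rho, e):
    """Sample (dx, dy, pen_up) from mixture-density parameters (Graves-style output layer)."""
    k = rng.choice(len(pi), p=pi)                    # pick a mixture component
    cov = np.array([[sigma[k, 0] ** 2, rho[k] * sigma[k, 0] * sigma[k, 1]],
                    [rho[k] * sigma[k, 0] * sigma[k, 1], sigma[k, 1] ** 2]])
    dx, dy = rng.multivariate_normal(mu[k], cov)     # bivariate Gaussian offset
    pen_up = rng.random() < e                        # Bernoulli end-of-stroke bit
    return dx, dy, int(pen_up)

# hypothetical parameters an RNN might emit for one timestep (2 mixture components)
pi    = np.array([0.7, 0.3])
mu    = np.array([[0.5, 0.0], [0.0, 0.8]])
sigma = np.array([[0.1, 0.1], [0.2, 0.2]])
rho   = np.array([0.0, 0.2])
print(sample_next_point(pi, mu, sigma, rho, e=0.05))
```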
{ "cite_N": [ "@cite_12" ], "mid": [ "2573871018", "2009444210", "2962712200", "2343436474" ], "abstract": [ "Recently, we propose deep neural network based hidden Markov models (DNN-HMMs) for offline handwritten Chinese text recognition. In this study, we design a novel writer code based adaptation on top of the DNN-HMM to further improve the accuracy via a customized recognizer. The writer adaptation is implemented by incorporating the new layers with the original input or hidden layers of the writer-independent DNN. These new layers are driven by the so-called writer code, which guides and adapts the DNN-based recognizer with the writer information. In the training stage, the writer-aware layers are jointly learned with the conventional DNN layers in an alternative manner. In the recognition stage, with the initial recognition results from the first-pass decoding with the writer-independent DNN, an unsupervised adaptation is performed to generate the writer code via the cross-entropy criterion for the subsequent second-pass decoding. The experiments on the most challenging task of ICDAR 2013 Chinese handwriting competition show that our proposed adaptation approach can achieve consistent and significant improvements of recognition accuracy over a highperformance writer-independent DNN-HMM based recognizer across all 60 writers, yielding a relative character error rate reduction of 23.62 in average.", "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15 and for the second is 13.6 . These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.", "This paper proposes an end-to-end framework, namely fully convolutional recurrent network (FCRN) for handwritten Chinese text recognition (HCTR). Unlike traditional methods that rely heavily on segmentation, our FCRN is trained with online text data directly and learns to associate the pen-tip trajectory with a sequence of characters. FCRN consists of four parts: a path-signature layer to extract signature features from the input pen-tip trajectory, a fully convolutional network to learn informative representation, a sequence modeling layer to make per-frame predictions on the input sequence and a transcription layer to translate the predictions into a label sequence. We also present a refined beam search method that efficiently integrates the language model to decode the FCRN and significantly improve the recognition results. 
We evaluate the performance of the proposed method on the test sets from the databases CASIA-OLHWDB and ICDAR 2013 Chinese handwriting recognition competition, and both achieve state-of-the-art performance with correct rates of 96.40 and 95.00 , respectively.", "Offline handwriting recognition systems require cropped text line images for both training and recognition. On the one hand, the annotation of position and transcript at line level is costly to obtain. On the other hand, automatic line segmentation algorithms are prone to errors, compromising the subsequent recognition. In this paper, we propose a modification of the popular and efficient multi-dimensional long short-term memory recurrent neural networks (MDLSTM-RNNs) to enable end-to-end processing of handwritten paragraphs. More particularly, we replace the collapse layer transforming the two-dimensional representation into a sequence of predictions by a recurrent version which can recognize one line at a time. In the proposed model, a neural network performs a kind of implicit line segmentation by computing attention weights on the image representation. The experiments on paragraphs of Rimes and IAM database yield results that are competitive with those of networks trained at line level, and constitute a significant step towards end-to-end transcription of full documents." ] }
1907.11836
2966726302
Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for fifth generation (5G) wireless cellular systems. The premise of these advantages is accurate downlink channel state information (CSI) fed back from the user equipment. However, conventional feedback methods have difficulties in reducing feedback overhead due to the significant number of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback has overcome many difficulties, yet it still falls short of reducing the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at the BS to recover the downlink CSI and UL-US by unfolding two iterations of the minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for network training, a subnet-by-subnet approach is exploited to facilitate parameter tuning and expedite the convergence rate. Compared with the standalone SC-based CSI scheme, our multi-task NN, trained at a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under varying SNR and PPC.
Without occupying any uplink bandwidth resources, @cite_31 and @cite_6 estimated downlink CSI from uplink CSI by using a DL approach. In @cite_31 , the core idea was that, since the same propagation environment is shared by the uplink and downlink channels, environment information extracted from the uplink channel response can be applied to the downlink channel. Similar to @cite_31 , an NN-based scheme for extrapolating downlink CSI from observed uplink CSI has been proposed in @cite_6 , where the underlying physical relation between the downlink and uplink frequency bands was exploited to construct the learning architecture. It should be mentioned that the method in @cite_31 usually needs to retrain the NN when the environment information changes significantly. For example, for a well-trained device, the environment information it has extracted in one city (e.g., the shapes of buildings, streets, and mountains, and the materials that objects are made of) would no longer be applicable in another. The method in @cite_6 will suffer poor CSI recovery performance when there is a wide frequency gap between the downlink and uplink bands.
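As a minimal sketch of the extrapolation idea (learning a mapping from observed uplink CSI to downlink CSI), the PyTorch snippet below trains a small fully connected network on synthetic, linearly correlated channel vectors. The antenna count, network sizes, and the stand-in data generation are assumptions; the cited works use real channel models and architectures that exploit the physical uplink/downlink relation.

```python
import torch
import torch.nn as nn

M = 32                                    # assumed number of BS antennas
net = nn.Sequential(                      # toy extrapolator: uplink CSI (re/im) -> downlink CSI (re/im)
    nn.Linear(2 * M, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2 * M),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# stand-in dataset: correlated uplink/downlink vectors (real data would come from a channel model or measurements)
ul = torch.randn(1024, 2 * M)
dl = ul @ torch.randn(2 * M, 2 * M) * 0.1 + 0.05 * torch.randn(1024, 2 * M)

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(net(ul), dl)           # regress downlink CSI from uplink CSI
    loss.backward()
    opt.step()
    print(epoch, loss.item())
```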
{ "cite_N": [ "@cite_31", "@cite_6" ], "mid": [ "2904192264", "2109711397", "1987954156", "2963145597" ], "abstract": [ "Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings.", "In closed-loop FDD MIMO system, downlink channel state information (DL-CSI) is usually feedback to base station in forms of codebook or CQI, both of which aim at lowering the feedback quantity at the cost of limited feedback precision and heavy processing complexity at mobile side. Meanwhile, the recently proposed direct channel feedback method incurs great system overhead due to its exclusive occupation of uplink bandwidth resources. We propose a low-cost feedback method for DL-CSI, which spreads unquantized and uncoded DL-CSI and superimposes it onto uplink user data sequences (UL-US). Exclusive occupation of system resources by DL-CSI can thus be avoided. Due to spreading, DL-CSI can be estimated accurately with little power allocation at the cost of some UL-US's SER performance", "In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.", "In frequency division duplex mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of a massive multiple-input multiple-output can be exhibited. However, such a transmission is hindered by excessive feedback overhead. 
In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords to CSI. We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. Even at excessively low compression regions where CS-based methods cannot work, CsiNet retains effective beamforming gain." ] }
1907.11836
2966726302
Massive multiple-input multiple-output (MIMO) with frequency division duplex (FDD) mode is a promising approach to increasing system capacity and link robustness for fifth generation (5G) wireless cellular systems. The premise of these advantages is accurate downlink channel state information (CSI) fed back from the user equipment. However, conventional feedback methods have difficulty reducing feedback overhead due to the large number of base station (BS) antennas in massive MIMO systems. Recently, deep learning (DL)-based CSI feedback has overcome many of these difficulties, yet it still falls short of reducing the occupation of uplink bandwidth resources. In this paper, to solve this issue, we combine DL and superimposed coding (SC) for CSI feedback, in which the downlink CSI is spread and then superimposed on uplink user data sequences (UL-US) toward the BS. Then, a multi-task neural network (NN) architecture is proposed at the BS to recover the downlink CSI and UL-US by unfolding two iterations of the minimum mean-squared error (MMSE) criterion-based interference reduction. In addition, for network training, a subnet-by-subnet approach is exploited to facilitate parameter tuning and expedite the convergence rate. Compared with the standalone SC-based CSI scheme, our multi-task NN, trained at a specific signal-to-noise ratio (SNR) and power proportional coefficient (PPC), consistently improves the estimation of downlink CSI with similar or better UL-US detection under varying SNR and PPC.
As a whole, the DL-based and SC-based CSI feedback methods still face major challenges, which can be summarized as follows. Focused on feedback reduction, the DL-based CSI feedback methods, e.g., the methods in @cite_34 -- @cite_10 , inevitably occupy uplink bandwidth resources. Although the occupation of uplink bandwidth resources can be avoided, the methods that estimate downlink CSI from uplink CSI in @cite_31 and @cite_6 are of limited use in mobile environments or when the frequency-band interval is wide. The SC-based CSI feedback @cite_21 can also avoid occupying uplink bandwidth resources, but it faces the major challenge of canceling the interference between the downlink CSI and UL-US, for which previous works lack good solutions.
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_31", "@cite_34", "@cite_10" ], "mid": [ "2109711397", "2904192264", "1987954156", "2963145597" ], "abstract": [ "In closed-loop FDD MIMO system, downlink channel state information (DL-CSI) is usually feedback to base station in forms of codebook or CQI, both of which aim at lowering the feedback quantity at the cost of limited feedback precision and heavy processing complexity at mobile side. Meanwhile, the recently proposed direct channel feedback method incurs great system overhead due to its exclusive occupation of uplink bandwidth resources. We propose a low-cost feedback method for DL-CSI, which spreads unquantized and uncoded DL-CSI and superimposes it onto uplink user data sequences (UL-US). Exclusive occupation of system resources by DL-CSI can thus be avoided. Due to spreading, DL-CSI can be estimated accurately with little power allocation at the cost of some UL-US's SER performance", "Knowledge of the channel state information (CSI) at the transmitter side is one of the primary sources of information that can be used for efficient allocation of wireless resources. Obtaining Down-Link (DL) CSI in FDD systems from Up-Link (UL) CSI is not as straightforward as TDD systems, and so usually users feedback the DL-CSI to the transmitter. To remove the need for feedback (and thus having less signaling overhead), several methods have been studied to estimate DL-CSI from UL-CSI. In this paper, we propose a scheme to infer DL-CSI by observing UL-CSI in which we use two recent deep neural network structures: a) Convolutional Neural network and b) Generative Adversarial Networks. The proposed deep network structures are first learning a latent model of the environment from the training data. Then, the resulted latent model is used to predict the DL-CSI from the UL-CSI. We have simulated the proposed scheme and evaluated its performance in a few network settings.", "In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.", "In frequency division duplex mode, the downlink channel state information (CSI) should be sent to the base station through feedback links so that the potential gains of a massive multiple-input multiple-output can be exhibited. However, such a transmission is hindered by excessive feedback overhead. 
In this letter, we use deep learning technology to develop CsiNet, a novel CSI sensing and recovery mechanism that learns to effectively use channel structure from training samples. CsiNet learns a transformation from CSI to a near-optimal number of representations (or codewords) and an inverse transformation from codewords to CSI. We perform experiments to demonstrate that CsiNet can recover CSI with significantly improved reconstruction quality compared with existing compressive sensing (CS)-based methods. Even at excessively low compression regions where CS-based methods cannot work, CsiNet retains effective beamforming gain." ] }
1907.11770
2965569712
In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents.
Error analysis has played an important role in computer vision research such as object detection @cite_41 and VQA @cite_14 . Although many learning-based methods have recently been proposed for navigation @cite_11 @cite_3 @cite_2 @cite_29 @cite_33 , there has been little work focused on error analysis of state-of-the-art methods. The closest to ours are the concurrent works by Mishkin et al. @cite_34 and Savva et al. @cite_13 , who benchmarked learned agents against classical ones in indoor simulators. Our work is similar in that it compares learned and classical agents, but differs in that we propose new metrics to diagnose various aspects of navigation capability, including collision avoidance, memory management, and exploitation of available information.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_41", "@cite_29", "@cite_3", "@cite_2", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2110405746", "2130155248", "2963272646", "2949153416" ], "abstract": [ "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100 precision with recall rates of up to 60 .", "This paper describes a technique for multi-agent exploration of an unknown environment, that improves the quality of the map by reducing the inaccuracies that occur over time from dead reckoning errors. We present an algorithmic solution, simulation results, as well as a cost analysis and experimental data. The approach is based on using a pair of robots that observe one another’s behaviour, thus greatly reducing odometry errors. We assume the robots can both directly sense nearby obstacles and see one another. We have implemented both these capabilities with actual robots in our lab. By exploiting the ability of the robots to see one another, we can detect opaque obstacles in the environment independent of their surface reflectance properties. 1", "In this paper, we present a novel, general, and efficient architecture for addressing computer vision problems that are approached from an 'Analysis by Synthesis' standpoint. Analysis by synthesis involves the minimization of reconstruction error, which is typically a non-convex function of the latent target variables. State-of-the-art methods adopt a hybrid scheme where discriminatively trained predictors like Random Forests or Convolutional Neural Networks are used to initialize local search algorithms. While these hybrid methods have been shown to produce promising results, they often get stuck in local optima. Our method goes beyond the conventional hybrid architecture by not only proposing multiple accurate initial solutions but by also defining a navigational structure over the solution space that can be used for extremely efficient gradient-free local search. 
We demonstrate the efficacy and generalizability of our approach on tasks as diverse as Hand Pose Estimation, RGB Camera Relocalization, and Image Retrieval.", "As humans we possess an intuitive ability for navigation which we master through years of practice; however existing approaches to model this trait for diverse tasks including monitoring pedestrian flow and detecting abnormal events have been limited by using a variety of hand-crafted features. Recent research in the area of deep-learning has demonstrated the power of learning features directly from the data; and related research in recurrent neural networks has shown exemplary results in sequence-to-sequence problems such as neural machine translation and neural image caption generation. Motivated by these approaches, we propose a novel method to predict the future motion of a pedestrian given a short history of their, and their neighbours, past behaviour. The novelty of the proposed method is the combined attention model which utilises both \"soft attention\" as well as \"hard-wired\" attention in order to map the trajectory information from the local neighbourhood to the future positions of the pedestrian of interest. We illustrate how a simple approximation of attention weights (i.e hard-wired) can be merged together with soft attention weights in order to make our model applicable for challenging real world scenarios with hundreds of neighbours. The navigational capability of the proposed method is tested on two challenging publicly available surveillance databases where our model outperforms the current-state-of-the-art methods. Additionally, we illustrate how the proposed architecture can be directly applied for the task of abnormal event detection without handcrafting the features." ] }
1907.11770
2965569712
In this paper we compare learning-based methods and classical methods for navigation in virtual environments. We construct classical navigation agents and demonstrate that they outperform state-of-the-art learning-based agents on two standard benchmarks: MINOS and Stanford Large-Scale 3D Indoor Spaces. We perform detailed analysis to study the strengths and weaknesses of learned agents and classical agents, as well as how characteristics of the virtual environment impact navigation performance. Our results show that learned agents have inferior collision avoidance and memory management, but are superior in handling ambiguity and noise. These results can inform future design of navigation agents.
Another line of research follows a more modular approach by developing learning-based navigation modules which can be integrated into a larger network. For example, localization can be formulated as a 3-DOF or 6-DOF camera pose estimation problem and performed by a deep network @cite_22 @cite_7 @cite_21 . Learning-based approaches have also been studied in the context of SLAM @cite_36 @cite_18 @cite_25 . Most relevant to our work, Tamar et al. propose the Value Iteration Network (VIN) @cite_43 as a differentiable planner, and Gupta et al. integrate VIN with a differentiable mapper and propose CMP in @cite_28 , an end-to-end mapper-planner which we analyze as the state-of-the-art method with specially designed components.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_36", "@cite_28", "@cite_21", "@cite_43", "@cite_25" ], "mid": [ "2909119029", "2909955272", "2800595980", "2948138929" ], "abstract": [ "This paper presents a deep network based unsupervised visual odometry system for 6-DoF camera pose estimation and finding dense depth map for its monocular view. The proposed network is trained using unlabeled binocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. This is achieved by introducing a novel objective function and training the network using temporally alligned sequences of monocular images. The objective function is based on the Charbonnier penalty applied to spatial and bi-directional temporal reconstruction losses. The overall novelty of the approach lies in the fact that the proposed deep framework combines a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6-DoF camera pose and superior depth map. According to our knowledge, such a framework with complete unsupervised end-to-end learning has not been tried so far, making it a novel contribution in the field. The effectiveness of the approach is demonstrated through performance comparison with the state-of-the-art methods on KITTI driving dataset.", "As simultaneous localization and mapping (SLAM) techniques have flourished with the advent of 3D Light Detection and Ranging (LiDAR) sensors, accurate 3D maps are readily available. Many researchers turn their attention to localization in a previously acquired 3D map. In this paper, we propose a novel and lightweight camera-only visual positioning algorithm that involves localization within prior 3D LiDAR maps. We aim to achieve the consumer level global positioning system (GPS) accuracy using vision within the urban environment, where GPS signal is unreliable. Via exploiting a stereo camera, depth from the stereo disparity map is matched with 3D LiDAR maps. A full six degree of freedom (DOF) camera pose is estimated via minimizing depth residual. Powered by visual tracking that provides a good initial guess for the localization, the proposed depth residual is successfully applied for camera pose estimation. Our method runs online, as the average localization error is comparable to ones resulting from state-of-the-art approaches. We validate the proposed method as a stand-alone localizer using KITTI dataset and as a module in the SLAM framework using our own dataset.", "In this paper, we propose a novel robocentric formulation of the visual-inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual-inertial odometry (R-VIO) algorithm for consistent motion tracking even in challenging environments using only a monocular camera and a 6-axis IMU. The key idea is to deliberately reformulate the VINS with respect to a moving local frame, rather than a fixed global frame of reference as in the standard world-centric VINS, in order to obtain relative motion estimates of higher accuracy for updating global poses. As an immediate advantage of this robocentric formulation, the proposed R-VIO can start from an arbitrary pose, without the need to align the initial orientation with the global gravitational direction. 
More importantly, we analytically show that the linearized robocentric VINS does not undergo the observability mismatch issue as in the standard world-centric counterpart which was identified in the literature as the main cause of estimation inconsistency. Additionally, we investigate in-depth the special motions that degrade the performance in the world-centric formulation and show that such degenerate cases can be easily compensated in the proposed robocentric formulation, without resorting to additional sensors as in the world-centric formulation, thus leading to better robustness. The proposed R-VIO algorithm has been extensively tested through both Monte Carlo simulations and real-world experiments with different sensor platforms navigating in different environments, and shown to achieve better (or competitive at least) performance than the state-of-the-art VINS, in terms of consistency, accuracy and efficiency.", "We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains." ] }
1907.11653
2965308893
The prediction of electrical power in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power output can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power output as a function of these environmental conditions in order to maximize the profit. The research community has solved this problem by applying machine learning techniques and has managed to reduce the computational and time costs in comparison with the traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective, in which data is assumed to be at rest and models do not continuously integrate new information into already constructed models. We present an approach closer to the Big Data and Internet of Things paradigms, in which data arrives continuously and models learn incrementally, achieving significant enhancements in terms of data processing (time, memory, and computational costs) and obtaining competitive performance. This work compares and examines the hourly electrical power prediction of several streaming regressors, and discusses the best technique, in terms of processing time and performance, to be applied in this streaming scenario.
Regarding the SL topic, many studies have focused on it due to its aforementioned relevance, such as @cite_3 @cite_57 @cite_36 @cite_44 @cite_25 , and more recently @cite_28 @cite_51 @cite_30 @cite_5 . The application of regression techniques to SL has recently been addressed in @cite_16 , where the authors cover the most important online regression methods. The work @cite_37 deals with ensemble learning from data streams, focusing specifically on regression ensembles. The authors of @cite_4 propose several criteria for efficient sample selection in SL regression problems within an online active learning context. In general, regression tasks in SL have not received as much attention as classification tasks, and this was spotlighted in @cite_21 , where researchers carried out a study and an empirical evaluation of a set of online algorithms for regression, including the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_4", "@cite_36", "@cite_28", "@cite_21", "@cite_3", "@cite_57", "@cite_44", "@cite_5", "@cite_16", "@cite_51", "@cite_25" ], "mid": [ "2073427650", "2491694318", "2142057089", "2952662639" ], "abstract": [ "Abstract The emergence of ubiquitous sources of streaming data has given rise to the popularity of algorithms for online machine learning. In that context, Hoeffding trees represent the state-of-the-art algorithms for online classification. Their popularity stems in large part from their ability to process large quantities of data with a speed that goes beyond the processing power of any other streaming or batch learning algorithm. As a consequence, Hoeffding trees have often been used as base models of many ensemble learning algorithms for online classification. However, despite the existence of many algorithms for online classification, ensemble learning algorithms for online regression do not exist. In particular, the field of online any-time regression analysis seems to have experienced a serious lack of attention. In this paper, we address this issue through a study and an empirical evaluation of a set of online algorithms for regression, which includes the baseline Hoeffding-based regression trees, online option trees, and an online least mean squares filter. We also design, implement and evaluate two novel ensemble learning methods for online regression: online bagging with Hoeffding-based model trees, and an online RandomForest method in which we have used a randomized version of the online model tree learning algorithm as a basic building block. Within the study presented in this paper, we evaluate the proposed algorithms along several dimensions: predictive accuracy and quality of models, time and memory requirements, bias–variance and bias–variance–covariance decomposition of the error, and responsiveness to concept drift.", "We study resource-limited online learning, motivated by the problem of conditional-branch outcome prediction in computer architecture. In particular, we consider (parallel) time and space-efficient ensemble learners for online settings, empirically demonstrating benefits similar to those shown previously for offline ensembles. Our learning algorithms are inspired by the previously published “boosting by filtering” framework as well as the offline Arc-x4 boosting-style algorithm. We train ensembles of online decision trees using a novel variant of the ID4 online decision-tree algorithm as the base learner, and show empirical results for both boosting and bagging-style online ensemble methods. Our results evaluate these methods on both our branch prediction domain and online variants of three familiar machine-learning benchmarks. Our data justifies three key claims. First, we show empirically that our extensions to ID4 significantly improve performance for single trees and additionally are critical to achieving performance gains in tree ensembles. Second, our results indicate significant improvements in predictive accuracy with ensemble size for the boosting-style algorithm. The bagging algorithms we tried showed poor performance relative to the boosting-style algorithm (but still improve upon individual base learners). Third, we show that ensembles of small trees are often able to outperform large single trees with the same number of nodes (and similarly outperform smaller ensembles of larger trees that use the same total number of nodes). 
This makes online boosting particularly useful in domains such as branch prediction with tight space restrictions (i.e., the available real-estate on a microprocessor chip).", "Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split.", "In this paper, we first provide a new perspective to divide existing high performance object detection methods into direct and indirect regressions. Direct regression performs boundary regression by predicting the offsets from a given point, while indirect regression predicts the offsets from some bounding box proposals. Then we analyze the drawbacks of the indirect regression, which the recent state-of-the-art detection structures like Faster-RCNN and SSD follows, for multi-oriented scene text detection, and point out the potential superiority of direct regression. To verify this point of view, we propose a deep direct regression based method for multi-oriented scene text detection. Our detection framework is simple and effective with a fully convolutional network and one-step post processing. The fully convolutional network is optimized in an end-to-end way and has bi-task outputs where one is pixel-wise classification between text and non-text, and the other is direct regression to determine the vertex coordinates of quadrilateral text boundaries. The proposed method is particularly beneficial for localizing incidental scene texts. On the ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure of 81 , which is a new state-of-the-art and significantly outperforms previous approaches. On other standard datasets with focused scene texts, our method also reaches the state-of-the-art performance." ] }
1907.11752
2965643485
Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present, thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem.
A previous attempt to formalize Decision Theory in the presence of Causal Information is given in @cite_53 , @cite_70 . According to this formulation, a decision maker must choose whatever action is most likely to (causally) produce the desired outcomes while keeping any beliefs about causal relations fixed ( @cite_4 ). This is stated by the Stalnaker ( @cite_59 ) equation where @math is to be read as @math @math ( @cite_41 , @cite_11 ). Lewis' and Joyce's work captured the intuition that causal relations may be used to control the environment and to predict what is caused by the actions of a decision maker. In Section we refine the @math operator by giving an explicit way of calculating the probability of causing an outcome by doing a certain action, in terms of Pearl's do-calculus.
{ "cite_N": [ "@cite_4", "@cite_70", "@cite_41", "@cite_53", "@cite_59", "@cite_11" ], "mid": [ "1480413091", "2542071751", "1850984366", "1961009203" ], "abstract": [ "We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.", "Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are @math observed contexts and @math arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality @math ( @math ). The arm choice and the latent confounder causally determines the reward while the observed context is correlated with the confounder. Under this model, the @math mean reward matrix @math (for each context in @math and each arm in @math ) factorizes into non-negative factors @math ( @math ) and @math ( @math ). This insight enables us to propose an @math -greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of @math at time @math , as compared to @math for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of @math . These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real world data-sets.", "The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X,Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: methods based on Additive Noise Models (ANMs) and Information Geometric Causal Inference (IGCI). We present the benchmark CAUSEEFFECTPAIRS that consists of data for 100 different causee ffect pairs selected from 37 data sets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.) 
and motivate our decisions regarding the \"ground truth\" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the method based on Additive Noise Models that has originally been proposed by (2009), which obtains an accuracy of 63 ± 10 and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method.", "Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provide some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining casual relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making." ] }
1907.11752
2965643485
Decision making under uncertain conditions has been well studied when uncertainty can only be considered at the associative level of information. The classical Theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rationally making choices using associative information. We provide here a previous result from Pearl and show that it can be considered as a causal version of the von Neumann-Morgenstern Theorem; furthermore, we consider the case when the true causal mechanism that controls the environment is unknown to the decision maker and propose a causal version of the Savage Theorem. As applications, we argue how previous optimal action learning methods for causal environments fit within the Causal Savage Theorem we present, thus showing the utility of our result in the justification and design of learning algorithms; furthermore, we define a Causal Nash Equilibrium for a strategic game in a causal environment in terms of the preferences induced by our Causal Decision Making Theorem.
@cite_43 provides a framework for defining the notions of cause and effect in terms of decision-theoretic concepts, such as states and outcomes, and gives a theoretical basis for the graphical description of causes and effects, such as causal influence diagrams ( @cite_54 ). Heckerman gave an elegant definition of causality, but did not address how to actually make choices using causal information.
{ "cite_N": [ "@cite_43", "@cite_54" ], "mid": [ "1480413091", "1850984366", "2564513914", "2143891888" ], "abstract": [ "We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.", "The discovery of causal relationships from purely observational data is a fundamental problem in science. The most elementary form of such a causal discovery problem is to decide whether X causes Y or, alternatively, Y causes X, given joint observations of two variables X,Y. An example is to decide whether altitude causes temperature, or vice versa, given only joint measurements of both variables. Even under the simplifying assumptions of no confounding, no feedback loops, and no selection bias, such bivariate causal discovery problems are challenging. Nevertheless, several approaches for addressing those problems have been proposed in recent years. We review two families of such methods: methods based on Additive Noise Models (ANMs) and Information Geometric Causal Inference (IGCI). We present the benchmark CAUSEEFFECTPAIRS that consists of data for 100 different causee ffect pairs selected from 37 data sets from various domains (e.g., meteorology, biology, medicine, engineering, economy, etc.) and motivate our decisions regarding the \"ground truth\" causal directions of all pairs. We evaluate the performance of several bivariate causal discovery methods on these real-world benchmark data and in addition on artificially simulated data. Our empirical results on real-world data indicate that certain methods are indeed able to distinguish cause from effect using only purely observational data, although more benchmark data would be needed to obtain statistically significant conclusions. One of the best performing methods overall is the method based on Additive Noise Models that has originally been proposed by (2009), which obtains an accuracy of 63 ± 10 and an AUC of 0.74 ± 0.05 on the real-world benchmark. As the main theoretical contribution of this work we prove the consistency of that method.", "One of the key uses of causes is to explain why things happen. Explanations of specific events, like an individual's heart attack on Monday afternoon or a particular car accident, help assign responsibility and inform our future decisions. Computational methods for causal inference make use of the vast amounts of data collected by individuals to better understand their behavior and improve their health. However, most methods for explanation of specific events have provided theoretical approaches with limited applicability. In contrast we make two main contributions: an algorithm for explanation that calculates the strength of token causes, and an evaluation based on simulated data that enables objective comparison against prior methods and ground truth. 
We show that the approach finds the correct relationships in classic test cases (causal chains, common cause, and backup causation) and in a realistic scenario (explaining hyperglycemic episodes in a simulation of type 1 diabetes).", "1. Introduction to probabilities, graphs, and causal models 2. A theory of inferred causation 3. Causal diagrams and the identification of causal effects 4. Actions, plans, and direct effects 5. Causality and structural models in the social sciences 6. Simpson's paradox, confounding, and collapsibility 7. Structural and counterfactual models 8. Imperfect experiments: bounds and counterfactuals 9. Probability of causation: interpretation and identification Epilogue: the art and science of cause and effect." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Safe Reinforcement Learning tries to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes @cite_30 . Roughly, there are two ways of doing safe RL: some methods adapt the optimality criterion, while others adapt the exploration mechanism. Our work uses continuous action guidance from lookahead search with MCTS for better exploration.
{ "cite_N": [ "@cite_30" ], "mid": [ "1845972764", "2053572490", "1840625103", "2964340170" ], "abstract": [ "Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and or respect safety constraints during the learning and or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning.", "Reinforcement learning for robotic applications faces the challenge of constraint satisfaction, which currently impedes its application to safety critical systems. Recent approaches successfully introduce safety based on reachability analysis, determining a safe region of the state space where the system can operate. However, overly constraining the freedom of the system can negatively affect performance, while attempting to learn less conservative safety constraints might fail to preserve safety if the learned constraints are inaccurate. We propose a novel method that uses a principled approach to learn the system's unknown dynamics based on a Gaussian process model and iteratively approximates the maximal safe set. A modified control strategy based on real-time model validation preserves safety under weaker conditions than current approaches. Our framework further incorporates safety into the reinforcement learning performance metric, allowing a better integration of safety and learning. We demonstrate our algorithm on simulations of a cart-pole system and on an experimental quadrotor application and show how our proposed scheme succeeds in preserving safety where current approaches fail to avoid an unsafe condition.", "In this paper, we consider the important problem of safe exploration in reinforcement learning. While reinforcement learning is well-suited to domains with complex transition dynamics and high-dimensional state-action spaces, an additional challenge is posed by the need for safe and efficient exploration. Traditional exploration techniques are not particularly useful for solving dangerous tasks, where the trial and error process may lead to the selection of actions whose execution in some states may result in damage to the learning system (or any other system). Consequently, when an agent begins an interaction with a dangerous and high-dimensional state-action space, an important question arises; namely, that of how to avoid (or at least minimize) damage caused by the exploration of the state-action space. We introduce the PI-SRL algorithm which safely improves suboptimal albeit robust behaviors for continuous state and action control tasks and which efficiently learns from the experience gained from the environment. We evaluate the proposed method in four complex tasks: automatic car parking, pole-balancing, helicopter hovering, and business management.", "In many real-world reinforcement learning (RL) problems, besides optimizing the main objective function, an agent must concurrently avoid violating a number of constraints. 
In particular, besides optimizing performance it is crucial to guarantee the of an agent during training as well as deployment (e.g. a robot should avoid taking actions - exploratory or not - which irrevocably harm its hardware). To incorporate safety in RL, we derive algorithms under the framework of Constrained Markov decision problems (CMDPs), an extension of the standard Markov decision problems (MDPs) augmented with constraints on expected cumulative costs. Our approach hinges on a novel method. We define and present a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local, linear constraints. Leveraging these theoretical underpinnings, we show how to use the Lyapunov approach to systematically transform dynamic programming (DP) and RL algorithms into their safe counterparts. To illustrate their effectiveness, we evaluate these algorithms in several CMDP planning and decision-making tasks on a safety benchmark domain. Our results show that our proposed method significantly outperforms existing baselines in balancing constraint satisfaction and performance." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently-proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Approaches such as DAgger @cite_31 formulate imitation learning as a supervised problem where the aim is to match the demonstrator's performance. However, the performance of agents using these methods is upper-bounded by that of the demonstrator. Recent works such as Expert Iteration @cite_2 and AlphaGo Zero @cite_4 extend imitation learning to the RL setting, where the demonstrator is also continuously improved during training. There has been a growing body of work on imitation learning where demonstrators' data is used to speed up policy learning in RL @cite_10 .
{ "cite_N": [ "@cite_10", "@cite_31", "@cite_4", "@cite_2" ], "mid": [ "2804930149", "2767506186", "2735089625", "2802726207" ], "abstract": [ "Imitation learning (IL) consists of a set of tools that leverage expert demonstrations to quickly learn policies. However, if the expert is suboptimal, IL can yield policies with inferior performance compared to reinforcement learning (RL). In this paper, we aim to provide an algorithm that combines the best aspects of RL and IL. We accomplish this by formulating several popular RL and IL algorithms in a common mirror descent framework, showing that these algorithms can be viewed as a variation on a single approach. We then propose LOKI, a strategy for policy learning that first performs a small but random number of IL iterations before switching to a policy gradient RL method. We show that if the switching time is properly randomized, LOKI can learn to outperform a suboptimal expert and converge faster than running policy gradient from scratch. Finally, we evaluate the performance of LOKI experimentally in several simulated environments.", "Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep-Q-networks and Asynchronous actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.", "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. 
In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.", "Humans often learn how to perform tasks via imitation: they observe others perform a task, and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a well-studied problem in general, there are two particular aspects that have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model which is then utilized to learn a particular task by observing an expert perform that task without the knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art, generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
( hester2017deep ) used demonstrator data by combining a supervised learning loss with the Q-learning loss within the DQN algorithm for pre-training, and showed that their method achieves good results on Atari games using only a few minutes of game-play data. ( kim2013learning ) proposed a learning-from-demonstration approach where limited demonstrator data is used to impose constraints on the policy iteration phase. Another recent work @cite_1 used planner demonstrations to learn a value function, which was then further refined with RL and a short-horizon planner for robotic manipulation tasks.
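For concreteness, the following is a minimal, hypothetical sketch of the first idea above: a one-step TD (Q-learning) loss combined with a supervised large-margin loss on demonstrator transitions, in the spirit of DQfD. It assumes PyTorch; the function names, the margin value, and the loss weight are illustrative assumptions rather than the cited authors' implementation.

# Minimal, hypothetical sketch (PyTorch assumed): a one-step TD loss plus a
# supervised large-margin loss on demonstrator transitions. Function names,
# the margin value, and the loss weight are illustrative assumptions.
import torch
import torch.nn as nn

def combined_loss(q_net, target_net, batch, demo_mask,
                  gamma=0.99, margin=0.8, lam=1.0):
    # batch: (states, actions, rewards, next_states, dones); demo_mask is a
    # float tensor marking which transitions came from the demonstrator.
    states, actions, rewards, next_states, dones = batch
    q_values = q_net(states)                                   # [B, num_actions]
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Q-learning (1-step TD) loss.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    td_loss = nn.functional.mse_loss(q_taken, td_target)

    # Large-margin supervised loss: max_a [Q(s,a) + l(a, a_E)] - Q(s, a_E),
    # where the margin l is zero for the demonstrated action a_E.
    margins = torch.full_like(q_values, margin)
    margins.scatter_(1, actions.unsqueeze(1), 0.0)
    sup = (q_values + margins).max(dim=1).values - q_taken
    sup_loss = (sup * demo_mask).sum() / demo_mask.sum().clamp(min=1)

    return td_loss + lam * sup_loss

In practice such methods typically keep demonstrator transitions in a (often prioritized) replay buffer and apply the supervised term only to them, which is what demo_mask encodes here.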
{ "cite_N": [ "@cite_1" ], "mid": [ "2788862220", "2151210636", "2726005894", "2909986471" ], "abstract": [ "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection. The Arcade Learning Environment (ALE) provides a set of Atari games that represent a useful benchmark set of such applications. A recent breakthrough in combining model-free reinforcement learning with deep learning, called DQN, achieves the best real-time agents thus far. Planning-based approaches achieve far higher scores than the best model-free approaches, but they exploit information that is not available to human players, and they are orders of magnitude slower than needed for real-time play. Our main goal in this work is to build a better real-time Atari game playing agent than DQN. The central idea is to use the slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play. We proposed new agents based on this idea and show that they outperform DQN.", "This research describes a study into the ability of a state of the art reinforcement learning algorithm to learn to perform multiple tasks. We demonstrate that the limitation of learning to performing two tasks can be mitigated with a competitive training method. We show that this approach results in improved generalization of the system when performing unforeseen tasks. The learning agent assessed is an altered version of the DeepMind deep Q–learner network (DQN), which has been demonstrated to outperform human players for a number of Atari 2600 games. 
The key findings of this paper is that there were significant degradations in performance when learning more than one game, and how this varies depends on both similarity and the comparative complexity of the two games.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels." ] }
1907.11703
2966816175
Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
Previous work @cite_19 combined planning and RL in such a way that RL explores only over the action space filtered down by the planner, outperforming either the planner or RL used alone. Other work @cite_23 employed MCTS as a high-level planner that is fed a set of low-level, offline-learned DRL policies and refines them for safer execution within a simulated autonomous driving domain. A recent work by ( vodopivec2017monte ) unified RL, planning, and search.
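As a rough illustration of the first idea, exploring only over a planner-filtered action space, the sketch below uses assumed interfaces (planner.safe_actions is hypothetical) and is not the cited method itself.

# Rough sketch under assumed interfaces (planner.safe_actions is hypothetical):
# epsilon-greedy exploration restricted to the planner-approved action subset.
import random

def filtered_epsilon_greedy(q_values, state, planner, epsilon=0.1):
    # q_values: dict mapping action -> Q(s, a) for the current state.
    allowed = planner.safe_actions(state) or list(q_values.keys())  # fall back if the filter is empty
    if random.random() < epsilon:
        return random.choice(list(allowed))
    return max(allowed, key=lambda a: q_values[a])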
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2837605352", "2778917778", "2514762535", "2962917939" ], "abstract": [ "Autonomous urban driving navigation with complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. The traditional modular pipeline heavily relies on hand-designed rules and the pre-processing perception system while the supervised learning-based models are limited by the accessibility of extensive human experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach which successfully makes the driving agent achieve higher success rates based on only vision inputs in a high-fidelity car simulator. To alleviate the low exploration efficiency for large continuous action space that often prohibits the use of classical RL on challenging real tasks, our CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon Deep Deterministic Policy Gradient (DDPG). Moreover, we propose to specialize adaptive policies and steering-angle reward designs for different control signals (i.e. follow, straight, turn right, turn left) based on the shared representations to improve the model capability in tackling with diverse cases. Extensive experiments on CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in terms of the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of the learned driving policy by reinforcement learning in the high-fidelity simulator, which performs better than supervised imitation learning.", "Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved wide-spread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.", "Human drivers use nonverbal communication and anticipation of other drivers' actions to master conflicts occurring in everyday driving situations. Without a high penetration of vehicle-to-vehicle communication an autonomous vehicle has to have the possibility to understand intentions of others and share own intentions with the surrounding traffic participants. This paper proposes a cooperative combinatorial motion planning algorithm without the need for inter vehicle communication based on Monte Carlo Tree Search (MCTS). We motivate why MCTS is particularly suited for the autonomous driving domain. 
Furthermore, adoptions to the MCTS algorithm are presented as for example simultaneous decisions, the usage of the Intelligent Driver Model as microscopic traffic simulation, and a cooperative cost function. We further show simulation results of merging scenarios in highway-like situations to underline the cooperative nature of the approach.", "We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL). The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology. Next, the sampling-based planners provide roadmaps which connect robot configurations that can be successfully navigated by the RL agent. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. Our results show improvement in task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 m long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 m without violating the task constraints in an environment 63 million times larger than used in training." ] }
1907.11717
2966432599
The benefits of ubiquitous caching in information-centric networking (ICN) are profound; while such features make ICN promising for content distribution, they also introduce the challenge of protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
Most existing access control schemes for secure content are application-specific or lack security strength. For example, in @cite_0 , the authors presented a scheme for protected content that uses network coding as encryption. However, the scheme requires a private connection between the publisher and the consumer to obtain the decoding matrix and missing data blocks. In @cite_25 , the authors presented a security framework for copyrighted video streaming in ICN based on linear random coding; it has been shown that linear random coding alone improves the performance of ICN @cite_26 . However, in @cite_25 each video was encrypted with a large number of symmetric encryption keys, such that each video frame was encrypted with a unique symmetric encryption key. Since only authorized users who possess the set of all keys can decrypt the video content, distributing a large number of keys for each video introduces extra communication overhead.
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_25" ], "mid": [ "2514042371", "2115677140", "1567993328", "2590898937" ], "abstract": [ "As a novel network architecture, Information-Centric Networking(ICN) has a good performance in security, mobility and scalability. Although in-network cache used in ICN can effectively solve the problem of network congestion. Meanwhile, it also brings a lot of challenges such as copyright protection. How to prevent unauthorized user access to the large-sized contents of the route has become the focus of the research. Some of the current solutions are based on the traditional encryption technology. But these solutions don't applay to ICN. Current approaches that rely on a common encryption key among authorized users cannot protect copyright well since if authorized user leaks the private key out, we cannot tell who has leaked the key out. In this paper, we use a novel scheme to solve the problem of copyright protection. In this scheme, we take the method of the network encoding. The first, we have splitted the large-sized content into N blocks. Then through linear network coding(LNC) encrypted the content, if the user can not obtain the decrypted matrix, the user will not be able to decrypt the content. Therefore, the scheme can achieve the protection of the content. Our analysis of this program shows that the scheme has a good performance in copyright protection.", "In this paper, we describe a configurable content-based MPEG video authentication scheme, which is robust to typical video transcoding approaches, namely frame resizing, frame dropping and requantization. By exploiting the synergy between cryptographic signature, forward error correction (FEC) and digital watermarking, the generated content-based message authentication code (MAC or keyed crypto hash) is embedded back into the video to reduce the transmission cost. The proposed scheme is secure against malicious attacks such as video frame insertion and alteration. System robustness and security are balanced in a configurable way (i.e., more robust the system is, less secure the system will be). Compressed-domain process makes the scheme computationally efficient. Furthermore, the proposed scheme is compliant with state-of-the-art public key infrastructure. Experimental results demonstrate the validity of the proposed scheme", "Shifting from host-oriented to data-oriented, information-centric networking (ICN) adopts several key design principles, e.g., in-network caching, to cope with the tremendous internet growth. In the ICN setting, data to be distributed can be cached by ICN routers anywhere and accessed arbitrarily by customers without data publishers' permission, which imposes new challenges when achieving data access control: (i) security: How can data publishers protect data confidentiality (either data cached by ICN routers or data accessed by authorized users) even when an authorized user's decryption key was revoked or compromised, and (ii) scalability: How can data publishers leverage ICN's promising features and enforce access control without complicated key management or extensive communication. This paper addresses these challenges by using the new proposed dual-phase encryption that uniquely combines the ideas from one-time decryption key, proxy re-encryption and all-or-nothing transformation, while still being able to leverage ICN's features. 
Our analysis and performance show that our solution is highly efficient and provable secure under the existing security model.", "The fast-growing Internet traffic is increasingly becoming content-based and driven by mobile users, with users more interested in data rather than its source. This has precipitated the need for an information-centric Internet architecture. Research in information-centric networks (ICNs) have resulted in novel architectures, e.g., CCN NDN, DONA, and PSIRP PURSUIT; all agree on named data based addressing and pervasive caching as integral design components. With network-wide content caching, enforcement of content access control policies become non-trivial. Each caching node in the network needs to enforce access control policies with the help of the content provider. This becomes inefficient and prone to unbounded latencies especially during provider outages. In this paper, we propose an efficient access control framework for ICN, which allows legitimate users to access and use the cached content directly, and does not require verification authentication by an online provider authentication server or the content serving router. This framework would help reduce the impact of system down-time from server outages and reduce delivery latency by leveraging caching while guaranteeing access only to legitimate users. Experimental simulation results demonstrate the suitability of this scheme for all users, but particularly for mobile users, especially in terms of the security and latency overheads." ] }
1907.11717
2966432599
The benefits of ubiquitous caching in information-centric networking (ICN) are profound; while such features make ICN promising for content distribution, they also introduce the challenge of protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In earlier work @cite_35 , the authors proposed a content access control scheme for the ICN-enabled wireless edge. The proposed scheme, named AccConF, is an extension of @cite_23 and employs a public-key based algorithm and Shamir's secret sharing as building blocks. To obtain the unique interpolating polynomial of Shamir's scheme, AccConF uses Lagrange interpolation, whose computation is expensive. To reduce the client-side computational burden, the publisher piggybacks an enabling block with each content object, which encapsulates partially solved Lagrange coefficients.
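To make the computational argument concrete, the following minimal sketch (illustrative prime and shares, not AccConF itself) recovers a Shamir-shared secret by Lagrange interpolation at x = 0 over a prime field; the per-share Lagrange coefficients are exactly the quantities a publisher could partially precompute and piggyback to lighten clients. It assumes Python 3.8+ for the modular inverse via pow.

# Minimal sketch of Shamir secret recovery via Lagrange interpolation at x = 0
# over a prime field (illustrative only; the prime and shares are made up).
P = 2**127 - 1  # a Mersenne prime, used here as the field modulus

def lagrange_at_zero(shares, p=P):
    # shares: list of (x_i, y_i) points; returns f(0), i.e., the secret.
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * (-x_j)) % p          # (0 - x_j)
            den = (den * (x_i - x_j)) % p
        # Lagrange coefficient l_i(0) = num / den (mod p); this is the part
        # a publisher could partially precompute to lighten the client.
        coeff = num * pow(den, -1, p) % p
        secret = (secret + y_i * coeff) % p
    return secret

# Example: secret 1234 shared with polynomial f(x) = 1234 + 7x (threshold 2).
shares = [(1, (1234 + 7 * 1) % P), (2, (1234 + 7 * 2) % P)]
assert lagrange_at_zero(shares) == 1234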
{ "cite_N": [ "@cite_35", "@cite_23" ], "mid": [ "1973956822", "2963353797", "1146167248", "2514042371" ], "abstract": [ "We design and analyze a method to extract secret keys from the randomness inherent to wireless channels. We study a channel model for a multipath wireless channel and exploit the channel diversity in generating secret key bits. We compare the key extraction methods based both on entire channel state information (CSI) and on single channel parameter such as the received signal strength indicators (RSSI). Due to the reduction in the degree-of-freedom when going from CSI to RSSI, the rate of key extraction based on CSI is far higher than that based on RSSI. This suggests that exploiting channel diversity and making CSI information available to higher layers would greatly benefit the secret key generation. We propose a key generation system based on low-density parity-check (LDPC) codes and describe the design and performance of two systems: one based on binary LDPC codes and the other (useful at higher signal-to-noise ratios) based on four-ary LDPC codes.", "Due to the publicly-known deterministic character- istic of pilot tones, pilot-aware attack, by jamming, nulling and spoofing pilot tones, can significantly paralyze the uplink channel training in large-scale MISO-OFDM systems. To solve this, we in this paper develop an independence-checking coding based (ICCB) uplink training architecture for one-ring scattering scenarios allowing for uniform linear arrays (ULA) deployment. Here, we not only insert randomized pilots on subcarriers for channel impulse response (CIR) estimation, but also diversify and encode subcarrier activation patterns (SAPs) to convey those pilots simultaneously. The coded SAPs, though interfered by arbitrary unknown SAPs in wireless environment, are qualified to be reliably identified and decoded into the original pilots by checking the hidden channel independence existing in sub- carriers. Specifically, an independence-checking coding (ICC) theory is formulated to support the encoding decoding process in this architecture. The optimal ICC code is further devel- oped for guaranteeing a well-imposed estimation of CIR while maximizing the code rate. Based on this code, the identification error probability (IEP) is characterized to evaluate the reliability of this architecture. Interestingly, we discover the principle of IEP reduction by exploiting the array spatial correlation, and prove that zero- IEP, i.e., perfect reliability, can be guaranteed under continuously-distributed mean angle of arrival (AoA). Besides this, a novel closed form of IEP expression is derived in discretely-distributed case. Simulation results finally verify the effectiveness of the proposed architecture.", "A new multi-secret sharing (t, n) threshold scheme is proposed in this paper. The scheme uses the Lagrange interpolating polynomial to split and reconstruct the secrets based on Shamir secret sharing scheme, and verifies the legality of data by NTRU algorithm and one-way hashing function. Compared with other public key cryptosystems such as elliptic curve cryptography, the proposed is simpler in design, which requires less calculation and fewer storage spaces. It can detect effectively a variety of cheating and forgery behaviors, which guarantee that the reconstruction of secret is the secure and trustworthy.", "As a novel network architecture, Information-Centric Networking(ICN) has a good performance in security, mobility and scalability. 
Although in-network cache used in ICN can effectively solve the problem of network congestion. Meanwhile, it also brings a lot of challenges such as copyright protection. How to prevent unauthorized user access to the large-sized contents of the route has become the focus of the research. Some of the current solutions are based on the traditional encryption technology. But these solutions don't applay to ICN. Current approaches that rely on a common encryption key among authorized users cannot protect copyright well since if authorized user leaks the private key out, we cannot tell who has leaked the key out. In this paper, we use a novel scheme to solve the problem of copyright protection. In this scheme, we take the method of the network encoding. The first, we have splitted the large-sized content into N blocks. Then through linear network coding(LNC) encrypted the content, if the user can not obtain the decrypted matrix, the user will not be able to decrypt the content. Therefore, the scheme can achieve the protection of the content. Our analysis of this program shows that the scheme has a good performance in copyright protection." ] }
1907.11717
2966432599
The benefits of ubiquitous caching in information-centric networking (ICN) are profound; while such features make ICN promising for content distribution, they also introduce the challenge of protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In work by @cite_30 , access control is realized by a flexible secure content distribution architecture that combines proxy re-encryption and identity-based encryption mechanisms. The publisher generates a symmetric key and encrypts the content before dissemination. To access the content from an in-network cache or directly from the publisher, a consumer first sends a request to the publisher to acquire the symmetric encryption key. Upon receiving the key request, the publisher validates and verifies the authenticity of the consumer, and sends the symmetric key encapsulated in a response message encrypted with the consumer's identity. The proposed scheme eliminates the asymmetric encryption, but it is not clear how the consumer's private identity could be known to the content provider.
{ "cite_N": [ "@cite_30" ], "mid": [ "2080636584", "2365029783", "1498290244", "2514042371" ], "abstract": [ "Distributed sensor networks are becoming a robust solution that allows users to directly access data generated by individual sensors. In many practical scenarios, fine-grained access control is a pivotal security requirement to enhance usability and protect sensitive sensor information from unauthorized access. Recently, there have been proposed many schemes to adapt public key cryptosystems into sensor systems consisting of high-end sensor nodes in order to enforce security policy efficiently. However, the drawback of these approaches is that the complexity of computation increases linear to the expressiveness of the access policy. Key-policy attribute-based encryption is a promising cryptographic solution to enforce fine-grained access policies on the sensor data. However, the problem of applying it to distributed sensor networks introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control scheme using KP-ABE with efficient attribute and user revocation capability for distributed sensor networks that are composed of high-end sensor devices. They can be achieved by the proxy encryption mechanism which takes advantage of attribute-based encryption and selective group key distribution. The analysis results indicate that the proposed scheme achieves efficient user access control while requiring the same computation overhead at each sensor as the previous schemes.", "Abstract With the proliferation of smart grids, traditional utilities are struggling to handle the increasing amount of metering data. Outsourcing the metering data to heterogeneous distributed systems has the potential to provide efficient data access and processing. In an untrusted heterogeneous distributed system environment, employing data encryption prior to outsourcing can be an effective way to preserve user privacy. However, how to efficiently query encrypted multidimensional metering data stored in an untrusted heterogeneous distributed system environment remains a research challenge. In this paper, we propose a high performance and privacy-preserving query (P2Q) scheme over encrypted multidimensional big metering data to address this challenge. In the proposed scheme, encrypted metering data are stored in the server of an untrusted heterogeneous distributed system environment. A Locality Sensitive Hashing (LSH) based similarity search approach is then used to realize the similarity query. To demonstrate utility of the proposed LSH-based search approach, we implement a prototype using MapReduce for the Hadoop distributed environment. More specifically, for a given query, the proxy server will return K top similar data object identifiers. An enhanced Ciphertext-Policy Attribute-based Encryption (CP-ABE) policy is then used to control access to the search results. Therefore, only the requester with an authorized query attribute can obtain the correct secret keys to retrieve the metering data. We then prove that the P2Q scheme achieves data confidentiality and preserves the data owner’s privacy in a semi-trusted cloud. In addition, our evaluations demonstrate that the P2Q scheme can significantly reduce response time and provide high search efficiency without compromising on search quality (i.e. 
suitable for multidimensional big data search in heterogeneous distributed system, such as cloud storage system).", "Users of content-based publish subscribe systems (CBPS) are interested in receiving data items with values that satisfy certain conditions. Each user submits a list of subscription specifications to a broker, which routes data items from publishers to users. When a broker receives a notification that contains a value from a publisher, it forwards it only to the subscribers whose requests match the value. However, in many applications, the data published are confidential, and their contents must not be revealed to brokers. Furthermore, a user's subscription may contain sensitive information that must be protected from brokers. Therefore, a difficult challenge arises: how to route publisher data to the appropriate subscribers without the intermediate brokers learning the plain text values of the notifications and subscriptions. To that extent, brokers must be able to perform operations on top of the encrypted contents of subscriptions and notifications. Such operations may be as simple as equality match, but often require more complex operations such as determining inclusion of data in a value interval. Previous work attempted to solve this problem by using one-way data mappings or specialized encryption functions that allow evaluation of conditions on ciphertexts. However, such operations are computationally expensive, and the resulting CBPS lack scalability. As fast dissemination is an important requirement in many applications, we focus on a new data transformation method called Asymmetric Scalar-product Preserving Encryption (ASPE) [1]. We devise methods that build upon ASPE to support private evaluation of several types of conditions. We also suggest techniques for secure aggregation of notifications, supporting functions such as sum, minimum, maximum and count. Our experimental evaluation shows that ASPE-based CBPS incurs 65 less overhead for exact-match filtering and 50 less overhead for range filtering compared to the state-of-the-art.", "As a novel network architecture, Information-Centric Networking(ICN) has a good performance in security, mobility and scalability. Although in-network cache used in ICN can effectively solve the problem of network congestion. Meanwhile, it also brings a lot of challenges such as copyright protection. How to prevent unauthorized user access to the large-sized contents of the route has become the focus of the research. Some of the current solutions are based on the traditional encryption technology. But these solutions don't applay to ICN. Current approaches that rely on a common encryption key among authorized users cannot protect copyright well since if authorized user leaks the private key out, we cannot tell who has leaked the key out. In this paper, we use a novel scheme to solve the problem of copyright protection. In this scheme, we take the method of the network encoding. The first, we have splitted the large-sized content into N blocks. Then through linear network coding(LNC) encrypted the content, if the user can not obtain the decrypted matrix, the user will not be able to decrypt the content. Therefore, the scheme can achieve the protection of the content. Our analysis of this program shows that the scheme has a good performance in copyright protection." ] }
1907.11717
2966432599
The benefits of ubiquitous caching in information-centric networking (ICN) are profound; while such features make ICN promising for content distribution, they also introduce the challenge of protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In other work @cite_33 , the authors proposed a content access control scheme based on proxy re-encryption, in which the content is re-encrypted by an intermediate node; in the proposed scheme, the edge routers perform the content re-encryption. Upon receiving a content request, the publisher encrypts the data and a randomly generated key k1 using its public key. Upon receiving the content request, the edge router generates a random key k2, encrypted with the publisher's public key and signed by the edge router. The edge router sends the encrypted k2 to the publisher, appends the encrypted k2 to the content, and dispatches it towards the consumer. Meanwhile, the publisher verifies the authenticity of the consumer and generates the content decryption key K using k1, k2, and its public key. Upon receiving K, the consumer can decrypt the content.
{ "cite_N": [ "@cite_33" ], "mid": [ "2116361063", "2365029783", "2114428623", "2041480327" ], "abstract": [ "Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the plaintext encrypted. This adds flexibility in various applications, such as confidential email, digital right management and distributed storage. In this paper, we study unidirectional PRE, which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition.", "Abstract With the proliferation of smart grids, traditional utilities are struggling to handle the increasing amount of metering data. Outsourcing the metering data to heterogeneous distributed systems has the potential to provide efficient data access and processing. In an untrusted heterogeneous distributed system environment, employing data encryption prior to outsourcing can be an effective way to preserve user privacy. However, how to efficiently query encrypted multidimensional metering data stored in an untrusted heterogeneous distributed system environment remains a research challenge. In this paper, we propose a high performance and privacy-preserving query (P2Q) scheme over encrypted multidimensional big metering data to address this challenge. In the proposed scheme, encrypted metering data are stored in the server of an untrusted heterogeneous distributed system environment. A Locality Sensitive Hashing (LSH) based similarity search approach is then used to realize the similarity query. To demonstrate utility of the proposed LSH-based search approach, we implement a prototype using MapReduce for the Hadoop distributed environment. More specifically, for a given query, the proxy server will return K top similar data object identifiers. An enhanced Ciphertext-Policy Attribute-based Encryption (CP-ABE) policy is then used to control access to the search results. Therefore, only the requester with an authorized query attribute can obtain the correct secret keys to retrieve the metering data. We then prove that the P2Q scheme achieves data confidentiality and preserves the data owner’s privacy in a semi-trusted cloud. In addition, our evaluations demonstrate that the P2Q scheme can significantly reduce response time and provide high search efficiency without compromising on search quality (i.e. suitable for multidimensional big data search in heterogeneous distributed system, such as cloud storage system).", "In 1998, Blaze, Bleumer, and Strauss (BBS) proposed an application called atomic proxy re-encryption, in which a semitrusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. We predict that fast and secure re-encryption will become increasingly popular as a method for managing encrypted file systems. Although efficiently computable, the wide-spread adoption of BBS re-encryption has been hindered by considerable security risks. 
Following recent work of Dodis and Ivan, we present new re-encryption schemes that realize a stronger notion of security and demonstrate the usefulness of proxy re-encryption as a method of adding access control to a secure file system. Performance measurements of our experimental file system demonstrate that proxy re-encryption can work effectively in practice.", "Service providers like Google and Amazon are moving into the SaaS (Software as a Service) business. They turn their huge infrastructure into a cloud-computing environment and aggressively recruit businesses to run applications on their platforms. To enforce security and privacy on such a service model, we need to protect the data running on the platform. Unfortunately, traditional encryption methods that aim at providing \"unbreakable\" protection are often not adequate because they do not support the execution of applications such as database queries on the encrypted data. In this paper we discuss the general problem of secure computation on an encrypted database and propose a SCONEDB Secure Computation ON an Encrypted DataBase) model, which captures the execution and security requirements. As a case study, we focus on the problem of k-nearest neighbor (kNN) computation on an encrypted database. We develop a new asymmetric scalar-product-preserving encryption (ASPE) that preserves a special type of scalar product. We use APSE to construct two secure schemes that support kNN computation on encrypted data; each of these schemes is shown to resist practical attacks of a different background knowledge level, at a different overhead cost. Extensive performance studies are carried out to evaluate the overhead and the efficiency of the schemes." ] }
1907.11717
2966432599
The benefits of ubiquitous caching in information-centric networking (ICN) are profound; while such features make ICN promising for content distribution, they also introduce the challenge of protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in ICN, such end-to-end encryption makes content caching ineffective, since encrypted contents stored in a cache are useless for any consumers except those who know the encryption key. For effective caching of encrypted contents in ICN, we propose a secure distribution of protected content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric key encryption. SDPC also provides protection against privacy leakage. The security of SDPC was proved with Burrows–Abadi–Needham (BAN) logic and Scyther tool verification, and simulation results show that SDPC can reduce the content download delay.
In another study @cite_7 , the authors presented an access control scheme for encrypted content in ICN, based on the efficient unidirectional proxy re-encryption (EU-PRE) proposed by @cite_3 . The proposed scheme, named efficient unidirectional re-encryption (EU-RE), simplifies EU-PRE by eliminating the need for proxies in the re-encryption operation. However, the EU-RE scheme is still based on asymmetric cryptography, which is not suitable for several resource-constrained applications such as IoT and sensor networks. Moreover, the authors assumed that the content provider behaves correctly, i.e., that it does not distribute any private content or decryption rights to unauthorized users. However, this assumption falsifies the protocol claims defined in @cite_20 , which means EU-RE is weak against several attacks. To verify the protocol claims, we implemented EU-RE in an automated security protocol analysis tool, Scyther @cite_10 , and presented the results in .
{ "cite_N": [ "@cite_10", "@cite_3", "@cite_20", "@cite_7" ], "mid": [ "2116361063", "2152926062", "2765463836", "2162567660" ], "abstract": [ "Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the plaintext encrypted. This adds flexibility in various applications, such as confidential email, digital right management and distributed storage. In this paper, we study unidirectional PRE, which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition.", "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011).", "Using dynamic Searchable Symmetric Encryption, a user with limited storage resources can securely outsource a database to an untrusted server, in such a way that the database can still be searched and updated efficiently. For these schemes, it would be desirable that updates do not reveal any information a priori about the modifications they carry out, and that deleted results remain inaccessible to the server a posteriori. 
If the first property, called forward privacy, has been the main motivation of recent works, the second one, backward privacy, has been overlooked. In this paper, we study for the first time the notion of backward privacy for searchable encryption. After giving formal definitions for different flavors of backward privacy, we present several schemes achieving both forward and backward privacy, with various efficiency trade-offs. Our constructions crucially rely on primitives such as constrained pseudo-random functions and puncturable encryption schemes. Using these advanced cryptographic primitives allows for a fine-grained control of the power of the adversary, preventing her from evaluating functions on selected inputs, or decrypting specific ciphertexts. In turn, this high degree of control allows our SSE constructions to achieve the stronger forms of privacy outlined above. As an example, we present a framework to construct forward-private schemes from range-constrained pseudo-random functions. Finally, we provide experimental results for implementations of our schemes, and study their practical efficiency.", "We survey the notion of provably secure searchable encryption (SE) by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS). Since the pioneering work of Song, Wagner, and Perrig (IEEE S&P '00), the field of provably secure SE has expanded to the point where we felt that taking stock would provide benefit to the community. The survey has been written primarily for the nonspecialist who has a basic information security background. Thus, we sacrifice full details and proofs of individual constructions in favor of an overview of the underlying key techniques. We categorize and compare the different SE schemes in terms of their security, efficiency, and functionality. For the experienced researcher, we point out connections between the many approaches to SE and identify open research problems. Two major conclusions can be drawn from our work. While the so-called IND-CKA2 security notion becomes prevalent in the literature and efficient (sublinear) SE schemes meeting this notion exist in the symmetric setting, achieving this strong form of security efficiently in the asymmetric setting remains an open problem. We observe that in multirecipient SE schemes, regardless of their efficiency drawbacks, there is a noticeable lack of query expressiveness that hinders deployment in practice." ] }
1907.11718
2964674144
We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We found that our method consistently achieves over 10% annualized returns and it outperforms econometric methods and the deep RL method by large margins, for both long and medium terms of investment with monthly and daily trading.
The difficulty of seeking the global optimum for Markov Decision Process (MDP) problems under the MV criterion has been previously noted in @cite_30 . In fact, the variance of the reward-to-go is nonlinear in the expectation and, as a result, Bellman's consistency fails and most of the well-known RL algorithms cannot be applied directly.
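The standard identity below (written in our own notation, with G_t the reward-to-go, r_k the per-step reward, and gamma the discount factor) makes the difficulty explicit: the variance involves the square of an expectation, so the variance-penalized objective does not decompose recursively the way the expected reward-to-go does.

\[
\operatorname{Var}(G_t) \;=\; \mathbb{E}\!\left[G_t^2\right] \;-\; \left(\mathbb{E}[G_t]\right)^2,
\qquad
G_t \;=\; \sum_{k \ge t} \gamma^{\,k-t} r_k .
\]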
{ "cite_N": [ "@cite_30" ], "mid": [ "2356031020", "2964000194", "1491322982", "2911793117" ], "abstract": [ "In Markov decision processes (MDPs), the variance of the reward-to-go is a natural measure of uncertainty about the long term performance of a policy, and is important in domains such as finance, resource allocation, and process control. Currently however, there is no tractable procedure for calculating it in large scale MDPs. This is in contrast to the case of the expected reward-to-go, also known as the value function, for which effective simulation-based algorithms are known, and have been used successfully in various domains. In this paper we extend temporal difference (TD) learning algorithms to estimating the variance of the reward-to-go for a fixed policy. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in an option pricing problem. Our results show a dramatic improvement in terms of sample efficiency over standard Monte-Carlo methods, which are currently the state-of-the-art.", "We consider the problem of learning an unknown Markov Decision Process (MDP) that is weakly communicating in the infinite horizon setting. We propose a Thompson Sampling-based reinforcement learning algorithm with dynamic episodes (TSDE). At the beginning of each episode, the algorithm generates a sample from the posterior distribution over the unknown model parameters. It then follows the optimal stationary policy for the sampled model for the rest of the episode. The duration of each episode is dynamically determined by two stopping criteria. The first stopping criterion controls the growth rate of episode length. The second stopping criterion happens when the number of visits to any state-action pair is doubled. We establish @math bounds on expected regret under a Bayesian setting, where @math and @math are the sizes of the state and action spaces, @math is time, and @math is the bound of the span. This regret bound matches the best available bound for weakly communicating MDPs. Numerical results show it to perform better than existing algorithms for infinite horizon MDPs.", "We consider Markov decision processes (MDPs) with multiple discounted reward objectives. Such MDPs occur in design problems where one wishes to simultaneously optimize several criteria, for example, latency and power. The possible trade-offs between the different objectives are characterized by the Pareto curve. We show that every Pareto-optimal point can be achieved by a memoryless strategy; however, unlike in the single-objective case, the memoryless strategy may require randomization. Moreover, we show that the Pareto curve can be approximated in polynomial time in the size of the MDP. Additionally, we study the problem if a given value vector is realizable by any strategy, and show that it can be decided in polynomial time; but the question whether it is realizable by a deterministic memoryless strategy is NP-complete. These results provide efficient algorithms for design exploration in MDP models with multiple objectives.", "Consider a Markov decision process (MDP) that admits a set of state-action features, which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximate-optimal policy using a sample size proportional to the feature dimension @math and invariant with respect to the size of the state space. 
To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided the existence of anchor state-actions that imply implicit non-negativity in the feature space. We augment the algorithm using techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy which is @math -optimal from any initial state with high probability using @math sample transitions for arbitrarily large-scale MDP with a discount factor @math . A matching information-theoretical lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors)." ] }
1907.11718
2964674144
We propose to solve the large-scale Markowitz mean-variance (MV) portfolio allocation problem using reinforcement learning (RL). By adopting the recently developed continuous-time exploratory control framework, we formulate the exploratory MV problem in high dimensions. We further show the optimality of a multivariate Gaussian feedback policy, with time-decaying variance, in trading off exploration and exploitation. Based on a provable policy improvement theorem, we devise a scalable and data-efficient RL algorithm and conduct large-scale empirical tests using data from the S&P 500 stocks. We find that our method consistently achieves over 10% annualized returns and that it outperforms econometric methods and the deep RL method by large margins, for both long and medium investment horizons with monthly and daily trading.
Existing works on variance estimation and control generally divide into value-based methods and policy-based methods. @cite_29 obtained the Bellman equation for the variance of the reward-to-go under a fixed, given policy. @cite_15 further derived a TD(0) learning rule to estimate this variance, followed by @cite_7 , which applied this value-based method to an MV portfolio selection problem. It is worth noting that, due to the definition of the value function (i.e., the variance-penalized expected reward-to-go) in @cite_7 , Bellman's optimality principle does not hold. As a result, it is not guaranteed that a greedy policy based on the latest updated value function will eventually lead to the globally optimal policy. The second approach, policy-based RL, was proposed in @cite_16 . They also extended the work to linear function approximators and devised actor-critic algorithms for MV optimization problems, for which convergence to a local optimum is guaranteed with probability one ( @cite_17 ). Related works following this line of research include @cite_18 and @cite_4 , among others. Despite the various methods mentioned above, it remains an open and interesting question in RL how to search for the global optimum under the MV criterion.
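As a rough illustration of the value-based approach described above, the sketch below performs one tabular TD(0)-style update of the value and second moment of the reward-to-go for a fixed policy; the function name, step size, and discount factor are illustrative assumptions and not code from the cited works.

import numpy as np

def td_variance_update(V, M, s, r, s_next, alpha=0.1, gamma=0.99):
    # V[s] estimates the expected reward-to-go, M[s] its second moment (fixed policy).
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    M[s] += alpha * (r ** 2 + 2.0 * gamma * r * V[s_next] + gamma ** 2 * M[s_next] - M[s])
    # The variance estimate is a nonlinear combination of the two linear estimates.
    return M[s] - V[s] ** 2

num_states = 5                 # illustrative state-space size
V = np.zeros(num_states)       # value estimates
M = np.zeros(num_states)       # second-moment estimates
var_s0 = td_variance_update(V, M, s=0, r=1.0, s_next=2)   # one observed transition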
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_29", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2963856199", "2356031020", "1925816294", "2105078254" ], "abstract": [ "In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms that operate on three timescales--a TD critic on the fastest timescale, a policy gradient (actor) on the intermediate timescale, and a dual ascent for Lagrange multipliers on the slowest timescale. In the discounted setting, we point out the difficulty in estimating the gradient of the variance of the return and incorporate simultaneous perturbation approaches to alleviate this. The average setting, on the other hand, allows for an actor update using compatible features to estimate the gradient of the variance. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.", "In Markov decision processes (MDPs), the variance of the reward-to-go is a natural measure of uncertainty about the long term performance of a policy, and is important in domains such as finance, resource allocation, and process control. Currently however, there is no tractable procedure for calculating it in large scale MDPs. This is in contrast to the case of the expected reward-to-go, also known as the value function, for which effective simulation-based algorithms are known, and have been used successfully in various domains. In this paper we extend temporal difference (TD) learning algorithms to estimating the variance of the reward-to-go for a fixed policy. We propose variants of both TD(0) and LSTD(λ) with linear function approximation, prove their convergence, and demonstrate their utility in an option pricing problem. Our results show a dramatic improvement in terms of sample efficiency over standard Monte-Carlo methods, which are currently the state-of-the-art.", "With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. 
The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.", "This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H∞ control, we consider a differential game in which a \"disturbing\" agent tries to make the worst possible disturbance while a \"control\" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H∞ control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Recent advances in 360 @math images exploit geometric information on the sphere. @cite_13 represent the ERP with a weighted graph and apply a graph convolutional network to generate graph-based representations. @cite_9 propose to use the SO(3) 3D rotation group for retrieval and classification tasks on spherical images. On top of that, @cite_30 suggest transforming the domain from the Euclidean S2 space to an SO(3) representation to reduce the distortion, and encoding rotation equivariance in the network. Meanwhile, some works attempt to handle the distortion in the ERP directly. @cite_15 transfer knowledge from a CNN pre-trained on perspective projections to a novel network operating on the ERP. Other approaches @cite_32 @cite_6 @cite_28 build on the idea of the deformable convolutional network @cite_18 and propose distortion-aware spherical convolution, where the convolutional filter gets distorted in the same way as the objects on the ERP. Though SphConv is simple and effective, due to the implicit interpolation it cannot eliminate the distortion as the network grows deeper. To correct the distortion left by SphConv, we introduce a reprojection mechanism in Rep R-CNN, which significantly increases the detection accuracy.
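To illustrate the distortion-aware sampling idea described above, the minimal sketch below stretches the horizontal sampling offsets of one kernel row on the ERP by 1/cos(latitude), so that the kernel keeps a roughly fixed angular footprint on the sphere. This is a deliberate simplification with assumed names and parameters; actual SphConv variants derive offsets from exact tangent-plane geometry.

import numpy as np

def erp_row_offsets(lat_deg, kernel_size=3, ang_step_deg=1.0, img_width=1024):
    # Horizontal sampling offsets (in ERP pixels) for one kernel row centred at latitude lat_deg.
    px_per_deg = img_width / 360.0
    half = kernel_size // 2
    cols = np.arange(-half, half + 1)
    # Near the poles cos(lat) -> 0, so the same angular step spans many more ERP pixels.
    stretch = 1.0 / max(np.cos(np.radians(lat_deg)), 1e-6)
    return cols * ang_step_deg * px_per_deg * stretch

offsets = erp_row_offsets(60.0)   # roughly [-5.7, 0.0, 5.7] px, vs. [-2.8, 0.0, 2.8] px at the equator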
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_28", "@cite_9", "@cite_32", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "2109255472", "2807007689", "2963325056", "2902303185" ], "abstract": [ "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 @math 224) input image. This requirement is “artificial” and may reduce the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with another pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 @math faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.", "Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.", "Convolution as inner product has been the founding basis of convolutional neural networks (CNNs) and the key to end-to-end visual representation learning. Benefiting from deeper architectures, recent CNNs have demonstrated increasingly strong representation abilities. Despite such improvement, the increased depth and larger parameter space have also led to challenges in properly training a network. In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres. We introduce SphereNet, deep hyperspherical convolution networks that are distinct from conventional inner product based convolutional networks. 
In particular, SphereNet adopts SphereConv as its basic convolution operator and is supervised by generalized angular softmax loss - a natural loss formulation under SphereConv. We show that SphereNet can effectively encode discriminative representation and alleviate training difficulty, leading to easier optimization, faster convergence and comparable (even better) classification accuracy over convolutional counterparts. We also provide some theoretical insights for the advantages of learning on hyperspheres. In addition, we introduce the learnable SphereConv, i.e., a natural improvement over prefixed SphereConv, and SphereNorm, i.e., hyperspherical learning as a normalization method. Experiments have verified our conclusions.", "The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Promising modern object detectors are usually based on two-stage approaches. The Region-based CNN (R-CNN) approach @cite_17 attends to a set of candidate region proposals @cite_12 in the first stage, and then uses a convolutional network to regress the bounding boxes and classify the objects in the second stage. Fast R-CNN @cite_25 extends R-CNN by extracting the proposals directly on feature maps using RoI pooling. Faster R-CNN @cite_2 further replaces the slow selective search with a fast region proposal network, achieving improvements in both speed and accuracy. Numerous extensions to this framework have been proposed @cite_1 @cite_7 @cite_19 @cite_26 . In contrast, single-stage pipelines such as SSD @cite_14 @cite_23 and YOLO @cite_3 @cite_8 @cite_11 skip the object proposal stage and generate detections and classifications directly. Though these single-stage pipelines attract interest owing to their speed, they lack the proposal alignment that is important for 360 @math object detection. Hence, we adopt the two-stage method in this paper.
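The second stage hinges on pooling each proposal into a fixed-size feature. The NumPy sketch below shows a minimal RoI max pooling with assumed shapes; it is only an illustration of the idea, since the cited detectors typically use more refined operators such as RoI Align with bilinear sampling.

import numpy as np

def roi_max_pool(feat, box, out_size=7):
    # feat: (C, H, W) feature map; box: (x1, y1, x2, y2) in feature-map coordinates.
    c, h, w = feat.shape
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    x1, y1 = np.clip(x1, 0, w - 1), np.clip(y1, 0, h - 1)
    x2, y2 = np.clip(x2, x1 + 1, w), np.clip(y2, y1 + 1, h)
    xs = np.linspace(x1, x2, out_size + 1).astype(int)   # bin edges along width
    ys = np.linspace(y1, y2, out_size + 1).astype(int)   # bin edges along height
    out = np.zeros((c, out_size, out_size), dtype=feat.dtype)
    for i in range(out_size):
        for j in range(out_size):
            y_lo, y_hi = ys[i], max(ys[i + 1], ys[i] + 1)   # every bin covers at least one pixel
            x_lo, x_hi = xs[j], max(xs[j + 1], xs[j] + 1)
            out[:, i, j] = feat[:, y_lo:y_hi, x_lo:x_hi].max(axis=(1, 2))
    return out   # fixed-size (C, out_size, out_size) feature fed to the second-stage head

pooled = roi_max_pool(np.random.rand(256, 50, 80), (10.5, 4.2, 30.9, 24.7))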
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_11", "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_19", "@cite_23", "@cite_2", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2589615404", "2610420510", "2743620784", "2769291631" ], "abstract": [ "Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark.", "Many modern approaches for object detection are two-staged pipelines. The first stage identifies regions of interest which are then classified in the second stage. Faster R-CNN is such an approach for object detection which combines both stages into a single pipeline. In this paper we apply Faster R-CNN to the task of company logo detection. Motivated by its weak performance on small object instances, we examine in detail both the proposal and the classification stage with respect to a wide range of object sizes. We investigate the influence of feature map resolution on the performance of those stages. Based on theoretical considerations, we introduce an improved scheme for generating anchor proposals and propose a modification to Faster R-CNN which leverages higher-resolution feature maps for small objects. We evaluate our approach on the FlickrLogos dataset improving the RPN performance from 0.52 to 0.71 (MABO) and the detection performance from 0.52 to @math (mAP).", "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. 
Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. Codes will be made publicly available.", "In this paper, we first investigate why typical two-stage methods are not as fast as single-stage, fast detectors like YOLO and SSD. We find that Faster R-CNN and R-FCN perform an intensive computation after or before RoI warping. Faster R-CNN involves two fully connected layers for RoI recognition, while R-FCN produces a large score maps. Thus, the speed of these networks is slow due to the heavy-head design in the architecture. Even if we significantly reduce the base model, the computation cost cannot be largely decreased accordingly. We propose a new two-stage detector, Light-Head R-CNN, to address the shortcoming in current two-stage approaches. In our design, we make the head of network as light as possible, by using a thin feature map and a cheap R-CNN subnet (pooling and single fully-connected layer). Our ResNet-101 based light-head R-CNN outperforms state-of-art object detectors on COCO while keeping time efficiency. More importantly, simply replacing the backbone with a tiny network (e.g, Xception), our Light-Head R-CNN gets 30.7 mmAP at 102 FPS on COCO, significantly outperforming the single-stage, fast detectors like YOLO and SSD on both speed and accuracy. Code will be made publicly available." ] }
1907.11830
2966222024
360° images are usually represented in either equirectangular projection (ERP) or multiple perspective projections. Different from the flat 2D images, the detection task is challenging for 360° images due to the distortion of ERP and the inefficiency of perspective projections. However, existing methods mostly focus on one of the above representations instead of both, leading to limited detection performance. Moreover, the lack of appropriate bounding-box annotations as well as the annotated datasets further increases the difficulties of the detection task. In this paper, we present a standard object detection framework for 360° images. Specifically, we adapt the terminologies of the traditional object detection task to the omnidirectional scenarios, and propose a novel two-stage object detector, i.e., Reprojection R-CNN by combining both ERP and perspective projection. Owing to the omnidirectional field-of-view of ERP, Reprojection R-CNN first generates coarse region proposals efficiently by a distortion-aware spherical region proposal network. Then, it leverages the distortion-free perspective projection and refines the proposed regions by a novel reprojection network. We construct two novel synthetic datasets for training and evaluation. Experiments reveal that Reprojection R-CNN outperforms the previous state-of-the-art methods on the mAP metric. In addition, the proposed detector could run at 178ms per image in the panoramic datasets, which implies its practicability in real-world applications.
: Object detection in spherical images is an emerging task in computer vision, and several efforts @cite_32 @cite_15 @cite_31 have been made to push this topic forward. @cite_15 utilize network distillation: their approach applies a regular CNN to a specific tangent plane, with the origin aligned to the object center, to generate region proposals. They construct a synthetic dataset by projecting objects from 2D images onto a sphere. Specifically, for each image in the dataset, they select a single bounding box and project it onto the 180th meridian of the sphere at different polar angles. @cite_31 exploit a perspective-projection based detector on a real-world dataset. However, they annotate the objects with rectangular regions on the ERP, even though these regions should appear distorted on the sphere. Meanwhile, @cite_32 attach rendered 3D car images to real-world omnidirectional images to create the synthetic FlyingCars dataset. To address the distortion in the ERP, they utilize spherical convolution and apply it to a vanilla SSD.
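The tangent-plane (perspective) projection that these methods rely on is the gnomonic projection. The sketch below uses the standard formulas; the function name and argument conventions are chosen here for illustration and do not come from the cited works.

import numpy as np

def gnomonic_project(lon, lat, lon0, lat0):
    # Project a sphere point (lon, lat) onto the tangent plane centred at (lon0, lat0).
    # All angles in radians; output (x, y) in tangent-plane coordinates.
    cos_c = np.sin(lat0) * np.sin(lat) + np.cos(lat0) * np.cos(lat) * np.cos(lon - lon0)
    x = np.cos(lat) * np.sin(lon - lon0) / cos_c
    y = (np.cos(lat0) * np.sin(lat) - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0)) / cos_c
    return x, y   # valid only for points on the same hemisphere as the centre (cos_c > 0)

x, y = gnomonic_project(np.radians(10.0), np.radians(45.0), 0.0, np.radians(40.0))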
{ "cite_N": [ "@cite_15", "@cite_31", "@cite_32" ], "mid": [ "2963809933", "2963721253", "2725486421", "2589615404" ], "abstract": [ "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.", "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L 1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L 1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40 while remaining highly competitive in terms of processing time.", "In this paper, we propose a novel method called Rotational Region CNN (R2CNN) for detecting arbitrary-oriented texts in natural scene images. The framework is based on Faster R-CNN [1] architecture. First, we use the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose the texts with different orientations. Second, for each axis-aligned text box proposed by RPN, we extract its pooled features with different pooled sizes and the concatenated features are used to simultaneously predict the text non-text score, axis-aligned box and inclined minimum area box. At last, we use an inclined non-maximum suppression to get the detection results. Our approach achieves competitive results on text detection benchmarks: ICDAR 2015 and ICDAR 2013.", "Object proposals have recently emerged as an essential cornerstone for object detection. The current state-of-the-art object detectors employ object proposals to detect objects within a modest set of candidate bounding box proposals instead of exhaustively searching across an image using the sliding window approach. However, achieving high recall and good localization with few proposals is still a challenging problem. 
The challenge becomes even more difficult in the context of autonomous driving, in which small objects, occlusion, shadows, and reflections usually occur. In this paper, we present a robust object proposals re-ranking algorithm that effectivity re-ranks candidates generated from a customized class-independent 3DOP (3D Object Proposals) method using a two-stream convolutional neural network (CNN). The goal is to ensure that those proposals that accurately cover the desired objects are amongst the few top-ranked candidates. The proposed algorithm, which we call DeepStereoOP, exploits not only RGB images as in the conventional CNN architecture, but also depth features including disparity map and distance to the ground. Experiments show that the proposed algorithm outperforms all existing object proposal algorithms on the challenging KITTI benchmark in terms of both recall and localization. Furthermore, the combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results of all three KITTI object classes. HighlightsWe present a robust object proposals re-ranking algorithm for object detection in autonomous driving.Both RGB images and depth features are included in the proposed two-stream CNN architecture called DeepStereoOP.Initial object proposals are generated from a customized class-independent 3DOP method.Experiments show that the proposed algorithm outperforms all existing object proposals algorithms.The combination of DeepStereoOP and Fast R-CNN achieves one of the best detection results on KITTI benchmark." ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation requires a large computational cost and an enormous number of parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it has become important to strike a trade-off between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates a sufficient receptive field and densely utilizes the contextual information. Experiments on the Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model or postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Real-time semantic segmentation networks require a trade-off between high-quality prediction and high inference speed. ENet @cite_8 was the first network designed for real-time segmentation; it trims a large number of convolution filters to reduce computation. ICNet @cite_21 proposes an image cascade network that incorporates multi-resolution branches. ERFNet @cite_20 uses residual connections and factorized convolutions to remain efficient while retaining remarkable accuracy. More recently, ESPNet @cite_29 introduced an efficient spatial pyramid (ESP) module, which brings great improvements in both speed and performance. BiSeNet @cite_2 proposes two paths to combine spatial information and context information. These networks successfully trade off speed against accuracy, but there is still substantial room for further improvement.
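The factorized convolutions mentioned above (and, relatedly, the depth-wise asymmetric convolution in the DAB module) replace a square kernel with a vertical and a horizontal one. The PyTorch sketch below illustrates this factorization; the function name and the optional dilation argument are illustrative assumptions.

import torch
import torch.nn as nn

def factorized_conv(channels, dilation=1):
    # Replace a 3x3 convolution by a 3x1 followed by a 1x3 convolution;
    # the receptive field is preserved while parameters drop from 9*C*C to 6*C*C.
    return nn.Sequential(
        nn.Conv2d(channels, channels, (3, 1), padding=(dilation, 0),
                  dilation=(dilation, 1), bias=False),
        nn.Conv2d(channels, channels, (1, 3), padding=(0, dilation),
                  dilation=(1, dilation), bias=False),
    )

block = factorized_conv(64)
y = block(torch.randn(1, 64, 32, 32))   # spatial size is preserved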
{ "cite_N": [ "@cite_8", "@cite_29", "@cite_21", "@cite_2", "@cite_20" ], "mid": [ "2611259176", "2964217532", "2790933182", "2563705555" ], "abstract": [ "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.", "We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8 less. We evaluated EPSNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. 
Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date." ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation requires a large computational cost and an enormous number of parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it has become important to strike a trade-off between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates a sufficient receptive field and densely utilizes the contextual information. Experiments on the Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model or postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Dilated convolution @cite_23 inserts gaps (zeros) between the elements of a standard convolution kernel, which enlarges the effective receptive field without increasing the number of parameters; hence it is widely used in semantic segmentation models. In the DeepLab series @cite_40 @cite_35 @cite_3 , an atrous spatial pyramid pooling (ASPP) module is introduced, which employs multiple parallel filters with different dilation rates to collect multi-scale information. DenseASPP @cite_39 concatenates a set of dilated convolution layers to generate a dense multi-scale feature representation. Most state-of-the-art networks in semantic segmentation exploit dilated convolution, which demonstrates its effectiveness in pixel-level prediction tasks.
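A minimal PyTorch sketch of an ASPP-style block is given below; the channel counts and dilation rates are illustrative assumptions, and the actual DeepLab module additionally includes an image-level pooling branch and batch normalization.

import torch
import torch.nn as nn

class MiniASPP(nn.Module):
    # Parallel 3x3 convolutions with different dilation rates, fused by a 1x1 projection.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # all branches keep the spatial size
        return self.project(torch.cat(feats, dim=1))      # fuse multi-scale context

y = MiniASPP(64, 64)(torch.randn(1, 64, 32, 32))          # y: (1, 64, 32, 32)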
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_39", "@cite_40", "@cite_23" ], "mid": [ "2412782625", "2952865063", "2895340898", "2950510876" ], "abstract": [ "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. 
All of our code is made publicly available online.", "This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM). A Pyramid Dilated Convolution (PDC) module is first designed for simultaneously extracting spatial features at multiple scales. These spatial features are then concatenated and fed into an extended Deeper Bidirectional ConvLSTM (DB-ConvLSTM) to learn spatiotemporal information. Forward and backward ConvLSTM units are placed in two layers and connected in a cascaded way, encouraging information flow between the bi-directional streams and leading to deeper feature extraction. We further augment DB-ConvLSTM with a PDC-like structure, by adopting several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. Extensive experimental results show that our method outperforms previous video saliency models in a large margin, with a real-time speed of 20 fps on a single GPU. With unsupervised video object segmentation as an example application, the proposed model (with a CRF-based post-process) achieves state-of-the-art results on two popular benchmarks, well demonstrating its superior performance and high applicability.", "Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at this https URL ." ] }
1907.11357
2965380104
As a pixel-level prediction task, semantic segmentation requires a large computational cost and an enormous number of parameters to obtain high performance. Recently, due to the increasing demand for autonomous systems and robots, it has become important to strike a trade-off between accuracy and inference speed. In this paper, we propose a novel Depthwise Asymmetric Bottleneck (DAB) module to address this dilemma, which efficiently adopts depth-wise asymmetric convolution and dilated convolution to build a bottleneck structure. Based on the DAB module, we design a Depth-wise Asymmetric Bottleneck Network (DABNet) especially for real-time semantic segmentation, which creates a sufficient receptive field and densely utilizes the contextual information. Experiments on the Cityscapes and CamVid datasets demonstrate that the proposed DABNet achieves a balance between speed and precision. Specifically, without any pretrained model or postprocessing, it achieves 70.1% Mean IoU on the Cityscapes test dataset with only 0.76 million parameters and a speed of 104 FPS on a single GTX 1080Ti card.
Convolution factorization divides a standard convolution into several cheaper steps to reduce computational cost and memory, and it is extensively adopted in lightweight CNN models. The Inception models @cite_11 @cite_25 @cite_1 employ several small convolutions to replace a convolution with a large kernel while maintaining the size of the receptive field. Xception @cite_36 and MobileNet @cite_0 use depth-wise separable convolutions to reduce the amount of computation with only a slight drop in performance. MobileNetV2 @cite_15 proposes an inverted residual block and linear bottlenecks to further improve the performance. ShuffleNet @cite_17 applies point-wise group convolution with a channel shuffle operation to enable information exchange between different groups of channels.
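As a concrete example of this factorization, the sketch below implements a depth-wise separable convolution in PyTorch (the class name and stride argument are illustrative): a 3x3 depth-wise convolution filters each channel independently, and a 1x1 point-wise convolution then mixes channels, reducing the parameter count from 9*Cin*Cout to 9*Cin + Cin*Cout.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # One 3x3 filter per input channel (groups=in_ch), then a 1x1 channel-mixing conv.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

y = DepthwiseSeparableConv(32, 64)(torch.randn(1, 32, 56, 56))   # y: (1, 64, 56, 56)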
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_17", "@cite_0", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2788715907", "2885059312", "2963705792", "2963048316" ], "abstract": [ "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1 , 79.7 and 60.9 , respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.", "We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \"deep kernels\", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84 classification error on MNIST, a new record for GPs with a comparable number of parameters.", "Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. @PARASPLIT We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. 
Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1-7.3x convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers.", "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error." ] }
1907.11484
2966798122
In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method.
Domain adaptation is a technique that adapts a model trained in one domain to another. Many related works try to define and minimize a distance between the feature distributions of data from different domains @cite_6 @cite_3 @cite_0 @cite_23 @cite_14 @cite_11 . For example, the deep domain confusion (DDC) model @cite_14 learns domain-invariant representations by minimizing the maximum mean discrepancy (MMD) between feature distributions. Long et al. propose to adapt all task-specific layers and explore multiple-kernel variants of MMD @cite_3 . Ganin and Lempitsky use adversarial learning to achieve domain adaptation, learning the distance with a discriminator @cite_6 . Saito et al. propose to maximize the discrepancy between two classifiers' outputs to align distributions @cite_10 . Most of the works mentioned above are designed for classification or segmentation.
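To make the MMD-based alignment concrete, the sketch below computes a biased squared-MMD estimate between batches of source and target features with a single-bandwidth RBF kernel; the function name and the single kernel are simplifying assumptions, and DDC-style models add such a term to the task loss.

import torch

def gaussian_mmd2(feat_src, feat_tgt, sigma=1.0):
    # feat_src: (N, D) source features; feat_tgt: (M, D) target features.
    def rbf(a, b):
        d2 = ((a.unsqueeze(1) - b.unsqueeze(0)) ** 2).sum(-1)   # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    # Biased estimate of MMD^2; it approaches zero when the two feature distributions match.
    return (rbf(feat_src, feat_src).mean() + rbf(feat_tgt, feat_tgt).mean()
            - 2.0 * rbf(feat_src, feat_tgt).mean())

loss = gaussian_mmd2(torch.randn(32, 256), torch.randn(32, 256))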
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_10", "@cite_11" ], "mid": [ "2962970380", "2607350342", "1594039573", "2811444512" ], "abstract": [ "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on several visual domain adaptation benchmarks.", "Domain adaptation is transfer learning which aims to generalize a learning model across training and testing data with different distributions. Most previous research tackle this problem in seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domain, measured in terms of both marginal and conditional probability distribution via Maximum Mean Discrepancy is minimized so as to attract two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distances between each label dependent sub-domain to all others so as to drag different class dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given the fact that the underlying data manifold could have complex geometric structure, we further propose the constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods with significant margins.", "Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted to data depicting the same classes, but described by another observation system. 
Among the many strategies proposed, finding domain-invariant representations has shown excellent properties, in particular since it allows to train a unique classifier effective in all domains. In this paper, we propose a regularized unsupervised optimal transportation model to perform the alignment of the representations in the source and target domains. We learn a transportation plan matching both PDFs, which constrains labeled samples of the same class in the source domain to remain close during transport. This way, we exploit at the same time the labeled samples in the source and the distributions observed in both domains. Experiments on toy and challenging real visual adaptation examples show the interest of the method, that consistently outperforms state of the art approaches. In addition, numerical experiments show that our approach leads to better performances on domain invariant deep learning features and can be easily adapted to the semi-supervised case where few labeled samples are available in the target domain.", "Domain adaptation is a promising technique when addressing limited or no labeled target data by borrowing well-labeled knowledge from the auxiliary source data. Recently, researchers have exploited multi-layer structures for discriminative feature learning to reduce the domain discrepancy. However, there are limited research efforts on simultaneously building a deep structure and a discriminative classifier over both labeled source and unlabeled target. In this paper, we propose a semi-supervised deep domain adaptation framework, in which the multi-layer feature extractor and a multi-class classifier are jointly learned to benefit from each other. Specifically, we develop a novel semi-supervised class-wise adaptation manner to fight off the conditional distribution mismatch between two domains by assigning a probabilistic label to each target sample, i.e., multiple class labels with different probabilities. Furthermore, a multi-class classifier is simultaneously trained on labeled source and unlabeled target samples in a semi-supervised fashion. In this way, the deep structure can formally alleviate the domain divergence and enhance the feature transferability. Experimental evaluations on several standard cross-domain benchmarks verify the superiority of our proposed approach." ] }
1907.11484
2966798122
In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method.
Huang et al. propose that aligning the distributions of activations of intermediate layers can alleviate covariate shift @cite_20 . This idea is partly similar to ours. However, instead of using a least squares generative adversarial network (LSGAN) @cite_18 loss to align distributions for semantic segmentation, we use a multi-level image patch loss for object detection.
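For context, the least-squares adversarial objective (LSGAN) mentioned above replaces the usual GAN log-loss with squared errors; a minimal sketch of the two terms, assuming a hypothetical domain discriminator whose scores on source- and target-domain activations are given as arrays, is:

import numpy as np

def lsgan_discriminator_loss(d_source, d_target):
    # The discriminator is trained to score source activations as 1 and target as 0.
    return ((d_source - 1.0) ** 2).mean() + (d_target ** 2).mean()

def lsgan_alignment_loss(d_target):
    # The feature extractor is trained so target activations are scored like source,
    # which pushes the two activation distributions toward each other.
    return ((d_target - 1.0) ** 2).mean()

# Toy usage with made-up discriminator scores in [0, 1].
d_src = np.array([0.9, 0.8, 0.7])
d_tgt = np.array([0.2, 0.4, 0.1])
print(lsgan_discriminator_loss(d_src, d_tgt), lsgan_alignment_loss(d_tgt))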
{ "cite_N": [ "@cite_18", "@cite_20" ], "mid": [ "2895168809", "2738171447", "2964055354", "2464606141" ], "abstract": [ "We introduce a layer-wise unsupervised domain adaptation approach for semantic segmentation. Instead of merely matching the output distributions of the source and target domains, our approach aligns the distributions of activations of intermediate layers. This scheme exhibits two key advantages. First, matching across intermediate layers introduces more constraints for training the network in the target domain, making the optimization problem better conditioned. Second, the matched activations at each layer provide similar inputs to the next layer for both training and adaptation, and thus alleviate covariate shift. We use a Generative Adversarial Network (or GAN) to align activation distributions. Experimental results show that our approach achieves state-of-the-art results on a variety of popular domain adaptation tasks, including (1) from GTA to Cityscapes for semantic segmentation, (2) from SYNTHIA to Cityscapes for semantic segmentation, and (3) adaptations on USPS and MNIST for image classification (The website of this paper is https: rsents.github.io dam.html).", "Recent work has demonstrated the emergence of semantic object-part detectors in activation patterns of convolutional neural networks (CNNs), but did not account for the distributed multi-layer neural activations in such networks. In this work, we propose a novel method to extract distributed patterns of activations from a CNN and show that such patterns correspond to high-level visual attributes. We propose an unsupervised learning module that sits above a pre-trained CNN and learns distributed activation patterns of the network. We utilize elastic non-negative matrix factorization to analyze the responses of a pretrained CNN to an input image and extract salient image regions. The corresponding patterns of neural activations for the extracted salient regions are then clustered via unsupervised deep embedding for clustering (DEC) framework. We demonstrate that these distributed activations contain high-level image features that could be explicitly used for image classification.", "In this paper, we make two contributions to unsupervised domain adaptation (UDA) using the convolutional neural network (CNN). First, our approach transfers knowledge in all the convolutional layers through attention alignment. Most previous methods align high-level representations, e.g., activations of the fully connected (FC) layers. In these methods, however, the convolutional layers which underpin critical low-level domain knowledge cannot be updated directly towards reducing domain discrepancy. Specifically, we assume that the discriminative regions in an image are relatively invariant to image style changes. Based on this assumption, we propose an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain. Second, we estimate the posterior label distribution of the unlabeled data for target network training. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are vulnerable to label estimation errors. Instead, our approach uses category distribution to calculate the cross-entropy loss for training, thereby ameliorating the error accumulation of the estimated labels. 
The two contributions allow our approach to outperform the state-of-the-art methods by +2.6 on the Office-31 dataset.", "We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common \"foreground\" regions in the two images allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
Neuro-symbolic approaches @cite_30 express the internal or output structure of the learner using logic. First-Order Logic (FOL) is often selected as the declarative framework for the knowledge because of its flexibility and expressive power. This class of methodologies is rooted in earlier work from the Statistical Relational Learning community, which developed frameworks for performing logic inference in the presence of uncertainty. For example, Markov Logic Networks @cite_2 and Probabilistic Soft Logic @cite_19 integrate FOL and graphical models. A common solution to integrating logic reasoning with uncertainty and deep learning relies on deep networks to approximate the FOL predicates; the overall architecture is optimized end-to-end by relaxing the FOL into a differentiable form, which translates into a set of constraints. This approach is followed, with minor variants, by Semantic Based Regularization @cite_28 , the Lyrics framework @cite_17 , Logic Tensor Networks @cite_14 , the Semantic Loss @cite_0 , and DeepProbLog @cite_4 , which extends the ProbLog @cite_13 @cite_15 framework with predicates approximated by jointly learned functions.
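As a toy illustration of this relaxation (deliberately not tied to any specific framework's API), the rule ∀x: A(x) → B(x) can be turned into a differentiable penalty on the outputs of two neural predicates a(x), b(x) in [0, 1], using the product t-norm with standard negation for the implication and an average over the samples for the quantifier:

import numpy as np

def implication_truth(a, b):
    # Material implication A -> B relaxed as NOT(A AND NOT B) with the product
    # t-norm and standard negation: 1 - a * (1 - b).
    return 1.0 - a * (1.0 - b)

def forall_rule_loss(a_vals, b_vals):
    # "forall x: A(x) -> B(x)" converted with an arithmetic-mean quantifier;
    # the penalty is the distance of the relaxed formula from full truth.
    return 1.0 - implication_truth(a_vals, b_vals).mean()

# Toy usage: in a neuro-symbolic model, a_vals and b_vals would be the outputs
# of jointly trained neural predicates evaluated on a batch of inputs.
a_vals = np.array([0.9, 0.2, 0.7])
b_vals = np.array([0.8, 0.9, 0.1])
print(forall_rule_loss(a_vals, b_vals))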
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_28", "@cite_0", "@cite_19", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2617995259", "2556644972", "2048919592", "1986767090" ], "abstract": [ "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "The effective integration of learning and reasoning is a well-known and challenging area of research within artificial intelligence. Neural-symbolic systems seek to integrate learning and reasoning by combining neural networks and symbolic knowledge representation. In this paper, a novel methodology is proposed for the extraction of relational knowledge from neural networks which are trainable by the efficient application of the backpropagation learning algorithm. First-order logic rules are extracted from the neural networks, offering interpretable symbolic relational models on which logical reasoning can be performed. The wellknown knowledge extraction algorithm TREPAN was adapted and incorporated into the first-order version of the neural-symbolic system CILP++. Empirical results obtained in comparison with a probabilistic model for relational learning, Markov Logic Networks, and a state-of-the-art Inductive Logic Programming system, Aleph, indicate that the proposed methodology achieves competitive accuracy results consistently in all datasets investigated, while either Markov Logic Networks or Aleph show considerably worse results in at least one dataset. It is expected that effective knowledge extraction from neural networks can contribute to the integration of heterogeneous knowledge representations.", "We propose a general framework to incorporate first-order logic (FOL) clauses, that are thought of as an abstract and partial representation of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme where each task is associated with a unary predicate defined on the feature space, while higher level abstract representations consist of FOL clauses made of those predicates. We re-use the kernel machine mathematical apparatus to solve the problem as primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term deriving from forcing real-valued constraints deriving from the predicates. Unlike for classic kernel machines, however, depending on the logic clauses, the overall function to be optimized is not convex anymore. 
An important contribution is to show that while tackling the optimization by classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we start learning the supervised examples until convergence is reached, and then continue by forcing the logic clauses is a viable direction to attack the problem. Some promising experimental results are given on artificial learning tasks and on the automatic tagging of bibtex entries to emphasize the comparison with plain kernel machines.", "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 of features can be achieved with a small loss of accuracy." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
Within this class of approaches, it is of fundamental importance to define how to perform the fuzzy relaxation of the formulas in the knowledge base. For instance, @cite_31 introduces a learning framework where formulas are converted according to the Łukasiewicz t-norm and t-conorm. @cite_18 also proposes to convert the formulas according to Łukasiewicz logic; however, it exploits the weak conjunction in place of the t-norm to obtain convex functional constraints. A more practical approach has been considered in Semantic Based Regularization (SBR), where all the fundamental t-norms have been evaluated on different learning tasks @cite_28 . However, no unified principle emerges from this prior work for expressing the cost function to be optimized with respect to the selected fuzzy logic. For example, all the aforementioned approaches rely on a fixed loss function that linearly measures the distance of the formulas from the 1-value. Even if this may be justified from a logical point of view ( @math ), it is not clear whether the choice is principled from a learning standpoint, since deep learning approaches use very different loss functions to enforce the fitting of the supervised data.
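For reference, the operators mentioned above can be spelled out directly; a small sketch (NumPy, truth values in [0, 1]) of the Gödel/minimum, product, and Łukasiewicz t-norms, together with the fixed linear loss measuring the distance of a relaxed formula from the 1-value, is:

import numpy as np

def t_goedel(x, y):
    # Minimum t-norm; also the "weak conjunction" used to keep constraints convex.
    return np.minimum(x, y)

def t_product(x, y):
    # Product t-norm.
    return x * y

def t_lukasiewicz(x, y):
    # Łukasiewicz (strong conjunction) t-norm.
    return np.maximum(0.0, x + y - 1.0)

def linear_constraint_loss(truth_degree):
    # The fixed loss used by the approaches above: the linear distance of the
    # relaxed formula from full truth (the 1-value).
    return 1.0 - truth_degree

print(linear_constraint_loss(t_lukasiewicz(0.7, 0.6)))  # 0.7 for this toy pair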
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_18" ], "mid": [ "2201744460", "2793731154", "2803642127", "2605102252" ], "abstract": [ "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any norm on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random adversarial input transform) and regularization techniques (such as weight decay, dropout, and batch norm).", "We study binary classification in the presence of class-conditional random noise, where the learner gets to see labels that are flipped independently with some probability, and where the flip probability depends on the class. Our goal is to devise learning algorithms that are efficient and statistically consistent with respect to commonly used utility measures. In particular, we look at a family of measures motivated by their application in domains where cost-sensitive learning is necessary (for example, when there is class imbalance). 
In contrast to most of the existing literature on consistent classification that are limited to the classical 0-1 loss, our analysis includes more general utility measures such as the AM measure (arithmetic mean of True Positive Rate and True Negative Rate). For this problem of cost-sensitive learning under class-conditional random noise, we develop two approaches that are based on suitably modifying surrogate losses. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical utility maximization in the presence of i.i.d. data with noisy labels. If the loss function satis_es a simple symmetry condition, we show that using unbiased estimator leads to an efficient algorithm for empirical maximization. Second, by leveraging a reduction of risk minimization under noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong utility bounds. This approach implies that methods already used in practice, such as biased SVM and weighted logistic regression, are provably noise-tolerant. For two practically important measures in our family, we show that the proposed methods are competitive with respect to recently proposed methods for dealing with label noise in several benchmark data sets.", "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15 points, while converging three times as fast as other triplet-based losses." ] }
1907.11468
2965936489
Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. Deep architectures are typically trained following a supervised scheme and, therefore, they rely on the availability of a large amount of labeled training data to effectively learn their parameters. Neuro-symbolic approaches have recently gained popularity to inject prior knowledge into a deep learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neuro-symbolic approaches is based on First-Order Logic to represent prior knowledge, that is relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neuro-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to simple supervised learning, the presented theoretical apparatus provides a clean justification to the popular cross-entropy loss, that has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. One advantage of the proposed learning formulation is that it can be extended to all the knowledge that can be represented by a neuro-symbolic method, and it allows the development of a novel class of loss functions, that the experimental results show to lead to faster convergence rates than other approaches previously proposed in the literature.
From a learning point of view, different quantifier conversions can be taken into account and validated as well. For instance, the arithmetic mean and the maximum operator have been used to convert the universal and existential quantifiers in @cite_28 , respectively. Different possibilities have been considered for the universal quantifier in @cite_14 , while the existential quantifier depends on this choice via the application of strong negation using De Morgan's law. The arithmetic mean operator has been shown to achieve better performance in the conversion of the universal quantifier @cite_14 , with the existential quantifier implemented by Skolemization. However, the universal and existential quantifiers can be thought of as a generalized AND and OR, respectively. Therefore, converting the quantifiers using a mean operator has no direct justification within a logical theory.
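To make the distinction concrete, a minimal sketch (again NumPy over per-sample truth degrees) contrasting the logically grounded conversions, universal as a generalized AND and existential as a generalized OR, with the arithmetic-mean conversion is:

import numpy as np

def forall_as_generalized_and(truths):
    # Universal quantifier as a generalized AND (minimum truth degree).
    return np.min(truths)

def exists_as_generalized_or(truths):
    # Existential quantifier as a generalized OR (maximum truth degree).
    return np.max(truths)

def forall_as_mean(truths):
    # Arithmetic-mean conversion: smoother to optimize, but it is not a t-norm,
    # hence the lack of a direct logical justification noted above.
    return np.mean(truths)

truths = np.array([0.9, 0.95, 0.1])
print(forall_as_generalized_and(truths), forall_as_mean(truths), exists_as_generalized_or(truths))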
{ "cite_N": [ "@cite_28", "@cite_14" ], "mid": [ "1511737804", "2175917793", "2003531456", "1515930456" ], "abstract": [ "Motivated by applications to program verification, we study a decision procedure for satisfiability in an expressive fragment of a theory of arrays, which is parameterized by the theories of the array elements. The decision procedure reduces satisfiability of a formula of the fragment to satisfiability of an equisatisfiable quantifier-free formula in the combined theory of equality with uninterpreted functions (EUF), Presburger arithmetic, and the element theories. This fragment allows a constrained use of universal quantification, so that one quantifier alternation is allowed, with some syntactic restrictions. It allows expressing, for example, that an assertion holds for all elements in a given index range, that two arrays are equal in a given range, or that an array is sorted. We demonstrate its expressiveness through applications to verification of sorting algorithms and parameterized systems. We also prove that satisfiability is undecidable for several natural extensions to the fragment. Finally, we describe our implementation in the πVC verifying compiler.", "Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn't mean that the information is worthless --- rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations.", "The use of conventional classical logic is misleading for characterizing the behavior of logic programs because a logic program, when queried, will do one of three things: succeed with the query, fail with it, or not respond because it has fallen into infinite backtracking. In [7] Kleene proposed a three-valued logic for use in recursive function theory. The so-called third truth value was really undefined: truth value not determined. This logic is a useful tool in logic-program specification, and in particular, for describing models. (See [11].) 
Tarski showed that formal languages, like arithmetic, cannot contain their own truth predicate because one could then construct a paradoxical sentence that effectively asserts its own falsehood. Natural languages do allow the use of \"is true\", so by Tarski's argument a semantics for natural language must leave truth-value gaps: some sentences must fail to have a truth value. In [8] Kripke showed how a model having truth-value gaps, using Kleene's three-valued logic, could be specified. The mechanism he used is a famiUar one in program semantics: consider the least fixed point of a certain monotone operator. But that operator must be defined on a space involving three-valued logic, and for Kripke's application it will not be continuous. We apply techniques similar to Kripke's to logic programs. We associate with each program a monotone operator on a space of three-valued logic interpretations, or better partial interpretations. This space is not a complete lattice, and the operators are not, in general, continuous. But least and other fixed points do exist. These fixed points are shown to provide suitable three-valued program models. They relate closely to the least and greatest fixed points of the operators used in [1]. Because of the extra machinery involved, our treatment allows for a natural consideration of negation, and indeed, of the other prepositional connectives as well. And because of the elaborate structure of fixed points available, we are able to", "The problem of deciding the satisfiability of a quantifier-free formula with respect to a background theory, also known as Satisfiability Modulo Theories (SMT), is gaining increasing relevance in verification: representation capabilities beyond propositional logic allow for a natural modeling of real-world problems (e.g., pipeline and RTL circuits verification, proof obligations in software systems). In this paper, we focus on the case where the background theory is the combination T1∪T2 of two simpler theories. Many SMT procedures combine a boolean model enumeration with a decision procedure for T1∪T2, where conjunctions of literals can be decided by an integration schema such as Nelson-Oppen, via a structured exchange of interface formulae (e.g., equalities in the case of convex theories, disjunctions of equalities otherwise). We propose a new approach for SMT(T1∪T2), called Delayed Theory Combination, which does not require a decision procedure for T1∪T2, but only individual decision procedures for T1 and T2, which are directly integrated into the boolean model enumerator. This approach is much simpler and natural, allows each of the solvers to be implemented and optimized without taking into account the others, and it nicely encompasses the case of non-convex theories. We show the effectiveness of the approach by a thorough experimental comparison." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyzes that have been carried out on these protocols show that they are vulnerable to one or few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracted the whole secrets of the protocol by two communication with the target tag. In addition, recently proposed an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
A factor that can help expand the IoT is building users' confidence that their privacy and security will be preserved: without proper security in the infrastructure, damage to IoT-based equipment, the loss of personal information and privacy, and even the disclosure of economic and other data become highly likely, which may make the technology unusable in critical applications. Ronen and Shamir @cite_6 pointed out that if security is not taken into account in IoT-based infrastructure, the technology could threaten the future of the world much like a nuclear bomb.
{ "cite_N": [ "@cite_6" ], "mid": [ "2573616158", "2766595485", "2523324202", "1992595252" ], "abstract": [ "Internet of Things (IoT) is a technology in which for any object the ability to send data via communications networks is provided. Ensuring the security of Internet services and applications is an important factor in attracting users to use this platform. In the other words, if people are unable to trust that the equipment and information will be reasonably safe against damage, abuse and the other security threats, this lack of trust leads to a reduction in the use of IoT-based applications. Recently, Tewari and Gupta (J Supercomput 1–18, 2016) have proposed an ultralightweight RFID authentication protocol to provide desired security for objects in IoT. In this paper, we consider the security of the proposed protocol and present a passive secret disclosure attack against it. The success probability of the attack is ‘1’ while the complexity of the attack is only eavesdropping one session of the protocol. The presented attack has negligible complexity. We verify the correctness of the presented attack by simulation.", "Abstract In large-scale Internet of Things (IoT) systems, huge volumes of data are collected from anywhere at any time, which may invade people’s privacy, especially when the systems are used in medical or daily living environments. Preserving privacy is an important issue, and higher privacy demands usually tend to require weaker identity. However, previous research has indicated that strong security tends to demand strong identity, especially in authentication processes. Thus, defining a good tradeoff between privacy and security remains a challenging problem. This motivates us to develop a privacy-preserving and accountable authentication protocol for IoT end-devices with weaker identity, which integrates an adapted construction of short group signatures and Shamir’s secret sharing scheme. We analyze the security properties of our protocol in the context of six typical attacks and verify the formal security using the Proverif tool. Experiments using our implementation in MacBook Pro and Intel Edison development platforms show that our authentication protocol is feasible in practice.", "With the advent of the Internet of Things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health). Due to limited computational capacity available on IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While private cloud infrastructures for handling large amounts of data streams are expensive to build, using low cost public (untrusted) cloud infrastructures for processing continuous queries including on sensitive data leads to concerns over data confidentiality. This paper presents STYX, a novel programming abstraction and managed runtime system, that ensures confidentiality of IoT applications whilst leveraging the public cloud for continuous query processing. The key idea is to intelligently utilize partially homomorphic encryption to perform as many computationally intensive operations as possible in the untrusted cloud. STYX provides a simple abstraction to the IoT developer to hide the complexities of (1) applying complex cryptographic primitives, (2) reasoning about performance of such primitives, (3) deciding which computations can be executed in an untrusted tier, and (4) optimizing cloud resource usage. 
An empirical evaluation with benchmarks and case studies shows the feasibility of our approach.", "Emerging Internet of Things (IoTs) technologies provide many benefits to the improvement of eHealth. The successful deployment of IoTs depends on ensuring security and privacy that need to adapt to their processing capabilities and resource use. IoTs are vulnerable to attacks since communications are mostly wireless, unattended things are usually vulnerable to physical attacks, and most IoT components are constrained by energy, communications, and computation capabilities necessary for the implementation of complex security-supporting schemes. This paper describes a risk-based adaptive security framework for IoTs in eHealth that will estimate and predict risk damages and future benefits using game theory and context-awareness techniques. The paper also describes the validation case study." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyzes that have been carried out on these protocols show that they are vulnerable to one or few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracted the whole secrets of the protocol by two communication with the target tag. In addition, recently proposed an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
Reviewing the proposed mechanisms and designing new models that are compatible with IoT devices is also very important. Given that an IoT system can include many objects with limited resources, it requires special protocols to guarantee privacy and security. Therefore, with the further development of IoT, its security concerns are expected to receive more attention. So far, several security protocols have been proposed to ensure IoT security, e.g. @cite_14 @cite_26 @cite_2 @cite_22 @cite_12 ; however, most of them have failed to provide their stated security goals @cite_8 @cite_4 @cite_0 @cite_30 @cite_9 @cite_29 , and various attacks, such as disclosure of the protocol's secret values, DoS, traceability, and impersonation, have been reported against them. The publication of these attacks has advanced protocol-design knowledge, and designers now try to make their protocols resistant to the attacks published so far. Unfortunately, attacks against newly designed protocols keep appearing, and this field has not yet matured.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_22", "@cite_4", "@cite_8", "@cite_9", "@cite_29", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "2962487379", "2887736592", "2766595485", "2755191230" ], "abstract": [ "Abstract The safety of medical data and equipment plays a vital role in today’s world of Medical Internet of Things (MIoT). These IoT devices have many constraints (e.g., memory size, processing capacity, and power consumption) that make it challenging to use cost-effective and energy-efficient security solutions. Recently, researchers have proposed a few Radio-Frequency Identification (RFID) based security solutions for MIoT. The use of RFID technology in securing IoT systems is rapidly increasing because it provides secure and lightweight safety mechanisms for these systems. More recently, authors have proposed a lightweight RFID mutual authentication (LRMI) protocol. The authors argue that LRMI meets the necessary security requirements for RFID systems, and the same applies to MIoT applications as well. In this paper, our contribution has two-folds, firstly we analyze the LRMI protocol’s security to demonstrate that it is vulnerable to various attacks such as secret disclosure, reader impersonation, and tag traceability. Also, it is not able to preserve the anonymity of the tag and the reader. Secondly, we propose a new secure and lightweight mutual RFID authentication (SecLAP) protocol, which provides secure communication and preserves privacy in MIoT systems. Our security analysis shows that the SecLAP protocol is robust against de-synchronization, replay, reader tag impersonation, and traceability attacks, and it ensures forward and backward data communication security. We use Burrows-Abadi-Needham (BAN) logic to validate the security features of SecLAP. Moreover, we compare SecLAP with the state-of-the-art and validate its performance through a Field Programmable Gate Array (FPGA) implementation, which shows that it is lightweight, consumes fewer resources on tags concerning computation functions, and requires less number of flows.", "Internet of Things (IoT) has stimulated great interest in many researchers owing to its capability to connect billions of physical devices to the internet via heterogeneous access network. Security is a paramount aspect of IoT that needs to be addressed urgently to keep sensitive data private. However, from previous research studies, a number of security flaws in terms of keeping data private can be identified. Tewari and Gupta proposed an ultra-lightweight mutual authentication pRotocol that utilizes bitwise operation to achieve security in IoT networks that use RFID tags. The pRotocol is improved by Wang et. al. to prevent a full key disclosure attack. However, this paper shows that both of the pRotocols are susceptible to full disclosure, man-in-the-middle, tracking, and de-synchronization attacks. A detailed security analysis is conducted and results are presented to prove their vulnerability. Based on the aforementioned analysis, the pRotocol is modified and improved using a three pass mutual authentication. GNY logic is used to formally verify the security of the pRotocol.", "Abstract In large-scale Internet of Things (IoT) systems, huge volumes of data are collected from anywhere at any time, which may invade people’s privacy, especially when the systems are used in medical or daily living environments. Preserving privacy is an important issue, and higher privacy demands usually tend to require weaker identity. 
However, previous research has indicated that strong security tends to demand strong identity, especially in authentication processes. Thus, defining a good tradeoff between privacy and security remains a challenging problem. This motivates us to develop a privacy-preserving and accountable authentication protocol for IoT end-devices with weaker identity, which integrates an adapted construction of short group signatures and Shamir’s secret sharing scheme. We analyze the security properties of our protocol in the context of six typical attacks and verify the formal security using the Proverif tool. Experiments using our implementation in MacBook Pro and Intel Edison development platforms show that our authentication protocol is feasible in practice.", "In recent years, RFID (radio-frequency identification) systems are widely used in many applications. One of the most important applications for this technology is the Internet of things (IoT). Therefore, researchers have proposed several authentication protocols that can be employed in RFID-based IoT systems, and they have claimed that their protocols can satisfy all security requirements of these systems. However, in RFID-based IoT systems we have mobile readers that can be compromised by the adversary. Due to this attack, the adversary can compromise a legitimate reader and obtain its secrets. So, the protocol designers must consider the security of their proposals even in the reader compromised scenario. In this paper, we consider the security of the ultra-lightweight RFID mutual authentication (ULRMAPC) protocol recently proposed by They claimed that their protocol could be applied in the IoT systems and provide strong security. However, in this paper we show that their protocol is vulnerable to denial of service, reader and tag impersonation and de-synchronization attacks. To provide a solution, we present a new authentication protocol, which is more secure than the ULRMAPC protocol and also can be employed in RFID-based IoT systems." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyzes that have been carried out on these protocols show that they are vulnerable to one or few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracted the whole secrets of the protocol by two communication with the target tag. In addition, recently proposed an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
Among the different strategies for designing security protocols, attempts to design a secure ultralightweight protocol for constrained environments have a long (unsuccessful) history. Pioneering examples include SASI @cite_27 , RAPP @cite_1 , SLAP @cite_25 , LMAP @cite_10 and R @math AP @cite_11 , while SecLAP @cite_15 is among the more recent proposals; many other protocols have likewise been compromised by later third-party analyses @cite_23 @cite_13 @cite_5 @cite_3 @cite_17 @cite_19 @cite_28 @cite_20 . All of those protocols tried to provide sufficient security using only a few lightweight operations, such as bitwise AND, OR, XOR, and rotation. However, the mentioned analyses have shown that it is not easy to design a strong protocol from cryptographically weak components.
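To give a flavor of the primitives involved (a purely illustrative sketch in the SASI/RAPP style, not the message schedule of any particular protocol), such designs combine secrets using only XOR and data-dependent circular rotations on fixed-width words, which is exactly the low-diffusion structure the cited analyses exploit:

def rot_left(x, r, bits=96):
    # Circular left rotation of a fixed-width word, a typical ultralightweight primitive.
    r %= bits
    mask = (1 << bits) - 1
    return ((x << r) | ((x & mask) >> (bits - r))) & mask

def toy_message(ids, k1, k2, n1, bits=96):
    # Hypothetical message construction mixing a pseudonym, keys, and a nonce
    # with XOR and rotations only (no non-linear layer).
    return rot_left(ids ^ n1, k1 % bits, bits) ^ rot_left(k2, n1 % bits, bits)

print(hex(toy_message(0x1234, 0xABCD, 0x5678, 0x9999)))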
{ "cite_N": [ "@cite_13", "@cite_28", "@cite_1", "@cite_17", "@cite_3", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_15", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2093422627", "1508967933", "2057811891", "2152926062" ], "abstract": [ "One of the key problems in RFID is security and privacy. The implementation of authentication protocols is a flexible and effective way to solve this problem. This letter proposes a new ultralightweight RFID authentication protocol with permutation (RAPP). RAPP avoids using unbalanced OR and AND operations and introduces a new operation named permutation. The tags only involve three operations: bitwise XOR, left rotation and permutation. In addition, unlike other existing ultralightweight protocols, the last messages exchanged in RAPP are sent by the reader so as to resist de-synchronization attacks. Security analysis shows that RAPP achieves the functionalities of the authentication protocol and is resistant to various attacks. Performance evaluation illustrates that RAPP uses fewer resources on tags in terms of computation operation, storage requirement and communication cost.", "Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.", "Protocols for secure computation enable mutually distrustful parties to jointly compute on their private inputs without revealing anything but the result. Over recent years, secure computation has become practical and considerable effort has been made to make it more and more efficient. A highly important tool in the design of two-party protocols is Yao's garbled circuit construction (Yao 1986), and multiple optimizations on this primitive have led to performance improvements of orders of magnitude over the last years. However, many of these improvements come at the price of making very strong assumptions on the underlying cryptographic primitives being used (e.g., that AES is secure for related keys, that it is circular secure, and even that it behaves like a random permutation when keyed with a public fixed key). The justification behind making these strong assumptions has been that otherwise it is not possible to achieve fast garbling and thus fast secure computation. In this paper, we take a step back and examine whether it is really the case that such strong assumptions are needed. 
We provide new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function. Our results show that in many cases, the penalty incurred is not significant, and so a more conservative approach to the assumptions being used can be adopted.", "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry's bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2λ security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with O(λ · L3) per-gate computation -- i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is O(λ2), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results to the above for LWE, but with worse performance. Based on the Ring LWE assumption, we introduce a number of further optimizations to our schemes. As an example, for circuits of large width -- e.g., where a constant fraction of levels have width at least λ -- we can reduce the per-gate computation of the bootstrapped version to O(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω(λ3.5) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011)." ] }
1907.11322
2966078947
By expanding the connection of objects to the Internet and their entry to human life, the issue of security and privacy has become important. In order to enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyzes that have been carried out on these protocols show that they are vulnerable to one or few attacks, which eliminates the use of these protocols. Therefore, the need for a security protocol on the Internet of Things (IoT) has not yet been resolved. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, at first, we show that this protocol also does not have sufficient security and so it is not recommended to be used in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracted the whole secrets of the protocol by two communication with the target tag. In addition, recently proposed an ultralightweight mutual authentication RFID protocol for blockchain enabled supply chains, supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
In the line of designing ultralightweight protocols, in @cite_7, Tewari and Gupta proposed a new ultralightweight authentication protocol for IoT and claimed that their protocol satisfies all security requirements. However, in @cite_19, an efficient passive secret disclosure attack was applied to this protocol. Moreover, in @cite_4, Wang cryptanalyzed the Tewari and Gupta protocol and also proposed an improved version of it. This protocol was later analysed by Khor and Sidorov @cite_16, who also proposed an improved protocol following the same design paradigm. In this paper, we consider the security of this improved protocol proposed by Khor and Sidorov; for simplicity, we call it KSP (short for Khor and Sidorov protocol) and show that KSP is vulnerable to both a desynchronization attack and a secret disclosure attack.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_4", "@cite_7" ], "mid": [ "1484207026", "2732131698", "2093422627", "2082342720" ], "abstract": [ "proposed a novel ultralightweight RFID mutual authentication protocol [1] that has recently been analyzed in several articles. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given in a previous analysis by We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag's secret rather than just 1bit as previously shown by , which increases the power of a traceability attack. Finally, we show how to extend the aforementioned attack to run a full disclosure attack, which requires to eavesdrop less protocol runs than the proposed attack by i.e., 192<<2 30. Copyright © 2013 John Wiley & Sons, Ltd.", "Recently, Tewari and Gupta proposed a ultra-lightweight mutual authentication protocol in IoT environments for RFID tags. Their protocol aims to provide secure communication with least cost in both storage and computation. Unfortunately, in this paper, we exploit the vulnerability of this protocol. In this attack, an attacker can obtain the key shared between a back-end database server and a tag. We also explore the possibility in patching the system with some modifications.", "One of the key problems in RFID is security and privacy. The implementation of authentication protocols is a flexible and effective way to solve this problem. This letter proposes a new ultralightweight RFID authentication protocol with permutation (RAPP). RAPP avoids using unbalanced OR and AND operations and introduces a new operation named permutation. The tags only involve three operations: bitwise XOR, left rotation and permutation. In addition, unlike other existing ultralightweight protocols, the last messages exchanged in RAPP are sent by the reader so as to resist de-synchronization attacks. Security analysis shows that RAPP achieves the functionalities of the authentication protocol and is resistant to various attacks. Performance evaluation illustrates that RAPP uses fewer resources on tags in terms of computation operation, storage requirement and communication cost.", "RAPP (RFID Authentication Protocol with Permutation) is a recently proposed and efficient ultralightweight authentication protocol. Although it maintains the structure of the other existing ultralightweight protocols, the operation used in it is totally different due to the use of new introduced data dependent permutations and avoidance of modular arithmetic operations and biased logical operations such as AND and OR. The designers of RAPP claimed that this protocol resists against desynchronization attacks since the last messages of the protocol is sent by the reader and not by the tag. This letter challenges this assumption and shows that RAPP is vulnerable against desynchronization attack. This attack has a reasonable probability of success and is effective whether Hamming weight-based or modular-based rotations are used by the protocol." ] }
1907.11322
2966078947
As objects increasingly connect to the Internet and enter human life, security and privacy have become important concerns. To enhance security and privacy on the Internet, many security protocols have been developed. Unfortunately, the security analyses carried out on these protocols show that they are vulnerable to one or more attacks, which rules out their use. Therefore, the need for a secure protocol for the Internet of Things (IoT) has not yet been met. Recently, Khor and Sidorov cryptanalyzed the protocol and presented an improved version of it. In this paper, we first show that this improved protocol also lacks sufficient security and is therefore not recommended for use in any application. More precisely, we present a full secret disclosure attack against this protocol, which extracts all of the protocol's secrets using only two communications with the target tag. In addition, a recently proposed ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains is supported by formal and informal security proofs. However, we present a full secret disclosure attack against this protocol as well.
As a new and emerging technology, blockchain is believed to provide higher data protection, reliability, and transparency, and lower management costs compared to a conventional centralized database. Hence, it could be a promising solution for large-scale IoT systems. Targeting these benefits, Sidorov et al. recently proposed an ultralightweight mutual authentication RFID protocol for blockchain-enabled supply chains @cite_24. Although they claimed security against various attacks, we present an efficient secret disclosure attack against it. For the sake of simplicity, we call this protocol SOVNOKP.
{ "cite_N": [ "@cite_24" ], "mid": [ "2907112888", "2755191230", "2732131698", "1484207026" ], "abstract": [ "Previous research studies mostly focused on enhancing the security of radio frequency identification (RFID) protocols for various RFID applications that rely on a centralized database. However, blockchain technology is quickly emerging as a novel distributed and decentralized alternative that provides higher data protection, reliability, immutability, transparency, and lower management costs compared with a conventional centralized database. These properties make it extremely suitable for integration in a supply chain management system. In order to successfully fuse RFID and blockchain technologies together, a secure method of communication is required between the RFID tagged goods and the blockchain nodes. Therefore, this paper proposes a robust ultra-lightweight mutual authentication RFID protocol that works together with a decentralized database to create a secure blockchain-enabled supply chain management system. Detailed security analysis is performed to prove that the proposed protocol is secure from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks. In addition to that, a formal analysis is conducted using Gong, Needham, and Yahalom logic and automated validation of internet security protocols and applications tool to verify the security of the proposed protocol. The protocol is proven to be efficient with respect to storage, computational, and communication costs. In addition to that, a further step is taken to ensure the robustness of the protocol by analyzing the probability of data collision written to the blockchain.", "In recent years, RFID (radio-frequency identification) systems are widely used in many applications. One of the most important applications for this technology is the Internet of things (IoT). Therefore, researchers have proposed several authentication protocols that can be employed in RFID-based IoT systems, and they have claimed that their protocols can satisfy all security requirements of these systems. However, in RFID-based IoT systems we have mobile readers that can be compromised by the adversary. Due to this attack, the adversary can compromise a legitimate reader and obtain its secrets. So, the protocol designers must consider the security of their proposals even in the reader compromised scenario. In this paper, we consider the security of the ultra-lightweight RFID mutual authentication (ULRMAPC) protocol recently proposed by They claimed that their protocol could be applied in the IoT systems and provide strong security. However, in this paper we show that their protocol is vulnerable to denial of service, reader and tag impersonation and de-synchronization attacks. To provide a solution, we present a new authentication protocol, which is more secure than the ULRMAPC protocol and also can be employed in RFID-based IoT systems.", "Recently, Tewari and Gupta proposed a ultra-lightweight mutual authentication protocol in IoT environments for RFID tags. Their protocol aims to provide secure communication with least cost in both storage and computation. Unfortunately, in this paper, we exploit the vulnerability of this protocol. In this attack, an attacker can obtain the key shared between a back-end database server and a tag. 
We also explore the possibility of patching the system with some modifications.", "proposed a novel ultralightweight RFID mutual authentication protocol [1] that has recently been analyzed in several articles. In this letter, we first propose a desynchronization attack that succeeds with probability almost 1, which improves upon the 0.25 given in a previous analysis. We also show that the bad properties of the proposed permutation function can be exploited to disclose several bits of the tag's secret rather than just 1 bit, as previously shown, which increases the power of a traceability attack. Finally, we show how to extend the aforementioned attack to run a full disclosure attack, which requires eavesdropping on fewer protocol runs than the previously proposed attack, i.e., 192 << 2^30." ] }
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
-- Code quality is multi-faceted and covers, for instance, testability, maintainability, and readability. To examine these aspects, certain quality attributes (e.g., coupling, complexity, size) are quantified by underlying software metrics; for instance, McCabe's software complexity metrics measure readability aspects of the code @cite_13 . For object-oriented systems, a popular set of metrics is the CK suite introduced by Chidamber and Kemerer @cite_43 and the QMOOD metrics (Quality Model for Object-Oriented Design) @cite_15 . Many approaches employ such metrics suites to distinguish parts of the source code in terms of good, acceptable, or bad quality @cite_64 @cite_44 or to identify code smells (problematic properties and anti-patterns of the code) @cite_51 . We also build on object-oriented metrics and use threshold-based approaches to analyze code quality and smells (see ).
{ "cite_N": [ "@cite_64", "@cite_44", "@cite_43", "@cite_51", "@cite_15", "@cite_13" ], "mid": [ "2121866145", "2127623179", "2107643286", "2160538621" ], "abstract": [ "This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than \"traditional\" code metrics, which can only be collected at a later phase of the software development processes.", "There are lots of different software metrics discovered and used for defect prediction in the literature. Instead of dealing with so many metrics, it would be practical and easy if we could determine the set of metrics that are most important and focus on them more to predict defectiveness. We use Bayesian networks to determine the probabilistic influential relationships among software metrics and defect proneness. In addition to the metrics used in Promise data repository, we define two more metrics, i.e. NOD for the number of developers and LOCQ for the source code quality. We extract these metrics by inspecting the source code repositories of the selected Promise data repository data sets. At the end of our modeling, we learn the marginal defect proneness probability of the whole software system, the set of most effective metrics, and the influential relationships among metrics and defectiveness. Our experiments on nine open source Promise data repository data sets show that response for class (RFC), lines of code (LOC), and lack of coding quality (LOCQ) are the most effective metrics whereas coupling between objects (CBO), weighted method per class (WMC), and lack of cohesion of methods (LCOM) are less effective metrics on defect proneness. Furthermore, number of children (NOC) and depth of inheritance tree (DIT) have very limited effect and are untrustworthy. On the other hand, based on the experiments on Poi, Tomcat, and Xalan data sets, we observe that there is a positive correlation between the number of developers (NOD) and the level of defectiveness. However, further investigation involving a greater number of projects is needed to confirm our findings.", "To produce high quality object-oriented (OO) applications, a strong emphasis on design aspects, especially during the early phases of software development, is necessary. Design metrics play an important role in helping developers understand design aspects of software and, hence, improve software quality and developer productivity. 
In this paper, we provide empirical evidence supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects. Our results, based on industry data from software developed in two popular programming languages used in OO development, indicate that, even after controlling for the size of the software, these metrics are significantly associated with defects. In addition, we find that the effects of these metrics on defects vary across the samples from two programming languages-C++ and Java. We believe that these results have significant implications for designing high-quality software products using the OO approach.", "Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness." ] }
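The related-work paragraph of this record refers to threshold-based approaches that rate source code as good, acceptable, or bad using object-oriented metrics such as the CK suite. The Python sketch below illustrates the general idea only; the metric selection, the threshold values, and the three-level rating scheme are hypothetical assumptions rather than the thresholds used in the paper.

```python
# Illustrative sketch of threshold-based code-quality rating on CK-style
# object-oriented metrics. Threshold values are assumed for illustration;
# real tools derive them from benchmark corpora.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    wmc: int    # weighted methods per class
    cbo: int    # coupling between objects
    dit: int    # depth of inheritance tree
    loc: int    # lines of code

# (good_upper_bound, acceptable_upper_bound) per metric -- assumed values
THRESHOLDS = {
    "wmc": (20, 40),
    "cbo": (5, 10),
    "dit": (3, 6),
    "loc": (200, 500),
}

def rate(value: int, bounds: tuple[int, int]) -> str:
    """Map a metric value to a good/acceptable/bad rating."""
    good, acceptable = bounds
    if value <= good:
        return "good"
    return "acceptable" if value <= acceptable else "bad"

def assess(cls: ClassMetrics) -> dict[str, str]:
    """Rate each metric of a class; such ratings could then drive the
    generated explanation text of an exploranative document."""
    return {m: rate(getattr(cls, m), b) for m, b in THRESHOLDS.items()}

if __name__ == "__main__":
    example = ClassMetrics(name="OrderService", wmc=35, cbo=12, dit=2, loc=420)
    print(assess(example))
    # -> {'wmc': 'acceptable', 'cbo': 'bad', 'dit': 'good', 'loc': 'acceptable'}
```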
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
-- Visualizations embedded into the lines or paragraphs of a text are known as @cite_56, @cite_36, or graphics @cite_12. They allow a close and coherent integration of the textual and visual representations of data. Some approaches apply these in the context of software engineering and embed them into the code to assist developers in understanding a program. @cite_7 and Sulír @cite_45 suggest augmenting the source code with visualizations to keep track of the state and properties of the code. @cite_46 @cite_21 implement embedded visualizations for understanding program behavior and performance bottlenecks. Similarly, @cite_41 and @cite_33 augment source code with visualizations to aid the understanding of runtime behavior. We embed visualizations into natural language text (not into source code) to support a better understanding of the quality of the source code.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_36", "@cite_41", "@cite_21", "@cite_56", "@cite_45", "@cite_46", "@cite_12" ], "mid": [ "2796075754", "1995073788", "2898193680", "2108958711" ], "abstract": [ "Programmers must draw explicit connections between their code and runtime state to properly assess the correctness of their programs. However, debugging tools often decouple the program state from the source code and require explicitly invoked views to bridge the rift between program editing and program understanding. To unobtrusively reveal runtime behavior during both normal execution and debugging, we contribute techniques for visualizing program variables directly within the source code. We describe a design space and placement criteria for embedded visualizations. We evaluate our in situ visualizations in an editor for the Vega visualization grammar. Compared to a baseline development environment, novice Vega users improve their overall task grade by about 2 points when using the in situ visualizations and exhibit significant positive effects on their self-reported speed and accuracy.", "We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations.", "Abstract Source code written in textual programming languages is typically edited in integrated development environments (IDEs) or specialized code editors. These tools often display various visual items, such as icons, color highlights or more advanced graphical overlays directly in the main editable source code view. We call such visualizations source code editor augmentation. In this paper, we present a first systematic mapping study of source code editor augmentation tools and approaches. We manually reviewed the metadata of 5553 articles published during the last twenty years in two phases – keyword search and references search. The result is a list of 103 relevant articles and a taxonomy of source code editor augmentation tools with seven dimensions, which we used to categorize the resulting list of the surveyed articles. 
We also provide the definition of the term source code editor augmentation, along with a brief overview of historical development and augmentations available in current industrial IDEs.", "Finding and fixing performance bottlenecks requires sound knowledge of the program that is to be optimized. In this paper, we propose an approach for presenting performance-related information to software engineers by visually augmenting source code shown in an editor. Small diagrams at each method declaration and method call visualize the propagation of runtime consumption through the program as well as the interplay of threads in parallelized programs. Advantages of in situ visualization like this over traditional representations, where code and profiling information are shown in different places, promise to be the prevention of a split-attention effect caused by multiple views; information is presented where required, which supports understanding and navigation. We implemented the approach as an IDE plug-in and tested it in a user study with four developers improving the performance of their own programs. The user study provides insights into the process of understanding performance bottlenecks with our approach." ] }
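The related-work paragraph of this record surveys word-scale visualizations, i.e., sparkline-like graphics embedded directly in text. As a rough illustration of the mechanism rather than any cited system, the Python sketch below emits a word-sized inline SVG sparkline that can be placed next to a word in an HTML document; the dimensions, styling, and example data are arbitrary assumptions.

```python
# Minimal sketch: generate a word-sized inline SVG sparkline that can sit
# next to a word in HTML text. Sizes and styling are arbitrary choices.

def sparkline_svg(values, width=60, height=12, stroke="#444"):
    """Return an inline <svg> element drawing `values` as a tiny line chart."""
    if len(values) < 2:
        raise ValueError("need at least two values")
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                      # avoid division by zero for flat series
    step = width / (len(values) - 1)
    points = " ".join(
        f"{i * step:.1f},{height - (v - lo) / span * height:.1f}"
        for i, v in enumerate(values)
    )
    return (
        f'<svg width="{width}" height="{height}" style="vertical-align: middle;">'
        f'<polyline points="{points}" fill="none" stroke="{stroke}"/></svg>'
    )

# Usage: embed the graphic right after the word it annotates (data is made up).
commits_per_week = [3, 5, 2, 8, 6, 9, 4]
html = f"Class <code>OrderService</code> {sparkline_svg(commits_per_week)} changed frequently."
print(html)
```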