Dataset schema (field: type, observed value lengths):

aid: string, lengths 9 to 15
mid: string, lengths 7 to 10
abstract: string, lengths 78 to 2.56k
related_work: string, lengths 92 to 1.77k
ref_abstract: dict
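Each record below follows this schema: an arXiv id (aid), a Microsoft Academic Graph id (mid), the paper's abstract, one paragraph of its related-work section, and a ref_abstract dict mapping that paragraph's citation markers (cite_N) to the MAG ids and abstracts of the cited papers. As a minimal sketch of how such records might be read, assuming a JSON-lines file named records.jsonl (an assumption for illustration, not the dataset's actual distribution format):

```python
# Minimal sketch of reading records with this schema. The file name
# "records.jsonl" and the JSON-lines layout are assumptions, not the
# dataset's actual distribution format.
import json

with open("records.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["aid"], record["mid"])      # e.g. "1907.11481", "2966777976"
        print(record["abstract"][:80])           # abstract of the citing paper
        print(record["related_work"][:80])       # one related-work paragraph
        refs = record["ref_abstract"]            # citation markers -> MAG ids, abstracts
        for marker, mid in zip(refs["cite_N"], refs["mid"]):
            print(marker, mid)
        break
```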
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
The interactive linking of text and visualizations has only been explored to some extent. Beck and Weiskopf @cite_36 propose an abstract interaction model for documents containing text, word-sized graphics, and regular visualizations; all three types of data representation are linked via brushing-and-linking. @cite_8 describe an authoring solution for web documents that produces some of those interactions. Our interaction model also uses and extends the model by Beck and Weiskopf. @cite_57 advocate linking to facilitate document reading; in their approach, linking is supported between text in the main body and text in tables. Few of the systems that generate both text and visualization---for instance, @cite_31 and @cite_28 ---discuss interactions, and even those focus more on explanations and offer limited data exploration. @cite_27 , in contrast, focuses more on interactions and supports the data exploration process by offering short descriptions of key findings in the data; however, it does not generate a comprehensive report with longer descriptions.
{ "cite_N": [ "@cite_8", "@cite_36", "@cite_28", "@cite_57", "@cite_27", "@cite_31" ], "mid": [ "2590534375", "137863291", "2012118336", "2166261239" ], "abstract": [ "Generating visualizations at the size of a word creates dense information representations often called sparklines . The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call the visualization community to be a pioneer in exploring new visualization-enriched and interactive publication formats.", "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. 
We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.", "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.", "We present the design and evaluation of FI3D, a direct-touch data exploration technique for 3D visualization spaces. The exploration of three-dimensional data is core to many tasks and domains involving scientific visualizations. Thus, effective data navigation techniques are essential to enable comprehension, understanding, and analysis of the information space. While evidence exists that touch can provide higher-bandwidth input, somesthetic information that is valuable when interacting with virtual worlds, and awareness when working in collaboration, scientific data exploration in 3D poses unique challenges to the development of effective data manipulations. We present a technique that provides touch interaction with 3D scientific data spaces in 7 DOF. This interaction does not require the presence of dedicated objects to constrain the mapping, a design decision important for many scientific datasets such as particle simulations in astronomy or physics. We report on an evaluation that compares the technique to conventional mouse-based interaction. Our results show that touch interaction is competitive in interaction speed for translation and integrated interaction, is easy to learn and use, and is preferred for exploration and wayfinding tasks. To further explore the applicability of our basic technique for other types of scientific visualizations we present a second case study, adjusting the interaction to the illustrative visualization of fiber tracts of the brain and the manipulation of cutting planes in this context." ] }
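The record above describes template-based natural language generation that turns software-metric values into explanatory text. A minimal Python sketch of that general idea follows; the template wording, metric names, and threshold values are invented for illustration and are not the paper's actual templates.

```python
# Invented, illustrative templates and thresholds; the paper's actual
# template set and metric thresholds are not reproduced here.
def describe_metric(name: str, value: float, threshold: float) -> str:
    """Render one software metric as an explanatory sentence."""
    judgement = "exceeds" if value > threshold else "stays within"
    return (f"The class has a {name} of {value:.1f}, which {judgement} "
            f"the recommended threshold of {threshold:.1f}.")

metrics = {
    "weighted methods per class": (24.0, 20.0),  # (measured value, threshold)
    "coupling between objects": (3.0, 5.0),
}
report = " ".join(describe_metric(n, v, t) for n, (v, t) in metrics.items())
print(report)
```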
1907.11481
2966777976
Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.
In summary, although existing approaches present source code information, they fall short in putting the data into context and providing explanations. None of these systems, including those outside the software engineering community, supports exploranation as a process that blends explanations and explorations in the way we envision, leveraging the interactive combination of textual and visual descriptions. We are inspired by the abstract idea of interactively linking text and visualizations by Beck and Weiskopf @cite_36 to support exploranation. We adopt the CK, QMOOD, and McCabe metrics (listed in Table ) and use them in combination with pre-defined thresholds to analyze and present source code quality.
{ "cite_N": [ "@cite_36" ], "mid": [ "2108958711", "2786674049", "2018844270", "2964024144" ], "abstract": [ "Finding and fixing performance bottlenecks requires sound knowledge of the program that is to be optimized. In this paper, we propose an approach for presenting performance-related information to software engineers by visually augmenting source code shown in an editor. Small diagrams at each method declaration and method call visualize the propagation of runtime consumption through the program as well as the interplay of threads in parallelized programs. Advantages of in situ visualization like this over traditional representations, where code and profiling information are shown in different places, promise to be the prevention of a split-attention effect caused by multiple views; information is presented where required, which supports understanding and navigation. We implemented the approach as an IDE plug-in and tested it in a user study with four developers improving the performance of their own programs. The user study provides insights into the process of understanding performance bottlenecks with our approach.", "In this paper, we present the results of long-term research conducted in order to study the contribution made by software models based on the Unified Modeling Language (UML) to the comprehensibility of Java source-code deprived of comments. We have conducted 12 controlled experiments in different experimental contexts and on different sites with participants with different levels of expertise (i.e., Bachelor’s, Master’s, and PhD students and software practitioners from Italy and Spain). A total of 333 observations were obtained from these experiments. The UML models in our experiments were those produced in the analysis and design phases. The models produced in the analysis phase were created with the objective of abstracting the environment in which the software will work (i.e., the problem domain), while those produced in the design phase were created with the goal of abstracting implementation aspects of the software (i.e., the solution application domain). Source-code comprehensibility was assessed with regard to correctness of understanding, time taken to accomplish the comprehension tasks, and efficiency as regards accomplishing those tasks. In order to study the global effect of UML models on source-code comprehensibility, we aggregated results from the individual experiments using a meta-analysis. We made every effort to account for the heterogeneity of our experiments when aggregating the results obtained from them. The overall results suggest that the use of UML models affects the comprehensibility of source-code, when it is deprived of comments. Indeed, models produced in the analysis phase might reduce source-code comprehensibility, while increasing the time taken to complete comprehension tasks. That is, browsing source code and this kind of models together negatively impacts on the time taken to complete comprehension tasks without having a positive effect on the comprehensibility of source code. One plausible justification for this is that the UML models produced in the analysis phase focus on the problem domain. That is, models produced in the analysis phase say nothing about source code and there should be no expectation that they would, in any way, be beneficial to comprehensibility. On the other hand, UML models produced in the design phase improve source-code comprehensibility. 
One possible justification for this result is that models produced in the design phase are more focused on implementation details. Therefore, although the participants had more material to read and browse, this additional effort was paid back in the form of an improved comprehension of source code.", "During software evolution a developer must investigate source code to locate then understand the entities that must be modified to complete a change task. To help developers in this task, proposed text summarization based approaches to the automatic generation of class and method summaries, and via a study of four developers, they evaluated source code summaries generated using their techniques. In this paper we propose a new topic modeling based approach to source code summarization, and via a study of 14 developers, we evaluate source code summaries generated using the proposed technique. Our study partially replicates the original study by in that it uses the objects, the instruments, and a subset of the summaries from the original study, but it also expands the original study in that it includes more subjects and new summaries. The results of our study both support the findings of the original and provide new insights into the processes and criteria that developers use to evaluate source code summaries. Based on our results, we suggest future directions for research on source code summarization.", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions." ] }
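The related-work paragraph above mentions combining CK, QMOOD, and McCabe's metrics with pre-defined thresholds to assess code quality. The sketch below illustrates one such check with a simplified McCabe cyclomatic complexity computed over a Python AST; the threshold of 10 is McCabe's commonly cited guideline, not necessarily the value the paper uses.

```python
# Simplified McCabe cyclomatic complexity over a Python AST; counting
# each BoolOp node once slightly undercounts chained and/or expressions.
import ast

def cyclomatic_complexity(source: str) -> int:
    """Decision points + 1 (simplified McCabe metric)."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While,
                                      ast.BoolOp, ast.ExceptHandler))
                    for node in ast.walk(tree))
    return decisions + 1

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(3):
        if x % 2 == 0 and i > 0:
            return "even-ish"
    return "odd-ish"
"""
cc = cyclomatic_complexity(snippet)
print(f"complexity {cc}:", "flagged" if cc > 10 else "within threshold")
```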
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks which can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses equivalent single-path, multi-path, and deeper single-path networks on classification and pixel-labeling tasks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Using deep networks to generalize across a wide range of datasets is common practice. Although depth influences the performance of a neural network, training such networks is difficult, and the general intelligence of this kind of network is questionable @cite_23 . Hence, more attention is being paid to generally intelligent neural networks. One such approach is to harvest more information within each layer of a neural network. Capsule networks @cite_9 @cite_19 extract information about pose and orientation: instead of convolutional scalars, layer outputs are vectors and matrices.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_23" ], "mid": [ "2593110912", "2953324412", "2775143585", "2300779272" ], "abstract": [ "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.", "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.", "In recent years, convolutional neural networks (CNN) have played an important role in the field of deep learning. Variants of CNN's have proven to be very successful in classification tasks across different domains. However, there are two big drawbacks to CNN's: their failure to take into account of important spatial hierarchies between features, and their lack of rotational invariance. As long as certain key features of an object are present in the test data, CNN's classify the test data as the object, disregarding features' relative spatial orientation to each other. This causes false positives. The lack of rotational invariance in CNN's would cause the network to incorrectly assign the object another label, causing false negatives. To address this concern, propose a novel type of neural network using the concept of capsules in a recent paper. 
With the use of dynamic routing and reconstruction regularization, the capsule network model would be both rotation invariant and spatially aware. The capsule network has shown its potential by achieving a state-of-the-art result of 0.25 test error on MNIST without data augmentation such as rotation and scaling, better than the previous baseline of 0.39 . To further test out the application of capsule networks on data with higher dimensionality, we attempt to find the best set of configurations that yield the optimal test error on CIFAR10 dataset.", "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation." ] }
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks which can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses equivalent single-path, multi-path, and deeper single-path networks on classification and pixel-labeling tasks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Multi-task learning (MTL) in computer vision commonly refers to processing multiple different tasks on a single input (e.g., semantic segmentation and surface-normal prediction). Conventional approaches to MTL include using shared layers to some extent along with task-specific layers; choosing the number of task-specific and shared layers is task-dependent. Recent approaches, however, avoid choosing among the possible combinations by letting the model learn the use of shared and task-specific layers according to the task. Cross-stitch networks @cite_44 and sluice networks @cite_10 share resources between parallel networks, where communication between parallel layers is realized by learning a linear combination of parallel tensors. NDDR-CNN @cite_20 uses discriminative dimensionality reduction to fuse features from parallel tensors.
{ "cite_N": [ "@cite_44", "@cite_10", "@cite_20" ], "mid": [ "2900964459", "2966182616", "2963704251", "2186054958" ], "abstract": [ "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL.", "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.", "In multi-task learning (MTL), multiple tasks are learnt jointly. A major assumption for this paradigm is that all those tasks are indeed related so that the joint training is appropriate and beneficial. In this paper, we study the problem of multi-task learning of shared feature representations among tasks, while simultaneously determining \"with whom\" each task should share. We formulate the problem as a mixed integer programming and provide an alternating minimization technique to solve the optimization problem of jointly identifying grouping structures and parameters. The algorithm mono-tonically decreases the objective function and converges to a local optimum. 
Compared to the standard MTL paradigm where all tasks are in a single group, our algorithm improves its performance with statistical significance for three out of the four datasets we have studied. We also demonstrate its advantage over other task grouping techniques investigated in literature." ] }
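The cross-stitch mechanism referenced above communicates between parallel task networks through a learned linear combination of parallel tensors. A minimal NumPy sketch of one cross-stitch unit follows, assuming a 2x2 mixing matrix initialised near the identity; in practice the alphas would be trainable parameters of the network.

```python
# NumPy sketch of a cross-stitch unit: activations of two parallel layers
# are mixed through a learned 2x2 linear combination.
import numpy as np

alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])  # learned mixing weights, initialised near identity

def cross_stitch(x_a, x_b):
    """Mix the activations of two parallel task-specific layers."""
    out_a = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    out_b = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return out_a, out_b

a, b = np.ones((2, 4)), np.zeros((2, 4))
mixed_a, mixed_b = cross_stitch(a, b)
print(mixed_a[0])  # path A now carries 0.9*A + 0.1*B
```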
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks which can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses equivalent single-path, multi-path, and deeper single-path networks on classification and pixel-labeling tasks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Lifelong learning involves learning from multiple datasets one after another. Conventional approaches include fine-tuning @cite_12 and feature extraction @cite_14 , which suffer from catastrophic forgetting. Rebuffi et al. @cite_7 introduced incremental classification, as opposed to batch training, to overcome catastrophic forgetting. Learning without Forgetting (LwF) @cite_47 and Elastic Weight Consolidation (EWC) @cite_39 also address this issue by modifying the objective function. In contrast, the PackNet @cite_21 and Piggyback @cite_38 methods apply binary masks to dense weight filters once trained on a dataset in order to free up the least-used weights for learning the next dataset. However, these approaches need larger filters depending on the number of datasets to be trained sequentially.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_7", "@cite_21", "@cite_39", "@cite_47", "@cite_12" ], "mid": [ "2786498526", "2610828655", "2787358858", "2470456807" ], "abstract": [ "In this paper, we address the incremental classifier learning problem, which suffers from catastrophic forgetting. The main reason for catastrophic forgetting is that the past data are not available during learning. Typical approaches keep some exemplars for the past classes and use distillation regularization to retain the classification capability on the past classes and balance the past and new classes. However, there are four main problems with these approaches. First, the loss function is not efficient for classification. Second, there is unbalance problem between the past and new classes. Third, the size of pre-decided exemplars is usually limited and they might not be distinguishable from unseen new classes. Forth, the exemplars may not be allowed to be kept for a long time due to privacy regulations. To address these problems, we propose (a) a new loss function to combine the cross-entropy loss and distillation loss, (b) a simple way to estimate and remove the unbalance between the old and new classes , and (c) using Generative Adversarial Networks (GANs) to generate historical data and select representative exemplars during generation. We believe that the data generated by GANs have much less privacy issues than real images because GANs do not directly copy any real image patches. We evaluate the proposed method on CIFAR-100, Flower-102, and MS-Celeb-1M-Base datasets and extensive experiments demonstrate the effectiveness of our method.", "Multi-class supervised learning systems require the knowledge of the entire range of labels they predict. Often when learnt incrementally, they suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either forces a part of the machine to not learn, or to retrain the machine again with a selection of the historic data. While these hacks work to various degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data along with appropriate targets from past distributions. We call this technique, phantom sampling.We show that phantom sampling helps avoid catastrophic forgetting during incremental learning. Using an implementation based on deep neural networks, we demonstrate that phantom sampling dramatically avoids catastrophic forgetting. We apply these strategies to competitive multi-class incremental learning of deep neural networks. Using various benchmark datasets and through our strategy, we demonstrate that strict incremental learning could be achieved. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.", "In recent years, Convolutional Neural Networks (CNNs) have shown remarkable performance in many computer vision tasks such as object recognition and detection. However, complex training issues, such as \"catastrophic forgetting\" and hyper-parameter tuning, make incremental learning in CNNs a difficult challenge. 
In this paper, we propose a hierarchical deep neural network, with CNNs at multiple levels, and a corresponding training method for lifelong learning. The network grows in a tree-like manner to accommodate the new classes of data without losing the ability to identify the previously trained classes. The proposed network was tested on CIFAR-10 and CIFAR-100 datasets, and compared against the method of fine tuning specific layers of a conventional CNN. We obtained comparable accuracies and achieved 40 and 20 reduction in training effort in CIFAR-10 and CIFAR 100 respectively. The network was able to organize the incoming classes of data into feature-driven super-classes. Our model improves upon existing hierarchical CNN models by adding the capability of self-growth and also yields important observations on feature selective classification.", "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin." ] }
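The PackNet-style masking described above frees the least-used weights of a trained network so they can learn the next dataset. A hedged sketch of magnitude-based mask construction follows; the 50% pruning fraction is an arbitrary illustrative choice, and real PackNet prunes per layer and retrains after pruning.

```python
# Hedged sketch of magnitude-based binary masking in the PackNet spirit.
import numpy as np

def keep_mask(weights: np.ndarray, prune_fraction: float = 0.5) -> np.ndarray:
    """True where weights are kept (frozen) for the already-learned task."""
    threshold = np.quantile(np.abs(weights), prune_fraction)
    return np.abs(weights) >= threshold

w = np.random.randn(4, 4)
kept = keep_mask(w)
trainable_next = ~kept  # freed, low-magnitude positions may learn the next dataset
print(int(kept.sum()), "weights frozen;", int(trainable_next.sum()), "freed")
```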
1907.11519
2966168818
Making a single network effectively address diverse contexts---learning the variations within a dataset or multiple datasets---is an intriguing step towards achieving generalized intelligence. Existing approaches of deepening, widening, and assembling networks are not cost-effective in general. In view of this, networks which can allocate resources according to the context of the input and regulate the flow of information across the network are effective. In this paper, we present Context-Aware Multipath Network (CAMNet), a multi-path neural network with data-dependent routing between parallel tensors. We show that our model performs as a generalized model capturing variations in individual datasets and multiple different datasets, both simultaneously and sequentially. CAMNet surpasses equivalent single-path, multi-path, and deeper single-path networks on classification and pixel-labeling tasks, considering datasets individually, sequentially, and in combination. The data-dependent routing between tensors in CAMNet enables the model to control the flow of information end-to-end, deciding which resources should be common or domain-specific.
Approaches that gradually build customized networks according to the input also inspire our research. ConvNet-AIG @cite_33 and BlockDrop @cite_13 choose residual blocks in a data-dependent manner, as alternatives to conventional Residual Networks @cite_31 ; they learn which residual blocks to execute according to the nature of the input.
{ "cite_N": [ "@cite_31", "@cite_13", "@cite_33" ], "mid": [ "2770042371", "2962944050", "2565538933", "2962949867" ], "abstract": [ "Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20 on average, going as high as 36 for some images, while maintaining the same 76.4 top-1 accuracy on ImageNet.", "Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves a speedup of 20 on average, going as high as 36 for some images, while maintaining the same 76.4 top-1 accuracy on ImageNet.", "An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. @PARASPLIT In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. 
@PARASPLIT Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.", "It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for example, because the local inversion is ill-conditioned, we overcome this by providing an explicit inverse. An analysis of i-RevNet’s learned representations suggests an explanation of the good accuracy by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural images representations." ] }
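The gating idea described above executes or skips residual blocks depending on the input. The sketch below uses a hard threshold gate for clarity; the cited methods instead learn the gate (Gumbel-softmax in ConvNet-AIG, a reinforcement-learned policy in BlockDrop).

```python
# Illustrative data-dependent block execution; the hard threshold gate is a
# stand-in for the learned policies of the cited methods.
import numpy as np

def residual_block(x):
    return np.tanh(x)  # stand-in for a conv-BN-ReLU stack

def gate(x) -> bool:
    """Decide from the input itself whether this block should execute."""
    return float(np.mean(np.abs(x))) > 0.5

def gated_residual(x):
    return x + residual_block(x) if gate(x) else x  # skipping = identity

x = np.random.randn(8)
print(gated_residual(x))
```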
1907.11416
2966213978
In this article, we study the generalized liar's dominating set problem in graphs. Let @math be a simple undirected graph. The generalized liar's dominating set, called the distance- @math @math -liar's dominating set, is a subset @math such that (i) each vertex in @math is distance- @math dominated by at least @math vertices in @math , and (ii) each pair of distinct vertices in @math is distance- @math dominated by at least @math vertices in @math , where @math are positive integers and @math . Here, a vertex @math is distance- @math dominated by another vertex @math if the shortest path distance between @math and @math is at most @math in @math . We first consider the distance-1 @math -liar's dominating set problem and prove that it is NP-complete. Next, we consider the distance- @math @math -liar's dominating set problem and show that it is also NP-complete. These problems generalize the liar's dominating set problem, as only the distance- @math @math -liar's dominating set problem has been studied in the literature. We also prove that (i) the distance-1 @math -liar's dominating set problem cannot be approximated within a factor of @math for any @math , unless NP @math DTIME @math , and (ii) the distance- @math @math -liar's dominating set problem cannot be approximated within a factor of @math for any @math , unless NP @math DTIME @math .
@cite_0 studied the approximability of the problem in general graphs and gave an @math -factor approximation algorithm, where @math is the maximum degree of the given graph. Panda and Paul @cite_9 also considered the problem for proper interval graphs and proposed a linear-time algorithm. They further studied the minimum distance- @math @math -LDS problem for bounded-degree graphs and @math -claw-free graphs @cite_0 . Sterling @cite_8 presented bounds on the liar's domination number for two-dimensional grid graphs.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_8" ], "mid": [ "1993437988", "2001663593", "2080619861", "1564010364" ], "abstract": [ "Let G=(V,E) be a graph without isolated vertices and having at least 3 vertices. A set L@?V(G) is a liar@?s dominating set if (1) |N\"G[v]@?L|>=2 for all v@?V(G), and (2) |(N\"G[u]@?N\"G[v])@?L|>=3 for every pair u,v@?V(G) of distinct vertices in G, where N\"G[x]= y@?V|xy@?E @? x is the closed neighborhood of x in G. Given a graph G and a positive integer k, the liar@?s domination problem is to check whether G has a liar@?s dominating set of size at most k. The liar@?s domination problem is known to be NP-complete for general graphs. In this paper, we propose a linear time algorithm for computing a minimum cardinality liar@?s dominating set in a proper interval graph. We also strengthen the NP-completeness result of liar@?s domination problem for general graphs by proving that the problem remains NP-complete even for undirected path graphs which is a super class of proper interval graphs.", "par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .", "A subset L ? V of a graph G = ( V , E ) is called a liar's dominating set of G if (i) | N G u ] ? L | ? 2 for every vertex u ? V , and (ii) | ( N G u ] ? N G v ] ) ? L | ? 3 for every pair of distinct vertices u , v ? V . The Min Liar Dom Set problem is to find a liar's dominating set of minimum cardinality of a given graph G and the Decide Liar Dom Set problem is the decision version of the Min Liar Dom Set problem. The Decide Liar Dom Set problem is known to be NP-complete for general graphs. 
In this paper, we first present approximation algorithms and hardness of approximation results of the Min Liar Dom Set problem in general graphs, bounded degree graphs, and p-claw free graphs. We then show that the Decide Liar Dom Set problem is NP-complete for doubly chordal graphs and propose a linear time algorithm for computing a minimum liar's dominating set in block graphs.", "Thorup and Zwick showed that for any integer k≥ 1, it is possible to preprocess any positively weighted undirected graph G=(V,E), with |E|=m and |V|=n, in O(kmn @math ) expected time and construct a data structure (a (2k–1)-approximate distance oracle) of size O(kn @math ) capable of returning in O(k) time an approximation @math of the distance δ(u,v) from u to v in G that satisfies @math , for any two vertices u,v∈ V. They also presented a much slower O(kmn) time deterministic algorithm for constructing approximate distance oracle with the slightly larger size of O(kn @math log n). We present here a deterministic O(kmn @math ) time algorithm for constructing oracles of size O(kn @math ). Our deterministic algorithm is slower than the randomized one by only a logarithmic factor. Using our derandomization technique we also obtain the first deterministic linear time algorithm for constructing optimal spanners of weighted graphs. We do that by derandomizing the O(km) expected time algorithm of Baswana and Sen (ICALP’03) for constructing (2k–1)-spanners of size O(kn @math ) of weighted undirected graphs without incurring any asymptotic loss in the running time or in the size of the spanners produced." ] }
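The definition in the abstract above can be checked directly with breadth-first-search distances. The sketch below assumes the thresholds follow the classical pattern (at least m dominators per vertex and m+1 per pair; the standard liar's dominating set is the case k=1, m=2); the @math placeholders in the record hide the paper's exact constants, so this is an assumption.

```python
# Brute-force verification of the two distance-k liar's domination
# conditions, under the assumed thresholds m (vertices) and m+1 (pairs).
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_distance_k_liars_dominating_set(adj, L, k, m):
    dist = {v: bfs_distances(adj, v) for v in adj}
    # dominators[v]: vertices of L within distance k of v
    dominators = {v: {u for u in L if dist[v].get(u, k + 1) <= k} for v in adj}
    if any(len(dominators[v]) < m for v in adj):              # condition (i)
        return False
    return all(len(dominators[u] | dominators[v]) >= m + 1    # condition (ii)
               for u, v in combinations(adj, 2))

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(is_distance_k_liars_dominating_set(cycle5, L={0, 1, 2, 3}, k=1, m=2))  # True
```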
1907.11569
2965957874
Research on neural networks has gained significant momentum over the past few years. A plethora of neural networks is currently being trained on available data in research as well as in industry. Because training is a resource-intensive process and training data cannot always be made available to everyone, there has been a recent trend to attempt to re-use already-trained neural networks. As such, neural networks themselves have become research data. In this paper, we present the Neural Network Ontology, an ontology to make neural networks findable, accessible, interoperable and reusable as suggested by the well-established FAIR guiding principles for scientific data management and stewardship. We created the new FAIRnets Dataset that comprises about 2,000 neural networks openly accessible on the internet and uses the Neural Network Ontology to semantically annotate and represent the neural networks. For each of the neural networks in the FAIRnets Dataset, the relevant properties according to the Neural Network Ontology such as the description and the architecture are stored. Ultimately, the FAIRnets Dataset can be queried with a set of desired properties and responds with a set of neural networks that have these properties. We provide the service FAIRnets Search which is implemented on top of a SPARQL endpoint and allows for querying, searching and finding trained neural networks annotated with the Neural Network Ontology. The service is demonstrated by a browser-based frontend to the SPARQL endpoint.
In recent years, neural networks have been applied as a machine learning method to improve ontologies. They are used to align @cite_1 @cite_6 @cite_2 , match @cite_13 @cite_21 , or map @cite_4 @cite_19 ontologies. Furthermore, ontologies have been combined with neural networks to solve various problems @cite_18 @cite_15 . However, there is no standard ontology to describe neural networks. There exists an ontology that focuses on the description of weights @cite_17 , but it does not fulfill the Linked Data principles.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "2798986185", "2963857521", "2802636049", "2526782364" ], "abstract": [ "Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of @math . We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as @math . Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .", "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "Abstract Neural networks are applicable in many solutions for classification, prediction, control, etc. The variety of purposes is growing but with each new application the expectations are higher. 
We want neural networks to be more precise independently of the input data. Efficiency of the processing in a large manner depends on the training algorithm. Basically this procedure is based on the random selection of weights in which neurons connections are burdened. During training process we implement a method which involves modification of the weights to minimize the response error of the entire structure. Training continues until the minimum error value is reached — however in general the smaller it is, the time of weight modification is longer. Another problem is that training with the same set of data can cause different training times depending on the initial weight selection. To overcome arising problems we need a method that will boost the procedure and support final precision. In this article, we propose the use of multi-threading mechanism to minimize training time by rejecting unnecessary weights selection. In the mechanism we use a multi-core solution to select the best weights between all parallel trained networks. Proposed solution was tested for three types of neural networks (classic, sparking and convolutional) using sample classification problems. The results have shown positive aspects of the proposed idea: shorter training time and better efficiency in various tasks.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method." ] }
1907.11569
2965957874
Research on neural networks has gained significant momentum over the past few years. A plethora of neural networks is currently being trained on available data in research as well as in industry. Because training is a resource-intensive process and training data cannot always be made available to everyone, there has been a recent trend to attempt to re-use already-trained neural networks. As such, neural networks themselves have become research data. In this paper, we present the Neural Network Ontology, an ontology to make neural networks findable, accessible, interoperable and reusable as suggested by the well-established FAIR guiding principles for scientific data management and stewardship. We created the new FAIRnets Dataset that comprises about 2,000 neural networks openly accessible on the internet and uses the Neural Network Ontology to semantically annotate and represent the neural networks. For each of the neural networks in the FAIRnets Dataset, the relevant properties according to the Neural Network Ontology such as the description and the architecture are stored. Ultimately, the FAIRnets Dataset can be queried with a set of desired properties and responds with a set of neural networks that have these properties. We provide the service FAIRnets Search which is implemented on top of a SPARQL endpoint and allows for querying, searching and finding trained neural networks annotated with the Neural Network Ontology. The service is demonstrated by a browser-based frontend to the SPARQL endpoint.
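As a concrete illustration of the kind of service described above, the following is a hypothetical sketch of querying a FAIRnets-style SPARQL endpoint with the SPARQLWrapper library. The endpoint URL, prefix, and property names are illustrative assumptions, not the actual FAIRnets vocabulary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URL and ontology prefix -- both are assumptions.
sparql = SPARQLWrapper("https://example.org/fairnets/sparql")
sparql.setQuery("""
    PREFIX nno: <https://example.org/neural-network-ontology#>
    SELECT ?net ?description WHERE {
        ?net a nno:NeuralNetwork ;
             nno:description ?description .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Print each matching neural network and its description.
for row in results["results"]["bindings"]:
    print(row["net"]["value"], "-", row["description"]["value"])
```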
The paper 'Model Cards for Model Reporting' @cite_20 suggests relevant information about neural networks that should be considered when storing information about them. Information such as the description, the date of the last modification, links to papers or other resources, as well as the intended purpose of a neural network, is taken into account. Storing this information makes neural networks more transparent.
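To make the idea tangible, here is an illustrative sketch of such a metadata record. The field names and values are our own, not a schema from the cited paper.

```python
# Hypothetical model-card-style record; all fields are illustrative.
model_card = {
    "name": "example-classifier",
    "description": "CNN trained on a public image dataset.",
    "last_modified": "2019-07-01",
    "references": ["https://example.org/paper"],  # placeholder link
    "intended_use": "Research benchmarking only, not production deployment.",
}
print(model_card["intended_use"])
```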
{ "cite_N": [ "@cite_20" ], "mid": [ "2951055356", "2540831494", "2610935556", "2943925420" ], "abstract": [ "Many deployed learned models are black boxes: given input, returns output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks -- we show that the revealed internal information helps generate more effective adversarial examples against the black box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white box and black box models.", "Distributed representation learned with neural networks has recently shown to be effective in modeling natural languages at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to help model larger spans of text, e.g., documents, is intriguing, and further investigation would still be desirable. This paper aims to enhance neural network models for such a purpose. A typical problem of document-level modeling is automatic summarization, which aims to model documents in order to generate summaries. In this paper, we propose neural models to train computers not just to pay attention to specific regions and content of input documents with attention models, but also distract them to traverse between different content of a document so as to better grasp the overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve the state-of-the-art performance, and they significantly benefit from the distraction modeling, particularly when input documents are long.", "Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available. Hence, in this paper, we propose to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources (e.g., click data). To this aim, we use the output of an unsupervised ranking model, such as BM25, as a weak supervision signal. We further train a set of simple yet effective ranking models based on feed-forward neural networks. We study their effectiveness under various learning scenarios (point-wise and pair-wise models) and using different input representations (i.e., from encoding query-document pairs into dense sparse vectors to using word embedding representation). We train our networks using tens of millions of training instances and evaluate it on two standard collections: a homogeneous news collection (Robust) and a heterogeneous large-scale web collection (ClueWeb). 
Our experiments indicate that employing proper objective functions and letting the networks to learn the input representation based on weakly supervised data leads to impressive performance, with over 13 and 35 MAP improvements over the BM25 model on the Robust and the ClueWeb collections. Our findings also suggest that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models.", "Abstract This paper presents an efficient technique to reduce the inference cost of deep and or wide convolutional neural network models by pruning redundant features (or filters). Previous studies have shown that over-sized deep neural network models tend to produce a lot of redundant features that are either shifted version of one another or are very similar and show little or no variations, thus resulting in filtering redundancy. We propose to prune these redundant features along with their related feature maps according to their relative cosine distances in the feature space, thus leading to smaller networks with reduced post-training inference computational costs and competitive performance. We empirically show on select models (VGG-16, ResNet-56, ResNet-110, and ResNet-34) and dataset (MNIST Handwritten digits, CIFAR-10, and ImageNet) that inference costs (in FLOPS) can be significantly reduced while overall performance is still competitive with the state-of-the-art." ] }
1907.11440
2965160406
Pooling is one of the main elements in convolutional neural networks. The pooling reduces the size of the feature map, enabling training and testing with a limited amount of computation. This paper proposes a new pooling method named universal pooling. Unlike the existing pooling methods such as average pooling, max pooling, and stride pooling with fixed pooling function, universal pooling generates any pooling function, depending on a given problem and dataset. Universal pooling was inspired by attention methods and can be considered as a channel-wise form of local spatial attention. Universal pooling is trained jointly with the main network and it is shown that it includes the existing pooling methods. Finally, when applied to two benchmark problems, the proposed method outperformed the existing pooling methods and performed with the expected diversity, adapting to the given problem.
Max pooling divides the feature map into blocks and collects the maximum feature value of each block into a smaller output matrix. Max pooling is commonly used between convolution layers and is employed in AlexNet @cite_4 and VGG @cite_7 . Average pooling operates similarly to max pooling but outputs the average of each block in the feature map. Global average pooling (GAP), which applies average pooling over the entire feature map, is commonly used in convolutional networks such as ResNet @cite_13 and DenseNet @cite_1 . In DenseNet, average pooling is applied between the convolution layers. Meanwhile, stride pooling is equivalent to sampling values from a fixed position in each block after convolving over the entire area; this approach is adopted in ResNet.
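The following is a minimal sketch of these standard pooling variants on a single 2D feature map with non-overlapping 2x2 blocks (padding and batching are ignored for clarity).

```python
import numpy as np

def pool2x2(fmap: np.ndarray, mode: str) -> np.ndarray:
    """Pool a 2D feature map over non-overlapping 2x2 blocks."""
    h, w = fmap.shape
    # Crop odd edges, then expose each 2x2 block as its own pair of axes.
    blocks = fmap[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
    if mode == "max":      # max pooling: keep the largest value per block
        return blocks.max(axis=(1, 3))
    if mode == "avg":      # average pooling: mean value per block
        return blocks.mean(axis=(1, 3))
    if mode == "stride":   # stride pooling: sample a fixed position per block
        return blocks[:, 0, :, 0]
    raise ValueError(mode)

def global_avg_pool(fmap: np.ndarray) -> float:
    """Global average pooling (GAP): average over the entire feature map."""
    return float(fmap.mean())

x = np.arange(16, dtype=float).reshape(4, 4)
print(pool2x2(x, "max"))   # [[ 5.  7.] [13. 15.]]
print(pool2x2(x, "avg"))   # [[ 2.5  4.5] [10.5 12.5]]
print(global_avg_pool(x))  # 7.5
```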
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_4", "@cite_7" ], "mid": [ "2963544187", "2559156603", "2950328304", "2896786907" ], "abstract": [ "Deep neural networks with alternating convolutional, max-pooling and decimation layers are widely used in state of the art architectures for computer vision. Max-pooling purposefully discards precise spatial information in order to create features that are more robust, and typically organized as lower resolution spatial feature maps. On some tasks, such as whole-image classification, max-pooling derived features are well suited, however, for tasks requiring precise localization, such as pixel level prediction and segmentation, max-pooling destroys exactly the information required to perform well. Precise localization may be preserved by shallow convnets without pooling but at the expense of robustness. Can we have our max-pooled multilayered cake and eat it too? Several papers have proposed summation and concatenation based methods for combining upsampled coarse, abstract features with finer features to produce robust pixel level predictions. Here we introduce another model — dubbed Recombinator Networks — where coarse features inform finer features early in their formation such that finer features can make use of several layers of computation in deciding how to use coarse features. The model is trained once, end-to-end and performs better than summation-based architectures, reducing the error from the previous state of the art on two facial keypoint datasets, AFW and AFLW, by 30 and beating the current state-of-the-art on 300W without using extra data. We improve performance even further by adding a denoising prediction model based on a novel convnet formulation.", "Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two step procedure: first, a pooling window (e.g., 2× 2) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for learning (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. 
While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "We propose a novel discrete Fourier transform-based pooling layer for convolutional neural networks. The DFT magnitude pooling replaces the traditional max average pooling layer between the convolution and fully-connected layers to retain translation invariance and shape preserving (aware of shape difference) properties based on the shift theorem of the Fourier transform. Thanks to the ability to handle image misalignment while keeping important structural information in the pooling stage, the DFT magnitude pooling improves the classification accuracy significantly. In addition, we propose the DFT+ method for ensemble networks using the middle convolution layer outputs. The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets. The AlexNet, VGG-VD 16, Inception-v3, and ResNet are used as the base networks, upon which DFT and DFT+ methods are implemented. Experimental results show that the proposed methods improve the classification performance in all networks and datasets." ] }
1907.11440
2965160406
Pooling is one of the main elements in convolutional neural networks. The pooling reduces the size of the feature map, enabling training and testing with a limited amount of computation. This paper proposes a new pooling method named universal pooling. Unlike the existing pooling methods such as average pooling, max pooling, and stride pooling with fixed pooling function, universal pooling generates any pooling function, depending on a given problem and dataset. Universal pooling was inspired by attention methods and can be considered as a channel-wise form of local spatial attention. Universal pooling is trained jointly with the main network and it is shown that it includes the existing pooling methods. Finally, when applied to two benchmark problems, the proposed method outperformed the existing pooling methods and performed with the expected diversity, adapting to the given problem.
All of these pooling methods are efficient but simple, and there seems to be room to improve performance. S3Pool @cite_6 and stochastic pooling @cite_12 adopt a probability-based pooling approach. @math pooling over various coefficient norms was proposed in @cite_0 and @cite_3 , and a fractional version of max pooling was proposed in @cite_9 . The spectral space was down-sampled through a filter in @cite_2 and @cite_10 . In @cite_5 , the existing simple pooling methods were combined to improve pooling performance. Detail-preserving pooling, which preserves feature details by applying existing down-sampling techniques from the image processing area and learning the parameters of the function over the network, was proposed in @cite_18 .
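To illustrate the probability-based idea, here is a sketch of stochastic pooling on a single block: an activation is sampled with probability proportional to its (non-negative, e.g., post-ReLU) value. The cited works differ in details such as train/test behavior; this only conveys the core mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_pool_block(block: np.ndarray) -> float:
    """Sample one activation from a block, weighted by its magnitude.
    Assumes non-negative activations (e.g., after ReLU)."""
    vals = block.ravel()
    total = vals.sum()
    if total <= 0:                     # degenerate block: fall back to max
        return float(vals.max())
    probs = vals / total               # probability proportional to value
    return float(rng.choice(vals, p=probs))

print(stochastic_pool_block(np.array([[0.1, 0.4], [0.2, 0.3]])))
```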
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2559156603", "2963544187", "2896786907", "2162931300" ], "abstract": [ "Feature pooling layers (e.g., max pooling) in convolutional neural networks (CNNs) serve the dual purpose of providing increasingly abstract representations as well as yielding computational savings in subsequent convolutional layers. We view the pooling operation in CNNs as a two step procedure: first, a pooling window (e.g., 2× 2) slides over the feature map with stride one which leaves the spatial resolution intact, and second, downsampling is performed by selecting one pixel from each non-overlapping pooling window in an often uniform and deterministic (e.g., top-left) manner. Our starting point in this work is the observation that this regularly spaced downsampling arising from non-overlapping windows, although intuitive from a signal processing perspective (which has the goal of signal reconstruction), is not necessarily optimal for learning (where the goal is to generalize). We study this aspect and propose a novel pooling strategy with stochastic spatial sampling (S3Pool), where the regular downsampling is replaced by a more general stochastic version. We observe that this general stochasticity acts as a strong regularizer, and can also be seen as doing implicit data augmentation by introducing distortions in the feature maps. We further introduce a mechanism to control the amount of distortion to suit different datasets and architectures. To demonstrate the effectiveness of the proposed approach, we perform extensive experiments on several popular image classification benchmarks, observing excellent improvements over baseline models.", "Deep neural networks with alternating convolutional, max-pooling and decimation layers are widely used in state of the art architectures for computer vision. Max-pooling purposefully discards precise spatial information in order to create features that are more robust, and typically organized as lower resolution spatial feature maps. On some tasks, such as whole-image classification, max-pooling derived features are well suited, however, for tasks requiring precise localization, such as pixel level prediction and segmentation, max-pooling destroys exactly the information required to perform well. Precise localization may be preserved by shallow convnets without pooling but at the expense of robustness. Can we have our max-pooled multilayered cake and eat it too? Several papers have proposed summation and concatenation based methods for combining upsampled coarse, abstract features with finer features to produce robust pixel level predictions. Here we introduce another model — dubbed Recombinator Networks — where coarse features inform finer features early in their formation such that finer features can make use of several layers of computation in deciding how to use coarse features. The model is trained once, end-to-end and performs better than summation-based architectures, reducing the error from the previous state of the art on two facial keypoint datasets, AFW and AFLW, by 30 and beating the current state-of-the-art on 300W without using extra data. We improve performance even further by adding a denoising prediction model based on a novel convnet formulation.", "We propose a novel discrete Fourier transform-based pooling layer for convolutional neural networks. 
The DFT magnitude pooling replaces the traditional max average pooling layer between the convolution and fully-connected layers to retain translation invariance and shape preserving (aware of shape difference) properties based on the shift theorem of the Fourier transform. Thanks to the ability to handle image misalignment while keeping important structural information in the pooling stage, the DFT magnitude pooling improves the classification accuracy significantly. In addition, we propose the DFT+ method for ensemble networks using the middle convolution layer outputs. The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets. The AlexNet, VGG-VD 16, Inception-v3, and ResNet are used as the base networks, upon which DFT and DFT+ methods are implemented. Experimental results show that the proposed methods improve the classification performance in all networks and datasets.", "Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks." ] }
1907.11307
2966138214
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradient and ADAM, have been widely used for building deep learning models because of their faster convergence rates compared to stochastic gradient descent (SGD). Momentum is a method that helps accelerate SGD in the relevant directions in variable updating, which can minify the oscillations of variables update route. Optimization algorithms with momentum usually allocate a fixed hyperparameter (e.g., ) as the weight of the momentum term. However, using a fixed weight is not applicable to some situations, and such a hyper-parameter can be extremely hard to tune in applications. In this paper, we will introduce a new optimization algorithm, namely DEAM (Discriminative wEight on Accumulated Momentum). Instead of assigning the momentum term with a fixed weight, DEAM proposes to compute the momentum weight in the learning process automatically. DEAM also involves a "backtrack" term, which can help accelerate the algorithm convergence by restricting redundant updates. Extensive experiments have been done on several real-world datasets. The experimental results demonstrate that DEAM can achieve a faster convergence rate than the existing optimization algorithms in training both the classic machine learning models and the recent deep learning models.
Adam @cite_21 @cite_0 was proposed based on SGD and the momentum concept, and it also computes individual adaptive learning rates for different variables. Adam records the first-order momentum @math and the second-order momentum @math of the gradients using moving averages (controlled by the parameters @math and @math , respectively), and further computes bias-corrected versions of them ( @math and @math ); the resulting variable-update rules are reconstructed below. Based on Adam, @cite_17 proposes to switch from Adam to SGD during the training process. In this way, it can combine the advantages of both SGD and Adam.
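The update equations themselves were elided in the text; the following is a reconstruction of the standard Adam update that matches the description above (moving-average momenta with bias correction). The notation is ours and may differ from that of the cited works.

```latex
\[
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,g_t, &
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2, \\
\hat{m}_t &= \frac{m_t}{1-\beta_1^t}, &
\hat{v}_t &= \frac{v_t}{1-\beta_2^t}, \\
\theta_t &= \theta_{t-1} - \frac{\eta}{\sqrt{\hat{v}_t}+\epsilon}\,\hat{m}_t. &&
\end{aligned}
\]
```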
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_17" ], "mid": [ "2963702144", "2766164908", "2609701267", "2963476860" ], "abstract": [ "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to @math validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to @math validation accuracy in under 30 minutes.", "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation." ] }
1907.11307
2966138214
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradient and ADAM, have been widely used for building deep learning models because of their faster convergence rates compared to stochastic gradient descent (SGD). Momentum is a method that helps accelerate SGD in the relevant directions in variable updating, which can minify the oscillations of variables update route. Optimization algorithms with momentum usually allocate a fixed hyperparameter (e.g., ) as the weight of the momentum term. However, using a fixed weight is not applicable to some situations, and such a hyper-parameter can be extremely hard to tune in applications. In this paper, we will introduce a new optimization algorithm, namely DEAM (Discriminative wEight on Accumulated Momentum). Instead of assigning the momentum term with a fixed weight, DEAM proposes to compute the momentum weight in the learning process automatically. DEAM also involves a "backtrack" term, which can help accelerate the algorithm convergence by restricting redundant updates. Extensive experiments have been done on several real-world datasets. The experimental results demonstrate that DEAM can achieve a faster convergence rate than the existing optimization algorithms in training both the classic machine learning models and the recent deep learning models.
The method of @cite_7 is a modified version of Adam. It changes the definition of the second-order momentum to @math , while the other settings are almost the same as in Adam. What's more, it applies a varied learning rate @math compared to Adam, but the definition of @math is not specified.
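For reference, this description is consistent with the AMSGrad-style redefinition of the second-order momentum as a running maximum. Assuming that is what the elided formula refers to (our assumption, not confirmed by the text), the change would be:

```latex
\[
\hat{v}_t = \max(\hat{v}_{t-1},\, v_t), \qquad
\theta_t = \theta_{t-1} - \frac{\eta_t}{\sqrt{\hat{v}_t}+\epsilon}\,\hat{m}_t.
\]
```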
{ "cite_N": [ "@cite_7" ], "mid": [ "2896285425", "2257118316", "2963702144", "2769394111" ], "abstract": [ "Momentum is a popular technique to accelerate the convergence in practical training, and its impact on convergence guarantee has been well-studied for first-order algorithms. However, such a successful acceleration technique has not yet been proposed for second-order algorithms in nonconvex optimization.In this paper, we apply the momentum scheme to cubic regularized (CR) Newton's method and explore the potential for acceleration. Our numerical experiments on various nonconvex optimization problems demonstrate that the momentum scheme can substantially facilitate the convergence of cubic regularization, and perform even better than the Nesterov's acceleration scheme for CR. Theoretically, we prove that CR under momentum achieves the best possible convergence rate to a second-order stationary point for nonconvex optimization. Moreover, we study the proposed algorithm for solving problems satisfying an error bound condition and establish a local quadratic convergence rate. Then, particularly for finite-sum problems, we show that the proposed algorithm can allow computational inexactness that reduces the overall sample complexity without degrading the convergence rate.", "The ( )-cover time of the two dimensional torus by Brownian motion is the time it takes for the process to come within distance ( >0 ) from any point. Its leading order in the small ( )-regime has been established by (Ann Math 160:433–464, 2004). In this work, the second order correction is identified. The approach relies on a multi-scale refinement of the second moment method, and draws on ideas from the study of the extremes of branching Brownian motion.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to @math validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.", "Nesterov's accelerated gradient descent (AGD), an instance of the general family of \"momentum methods\", provably achieves faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stationary point in @math iterations, faster than the @math iterations required by GD. To the best of our knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point. 
Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases per step even for nonconvex functions, and (2) a novel framework called improve or localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization." ] }
1907.11321
2966850631
In spite of the rapidly increasing number of applications of machine learning in various domains, a principled and systematic approach to the incorporation of domain knowledge in the engineering process is still lacking and ad hoc solutions that are difficult to validate are still the norm in practice, which is of growing concern not only in mission-critical applications. In this note, we introduce Probabilistic Approximate Logic (PALO) as a logic based on the notion of mean approximate probability to overcome conceptual and computational difficulties inherent to strictly probabilistic logics. The logic is approximate in several dimensions. Logical independence assumptions are used to obtain approximate probabilities, but by averaging over many instances of formulas a useful estimate of mean probability with known confidence can usually be obtained. To enable efficient computational inference, the logic has a continuous semantics that reflects only a subset of the structural properties of classical logic, but this imprecision can be partly compensated by richer theories obtained by classical inference or other means. Computational inference, which refers to the construction of models and validation of logical properties, is based on Stochastic Gradient Descent (SGD) and Markov Chain Monte Carlo (MCMC) techniques and hence another dimension where approximations are involved. We also present the Logical Imagination Engine (LIME), a prototypical implementation of PALO based on TensorFlow. Albeit not limited to the biological domain, we illustrate its operation in a quite substantial bioinformatics machine learning application concerned with network synthesis and analysis in a recent DARPA project.
It is noteworthy that our approach of combining selected operators from Hájek's Product logic, Łukasiewicz logic, and Gödel logic in a non-standard fashion needs to be differentiated from work in the area of fuzzy logics, which does not aim at a probabilistic interpretation but at an orthogonal notion of truthiness (see also @cite_11 for his population-based interpretation of Fuzzy Logic). For example, @cite_38 investigates a propositional fuzzy logic that contains Product logic, Łukasiewicz logic, and Gödel logic as sublogics, and the focus is on identifying a suitable axiomatization and a class of models so that soundness and completeness can be established. In contrast, our approach with PALO is purely semantic and motivated by computational feasibility. We do not attempt to establish an axiomatic system for symbolic inference in soft logic, but rather maintain a connection to classical logic, for which symbolic methods and technologies are well developed.
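For concreteness, these are the standard conjunction (t-norm) operators of the three logics named above. This is a sketch of the textbook definitions only; which operators PALO actually selects, and how they are combined, is not specified here.

```python
def product_and(a: float, b: float) -> float:
    """Product logic conjunction (equals the probability of a conjunction
    of independent events, motivating its probabilistic reading)."""
    return a * b

def lukasiewicz_and(a: float, b: float) -> float:
    """Lukasiewicz conjunction."""
    return max(0.0, a + b - 1.0)

def goedel_and(a: float, b: float) -> float:
    """Goedel (minimum) conjunction."""
    return min(a, b)

a, b = 0.8, 0.7
print(product_and(a, b), lukasiewicz_and(a, b), goedel_and(a, b))
# 0.56 0.5 0.7
```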
{ "cite_N": [ "@cite_38", "@cite_11" ], "mid": [ "2039204373", "2003531456", "2166741250", "1638553575" ], "abstract": [ "Probability theory and fuzzy logic have been presented as quite distinct theoretical foundations for reasoning and decision making in situations of uncertainty. This paper establishes a common basis for both forms of logic of uncertainty in which a basic uncertainty logic is defined in terms of a valuation on a lattice of propositions. The (non-truth-functional) connectives for conjunction, disjunction, equivalence, implication, and negation are defined in terms which closely resemble those of probability theory. Addition of the axiom of the excluded middle to the basic logic gives a standard probability logic. Alternatively, addition of a requirement for strong truth-functionality (truth-value of connective determined by truth-value of constituents) gives a fuzzy logic with connectives, including implication, as in Lukasiewicz' infinitely valued logic. A common semantics for all such variants is given in terms of binary responses from a population. The type of population, e.g., physical events, people, or neurons, determines whether the model is of physical probability, subjective belief, or human decision-making. The formal theory and the semantics together illustrate clearly the precise similarities and differences between fuzzy and probability logics.", "The use of conventional classical logic is misleading for characterizing the behavior of logic programs because a logic program, when queried, will do one of three things: succeed with the query, fail with it, or not respond because it has fallen into infinite backtracking. In [7] Kleene proposed a three-valued logic for use in recursive function theory. The so-called third truth value was really undefined: truth value not determined. This logic is a useful tool in logic-program specification, and in particular, for describing models. (See [11].) Tarski showed that formal languages, like arithmetic, cannot contain their own truth predicate because one could then construct a paradoxical sentence that effectively asserts its own falsehood. Natural languages do allow the use of \"is true\", so by Tarski's argument a semantics for natural language must leave truth-value gaps: some sentences must fail to have a truth value. In [8] Kripke showed how a model having truth-value gaps, using Kleene's three-valued logic, could be specified. The mechanism he used is a famiUar one in program semantics: consider the least fixed point of a certain monotone operator. But that operator must be defined on a space involving three-valued logic, and for Kripke's application it will not be continuous. We apply techniques similar to Kripke's to logic programs. We associate with each program a monotone operator on a space of three-valued logic interpretations, or better partial interpretations. This space is not a complete lattice, and the operators are not, in general, continuous. But least and other fixed points do exist. These fixed points are shown to provide suitable three-valued program models. They relate closely to the least and greatest fixed points of the operators used in [1]. Because of the extra machinery involved, our treatment allows for a natural consideration of negation, and indeed, of the other prepositional connectives as well. 
And because of the elaborate structure of fixed points available, we are able to", "Unifying first-order logic and probability is a long-standing goal of AI, and in recent years many representations combining aspects of the two have been proposed. However, inference in them is generally still at the level of propositional logic, creating all ground atoms and formulas and applying standard probabilistic inference methods to the resulting network. Ideally, inference should be lifted as in first-order logic, handling whole sets of indistinguishable objects together, in time independent of their cardinality. Poole (2003) and (2005, 2006) developed a lifted version of the variable elimination algorithm, but it is extremely complex, generally does not scale to realistic domains, and has only been applied to very small artificial problems. In this paper we propose the first lifted version of a scalable probabilistic inference algorithm, belief propagation (loopy or not). Our approach is based on first constructing a lifted network, where each node represents a set of ground atoms that all pass the same messages during belief propagation. We then run belief propagation on this network. We prove the correctness and optimality of our algorithm. Experiments show that it can greatly reduce the cost of inference.", "We investigate mca-programs, that is, logic programs with clauses built of monotone cardinality atoms of the form kX, where k is a non-negative integer and X is a finite set of propositional atoms. We develop a theory of mca-programs. We demonstrate that the operational concept of the one-step provability operator generalizes to mca-programs, but the generalization involves nondeterminism. Our main results show that the formalism of mca-programs is a common generalization of (1) normal logic programming with its semantics of models, supported models and stable models, (2) logic programming with cardinality atoms and with the semantics of stable models, as defined by Niemela, Simons and Soininen, and (3) of disjunctive logic programming with the possible-model semantics of Sakama and Inoue." ] }
1907.11321
2966850631
In spite of the rapidly increasing number of applications of machine learning in various domains, a principled and systematic approach to the incorporation of domain knowledge in the engineering process is still lacking and ad hoc solutions that are difficult to validate are still the norm in practice, which is of growing concern not only in mission-critical applications. In this note, we introduce Probabilistic Approximate Logic (PALO) as a logic based on the notion of mean approximate probability to overcome conceptual and computational difficulties inherent to strictly probabilistic logics. The logic is approximate in several dimensions. Logical independence assumptions are used to obtain approximate probabilities, but by averaging over many instances of formulas a useful estimate of mean probability with known confidence can usually be obtained. To enable efficient computational inference, the logic has a continuous semantics that reflects only a subset of the structural properties of classical logic, but this imprecision can be partly compensated by richer theories obtained by classical inference or other means. Computational inference, which refers to the construction of models and validation of logical properties, is based on Stochastic Gradient Descent (SGD) and Markov Chain Monte Carlo (MCMC) techniques and hence another dimension where approximations are involved. We also present the Logical Imagination Engine (LIME), a prototypical implementation of PALO based on TensorFlow. Albeit not limited to the biological domain, we illustrate its operation in a quite substantial bioinformatics machine learning application concerned with network synthesis and analysis in a recent DARPA project.
@cite_14 is another approach to overcoming the fact that a probabilistic interpretation of formulas is not truth-functional: it uses a less abstract semantics that interprets each formula as the set of assignments for which it holds, so that conjunction becomes a simple intersection. Although this is an elegant solution, with our mean probability semantics that includes lower and upper bounds, it turns out that the bounds are sufficiently tight that replacing our approximate interpretation by a strict probabilistic one is unnecessary for the data-rich applications we are targeting. Two other practical difficulties with an exact probabilistic semantics are that dependencies between subformulas referring to external data are often unknown, and that even if all known dependencies were taken into account, it would lead to unacceptably high computational complexity in the context of model generation and learning.
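The following toy sketch illustrates the set-of-assignments semantics mentioned above: each formula denotes the set of truth assignments satisfying it, conjunction is exactly intersection, and under a uniform measure probability is a simple ratio, with no independence assumption. Variable names are illustrative.

```python
from itertools import product

VARS = ("a", "b", "c")
ASSIGNMENTS = [dict(zip(VARS, bits)) for bits in product([False, True], repeat=3)]

def denote(formula):
    """Set of assignment indices on which `formula` holds."""
    return {i for i, asg in enumerate(ASSIGNMENTS) if formula(asg)}

f = denote(lambda s: s["a"] or s["b"])
g = denote(lambda s: s["b"] and not s["c"])
conj = f & g                              # conjunction = set intersection
print(len(conj) / len(ASSIGNMENTS))       # exact probability: 0.25
```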
{ "cite_N": [ "@cite_14" ], "mid": [ "2963676309", "2135625884", "140581939", "2156621282" ], "abstract": [ "The distribution semantics is one of the most prominent approaches for the combination of logic programming and probability theory. Many languages follow this semantics, such as Independent Choice Logic, PRISM, pD, Logic Programs with Annotated Disjunctions (LPADs) and ProbLog. When a program contains functions symbols, the distribution semantics is well-defined only if the set of explanations for a query is finite and so is each explanation. Welldefinedness is usually either explicitly imposed or is achieved by severely limiting the class of allowed programs. In this paper we identify a larger class of programs for which the semantics is well-defined together with an efficient procedure for computing the probability of queries. Since LPADs offer the most general syntax, we present our results for them, but our results are applicable to all languages under the distribution semantics. We present the algorithm “Probabilistic Inference with Tabling and Answer subsumption” (PITA) that computes the probability of queries by transforming a probabilistic program into a normal program and then applying SLG resolution with answer subsumption. PITA has been implemented in XSB and tested on six domains: two with function symbols and four without. The execution times are compared with those of ProbLog, cplint and CVE. PITA was almost always able to solve larger problems in a shorter time, on domains with and without function symbols.", "1. Summary In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. First a formal solution is presented. Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. 
This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.", "We present a semantics for interpreting probabilistic statements expressed in a first-order quantifier-free language. We show how this semantics places constraints on the probabilities which can be associated with such statements. We then consider its use in the area of story understanding. We show that for at least simple models of stories (equivalent to the script plan models) there arc ways to specify reasonably good probabilities. Lastly, we show that while the semantics dictates seemingly implausibly low prior probabilities for equality statements, once they are conditioned by an assumption of spatio-temporal locality of observation the probabilities become \"reasonable.\"", "We present an approach to learning a model-theoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as \"Republican front-runner from Texas\" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entity-linked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot." ] }
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task, faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improve the recall@10 from 60 to 86 maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
Image captioning has been studied intensively since encoder-decoder models were introduced. Large efforts have been invested in making captions more natural and diverse. For example, @cite_11 used conditional GANs to train a caption generator to improve fidelity, naturalness, and diversity. Using GANs avoids the hard challenge of defining an explicit language-naturalness loss; instead, the discriminator can receive fake or incorrect captions or images as negatives. @cite_7 used a conditional GAN with two discriminators, a CNN and an RNN. @cite_6 further used a hierarchical compositional model over captions to increase diversity and naturalness. More related to the optimization techniques discussed in this paper, @cite_3 trained an adversarial network using a straight-through Gumbel approach. As we discuss below, training cooperative agents allows more effective optimization techniques than training GANs, because the generator is allowed to provide any useful information to the (cooperative) discriminator. Specifically, during training, the speaker can represent generated captions differently than human captions.
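As a minimal sketch of the straight-through Gumbel technique mentioned above: the forward pass emits discrete one-hot word choices, while the backward pass routes gradients through the soft relaxation. Shapes and names are illustrative and not taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-through Gumbel-softmax: discrete forward, soft backward."""
    # Sample Gumbel noise (epsilons avoid log(0)).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    soft = F.softmax((logits + gumbel) / tau, dim=-1)                # differentiable
    hard = F.one_hot(soft.argmax(dim=-1), logits.size(-1)).float()   # discrete
    # Forward value equals `hard`; gradient flows through `soft`.
    return hard + soft - soft.detach()

logits = torch.randn(2, 5, requires_grad=True)  # (batch, vocab) toy logits
words = st_gumbel_softmax(logits)
words.sum().backward()  # gradients reach `logits` despite discrete sampling
```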
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_7", "@cite_11" ], "mid": [ "2962968835", "2597985671", "2806935606", "2785967511" ], "abstract": [ "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks.", "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect.Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the \"ground-truth\" captions while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity -- two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks.", "We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. 
The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.", "Despite of the success of Generative Adversarial Networks (GANs) for image generation tasks, the trade-off between image diversity and visual quality are an well-known issue. Conventional techniques achieve either visual quality or image diversity; the improvement in one side is often the result of sacrificing the degradation in the other side. In this paper, we aim to achieve both simultaneously by improving the stability of training GANs. A key idea of the proposed approach is to implicitly regularizing the discriminator using a representative feature. For that, this representative feature is extracted from the data distribution, and then transferred to the discriminator for enforcing slow updates of the gradient. Consequently, the entire training process is stabilized because the learning curve of discriminator varies slowly. Based on extensive evaluation, we demonstrate that our approach improves the visual quality and diversity of state-of-the art GANs." ] }
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improves recall@10 from 60% to 86% while maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
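Since the abstract reports recall@10 for caption-based retrieval, a small sketch of how such a metric is typically computed from listener scores may be useful; the diagonal ground-truth pairing and the random scores are assumptions of this toy setup, not details from the paper.

```python
# Toy recall@k for caption-to-image retrieval (assumed diagonal ground truth).
import numpy as np

def recall_at_k(sim: np.ndarray, k: int = 10) -> float:
    """sim[i, j]: listener score of caption i against image j; image i is correct."""
    topk = np.argsort(-sim, axis=1)[:, :k]                # best-k images per caption
    hits = (topk == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())

sim = np.random.default_rng(0).random((100, 100))         # stand-in scores
print(f"recall@10 = {recall_at_k(sim, 10):.2f}")
```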
Beyond the naturalness of communication, several studies looked into the problem of generating captions that allow discriminating an image from other, similar images. @cite_18 showed how a caption can take a distractor image into account at inference time, yielding a caption that discriminates the target image from the distractor. A similar approach was taken earlier by @cite_0 . @cite_16 recently described a dataset containing pairs of closely similar images that can be used as hard negatives for evaluating image-retrieval tasks.
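In the spirit of the inference-time approach of @cite_18, here is a hedged sketch of discriminative reranking: candidate captions are scored by how much better they fit the target image than the distractor. The weight `lam`, the function name, and the toy numbers are illustrative assumptions, not the cited method's exact objective.

```python
# Hedged sketch: rerank candidate captions to discriminate target vs. distractor.
import numpy as np

def rerank(captions, logp_target, logp_distractor, lam=0.5):
    """Higher score: likely under the target image, unlikely under the distractor."""
    scores = np.asarray(logp_target) - lam * np.asarray(logp_distractor)
    return [captions[i] for i in np.argsort(-scores)]

captions = ["a cat on a couch", "a siamese cat on a couch", "a tiger cat"]
best = rerank(captions,
              logp_target=[-3.0, -3.4, -4.0],       # log p(caption | target)
              logp_distractor=[-3.1, -6.0, -2.4])   # log p(caption | distractor)
print(best[0])   # caption that best separates the target from the distractor
```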
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_16" ], "mid": [ "2950401034", "2885822952", "2963170456", "2962968835" ], "abstract": [ "We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of \"siamese cat\" and \"tiger cat\", we generate language that describes the \"siamese cat\" in a way that distinguishes it from \"tiger cat\". Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.", "Image captioning, which aims to automatically generate a sentence description for an image, has attracted much research attention in cognitive computing. The task is rather challenging, since it requires cognitively combining the techniques from both computer vision and natural language processing domains. Existing CNN-RNN framework-based methods suffer from two main problems: in the training phase, all the words of captions are treated equally without considering the importance of different words; in the caption generation phase, the semantic objects or scenes might be misrecognized. In our paper, we propose a method based on the encoder-decoder framework, named Reference based Long Short Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for the given image by introducing reference information. Specifically, we assign different weights to the words according to the correlation between words and images during the training phase. We additionally maximize the consensus score between the captions generated by the captioning model and the reference information from the neighboring images of the target image, which can reduce the misrecognition problem. We have conducted extensive experiments and comparisons on the benchmark datasets MS COCO and Flickr30k. The results show that the proposed approach can outperform the state-of-the-art approaches on all metrics, especially achieving a 10.37 improvement in terms of CIDEr on MS COCO. By analyzing the quality of the generated captions, we come to a conclusion that through the introduction of reference information, our model can learn the key information of images and generate more trivial and relevant words for images.", "The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate the language structure patterns, thus tend to fall into a stereotype of replicating frequent phrases or sentences and neglect unique aspects of each image. 
In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions. (2) The correspondence between generated captions and images are naturally incorporated in the generation process without human annotations, and hence our approach could utilize a large amount of unlabeled images to boost captioning performance with no additional annotations. We demonstrate the effectiveness of the proposed retrieval-guided method on COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.", "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks." ] }
1907.11565
2965034157
When describing images with natural language, the descriptions can be made more informative if tuned using downstream tasks. This is often achieved by training two networks: a "speaker network" that generates sentences given an image, and a "listener network" that uses them to perform a task. Unfortunately, training multiple networks jointly to communicate to achieve a joint task faces two major challenges. First, the descriptions generated by a speaker network are discrete and stochastic, making optimization very hard and inefficient. Second, joint training usually causes the vocabulary used during communication to drift and diverge from natural language. We describe an approach that addresses both challenges. We first develop a new effective optimization based on partial-sampling from a multinomial distribution combined with straight-through gradient updates, which we name PSST for Partial-Sampling Straight-Through. Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions. Together, this approach creates descriptions that are both more discriminative and more natural than previous approaches. Evaluations on the standard COCO benchmark show that PSST Multinomial dramatically improves recall@10 from 60% to 86% while maintaining comparable language naturalness, and human evaluations show that it also increases naturalness while keeping the discriminative power of generated captions.
Several authors studied the properties of the languages that are learned when agents communicate in visual tasks @cite_4 @cite_13 @cite_2 @cite_15 . The current paper purposefully focuses on keeping the language close to natural, rather than studying the properties of the emergent language.
{ "cite_N": [ "@cite_13", "@cite_15", "@cite_4", "@cite_2" ], "mid": [ "2730230212", "2953189990", "2627585944", "2156050092" ], "abstract": [ "A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision! In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of 'negative' results culminating in a 'positive' one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge 'naturally', despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.", "There is growing interest in the language developed by agents interacting in emergent-communication settings. Earlier studies have focused on the agents' symbol usage, rather than on their representation of visual input. In this paper, we consider the referential games of (2017) and investigate the representations the agents develop during their evolving interaction. We find that the agents establish successful communication by inducing visual representations that almost perfectly align with each other, but, surprisingly, do not capture the conceptual properties of the objects depicted in the input images. We conclude that, if we are interested in developing language-like communication systems, we must pay more attention to the visual semantics agents associate to the symbols they use.", "We are increasingly surrounded by artificially intelligent technology that takes decisions and executes actions on our behalf. This creates a pressing need for general means to communicate with, instruct and guide artificial agents, with human language the most compelling means for such communication. To achieve this in a scalable fashion, agents must be able to relate language to the world and to actions; that is, their understanding of language must be grounded and embodied. However, learning grounded language is a notoriously challenging problem in artificial intelligence research. Here we present an agent that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions. Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions. The agent's comprehension of language extends beyond its prior experience, enabling it to apply familiar language to unfamiliar situations and to interpret entirely novel instructions. Moreover, the speed with which this agent learns new words increases as its semantic knowledge grows. 
This facility for generalising and bootstrapping semantic knowledge indicates the potential of the present approach for reconciling ambiguous natural language with the complexity of the physical world.", "A spoken language generation system has been developed that learns to describe objects in computer-generated visual scenes. The system is trained by a ‘show-and-tell\" procedure in which visual scenes are paired with natural language descriptions. Learning algorithms acquire probabilistic structures which encode the visual semantics of phrase structure, word classes, and individual words. Using these structures, a planning algorithm integrates syntactic, semantic, and contextual constraints to generate natural and unambiguous descriptions of objects in novel scenes. The system generates syntactically well-formed compound adjective noun phrases, as well as relative spatial clauses. The acquired linguistic structures generalize from training data, enabling the production of novel word sequences which were never observed during training. The output of the generation system is synthesized using word-based concatenative synthesis drawing from the original training speech corpus. In evaluations of semantic comprehension by human judges, the performance of automatically generated spoken descriptions was comparable to human-generated descriptions. This work is motivated by our long-term goal of developing spoken language processing systems which grounds semantics in machine perception and action. ! 2002 Elsevier Science Ltd. All rights reserved." ] }
1907.11065
2963744496
Variants of dropout methods have been designed for the fully-connected, convolutional, and recurrent layers of neural networks and have been shown to be effective in avoiding overfitting. As an appealing alternative to recurrent and convolutional layers, the fully-connected self-attention layer surprisingly lacks a specific dropout method. This paper explores the possibility of regularizing the attention weights in Transformers to prevent different contextualized feature vectors from co-adaptation. Experiments on a wide range of tasks show that DropAttention can improve performance and reduce overfitting.
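A minimal numpy sketch of the core idea, dropping individual attention weights and renormalizing the rows; the exact DropAttention variants (e.g., span-wise dropping or its rescaling scheme) may differ from this generic form.

```python
# Generic sketch: dropout applied to softmax attention weights (assumed variant).
import numpy as np

def attention_with_weight_dropout(Q, K, V, p=0.1, rng=np.random.default_rng(0)):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # softmax attention weights
    w *= rng.random(w.shape) >= p                         # drop individual weights
    w /= np.maximum(w.sum(axis=-1, keepdims=True), 1e-9)  # renormalize each row
    return w @ V

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention_with_weight_dropout(Q, K, V, p=0.2).shape)   # (4, 8)
```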
We present a summary of existing models by highlighting the differences among them, as shown in Table . The original idea of Dropout was proposed by @cite_26 for fully-connected networks and is regarded as an effective regularization method. Since then, many dropout techniques for specific network architectures, such as CNNs and RNNs, have been proposed. For CNNs, most successful methods require the noise to be structured @cite_18 @cite_15 @cite_22 @cite_17 @cite_25 @cite_24 @cite_12 . For example, SpatialDropout @cite_7 is used to address the spatial correlation problem. DropConnect @cite_5 sets a randomly selected subset of weights within the network to zero. For RNNs, Variational Dropout @cite_28 and ZoneOut @cite_23 are the most widely used methods. In Variational Dropout, the dropout rate is learned and the same neurons are dropped at every timestep. In ZoneOut, some hidden units stochastically maintain their previous values instead of being dropped. Different from these methods, in this paper we explore how to drop information in self-attention layers.
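To contrast the two RNN regularizers named above, here is a compact, generic sketch (assumed hyperparameters; not the cited implementations): Variational Dropout reuses one mask across timesteps, while ZoneOut keeps previous hidden values instead of zeroing them.

```python
# Generic contrast of Variational Dropout vs. ZoneOut for RNN hidden states.
import numpy as np
rng = np.random.default_rng(0)

def variational_mask(hidden_dim, p=0.3):
    # One mask per sequence, reused at every timestep (inverted-dropout scaling).
    return (rng.random(hidden_dim) >= p) / (1.0 - p)

def zoneout_step(h_prev, h_new, p=0.15):
    # Stochastically keep the previous hidden value instead of dropping it.
    return np.where(rng.random(h_new.shape) < p, h_prev, h_new)

mask = variational_mask(4)
h = np.zeros(4)
for x in np.ones((3, 4)):            # toy 3-step recurrence
    h = zoneout_step(h, np.tanh(x + h * mask), p=0.15)
print(h)
```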
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_7", "@cite_28", "@cite_24", "@cite_23", "@cite_5", "@cite_15", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2890166761", "2963117513", "2885062394", "1591801644" ], "abstract": [ "Deep neural networks often work well when they are over-parameterized and trained with a massive amount of noise and regularization, such as weight decay and dropout. Although dropout is widely used as a regularization technique for fully connected layers, it is often less effective for convolutional layers. This lack of success of dropout for convolutional layers is perhaps due to the fact that neurons in a contiguous region in convolutional layers are strongly correlated so information can still flow through convolutional networks despite dropout. Thus a structured form of dropout is needed to regularize convolutional networks. In this paper, we introduce DropBlock, a form of structured dropout, where neurons in a contiguous region of a feature map are dropped together. Extensive experiments show that DropBlock works much better than dropout in regularizing convolutional networks. On ImageNet, DropBlock with ResNet-50 architecture achieves 77.65 accuracy, which is more than 1 improvement on the previous result of this architecture.", "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves gener- alization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural net- works and provides structured sparsity, e.g. removes neurons and or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is com- puted in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.", "Multi-layer neural networks have lead to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (local minima, colinearity, feature discovery etc.) is called Dropout (Hinton, et al 2012, 2016). The Dropout algorithm removes hidden units according to a Bernoulli random variable with probability @math prior to each update, creating random \"shocks\" to the network that are averaged over updates. In this paper we will show that Dropout is a special case of a more general model published originally in 1990 called the Stochastic Delta Rule, or SDR (Hanson, 1990). SDR redefines each weight in the network as a random variable with mean @math and standard deviation @math . 
Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet shows the SDR outperforms standard Dropout in test error by approx. @math with DenseNet-BC 250 on CIFAR-100 and approx. @math in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 35 epochs.", "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation." ] }
1907.11035
2963146525
Robotic grasping in cluttered environments is often infeasible due to obstacles preventing possible grasps. Then, pre-grasping manipulation like shifting or pushing an object becomes necessary. We developed an algorithm that can learn, in addition to grasping, to shift objects in such a way that their grasp probability increases. Our research contribution is threefold: First, we present an algorithm for learning the optimal pose of manipulation primitives like clamping or shifting. Second, we learn non-prehensile actions that explicitly increase the grasping probability. Making one skill (shifting) directly dependent on another (grasping) removes the need for sparse rewards, leading to more data-efficient learning. Third, we apply a real-world solution to the industrial task of bin picking, resulting in the ability to empty bins completely. The system is trained in a self-supervised manner with around 25000 grasp and 2500 shift actions. Our robot is able to grasp and file objects with 274 picks per hour. Furthermore, we demonstrate the system's ability to generalize to novel objects.
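A hedged sketch of the high-level decision loop the abstract describes: grasp if some pose looks good enough, otherwise apply the shift predicted to increase the grasp probability the most. The function names, signatures, and threshold are hypothetical stand-ins, not the paper's API.

```python
# Hypothetical grasp-or-shift decision loop (illustrative stand-in networks).
import random
random.seed(0)

def grasp_prob(image, pose):      # stand-in for the learned grasp-quality network
    return random.random()

def shift_gain(image, pose):      # stand-in: predicted increase in grasp probability
    return random.random()

def choose_action(image, grasp_poses, shift_poses, threshold=0.75):
    p_best, g_best = max((grasp_prob(image, g), g) for g in grasp_poses)
    if p_best >= threshold:
        return ("grasp", g_best)                  # a good grasp is available
    _, s_best = max((shift_gain(image, s), s) for s in shift_poses)
    return ("shift", s_best)                      # make the scene more graspable

print(choose_action(None, grasp_poses=[(0, 0), (1, 2)], shift_poses=[(3, 1), (2, 2)]))
```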
Object manipulation, and grasping in particular, are well-researched fields within robotics. @cite_4 differentiate between analytical and data-driven approaches to grasping. Historically, grasp synthesis was based on analytical constructions of force-closure grasps @cite_8 . In comparison, data-driven approaches are defined by sampling and ranking possible grasps. Popular ranking functions include classical mechanics and model-based grasp metrics @cite_15 @cite_12 . As modeling grasps is itself challenging, even more complex interactions like motion planning of pre-grasping actions have been studied less frequently. Within this scope, Dogar and Srinivasa @cite_1 combined pushing and grasping into a single action, enabling them to grasp objects from a more cluttered table. @cite_2 presented a method for rotating objects to find more robust grasps for transport tasks.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_2", "@cite_15", "@cite_12" ], "mid": [ "2951159816", "2963033241", "2003511559", "1899217968" ], "abstract": [ "We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.", "This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with equivalent performance to current state-of-the-art techniques. The light- weight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83 grasp success rate on a set of previously unseen objects with adversarial geometry and 88 on a set of household objects that are moved during the grasp attempt. We also achieve 81 accuracy when grasping in dynamic clutter.", "Abstract The problem of finding stable grasps has been widely studied in robotics. However, in many applications the resulting grasps should not only be stable but also applicable for a particular task. Task-specific grasps are closely linked to object categories so that objects in a same category can be often used to perform the same task. This paper presents a probabilistic approach for task-specific stable grasping of objects with shape variations inside the category. An optimal grasp is found as a grasp that is maximally likely to be task compatible and stable taking into account shape uncertainty in a probabilistic context. The method requires only partial models of new objects for grasp generation and only few models and example grasps are used during the training stage. The experiments show that the approach can use multiple models to generalize to new objects in that it outperforms grasping based on the closest model. The method is shown to generate stable grasps for new objects belonging to the same class as well as for similar in shape objects of different categories.", "We present a system for grasping unknown objects, even from piles or cluttered scenes, given a point cloud. 
Our method is based on the topography of a given scene and abstracts grasp-relevant structures to enable machine learning techniques for grasping tasks. We describe how Height Accumulated Features HAF and their extension, Symmetry Height Accumulated Features, extract grasp relevant local shapes. We investigate grasp quality using an F-score metric. We demonstrate the gain and the expressive power of HAF by comparing its trained classifier with one that resulted from training on simple height grids. An efficient way to calculate HAF is presented. We describe how the trained grasp classifier is used to explore the whole grasp space and introduce a heuristic to find the most robust grasp. We show how to use our approach to adapt the gripper opening width before grasping. In robotic experiments we demonstrate different aspects of our system on three robot platforms: a Schunk seven-degree-of-freedom arm, a PR2 and a Kuka LWR arm. We perform tasks to grasp single objects, autonomously unload a box and clear the table. Thereby we show that our approach is easily adaptable and robust with respect to different manipulators. As part of the experiments we compare our algorithm with a state-of-the-art method and show significant improvements. Concrete examples are used to illustrate the benefit of our approach compared with established grasp approaches. Finally, we show advantages of the symbiosis between our approach and object recognition." ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
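For reference, the two matrix products involved can be written as follows; the notation is ours (a standard convention), not necessarily the paper's:

```latex
\[
(A \star B)_{ij} = \min_{k}\bigl(A_{ik} + B_{kj}\bigr)
\quad \text{((min,+)- or distance product)},
\qquad
(A \otimes B)_{ij} = \min_{k}\max\bigl(A_{ik},\, B_{kj}\bigr)
\quad \text{(Min-Max product)}.
\]
```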
It is known that, in general, not every scaling-based algorithm can be made strongly polynomial; see, e.g., Hochbaum's work on the allocation problem @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1989262026", "2162960148", "2022049946", "1856442002" ], "abstract": [ "We demonstrate the impossibility of strongly polynomial algorithms for the allocation problem, in the comparison model and in the algebraic tree computation model, that follow from lower bound results. Consequently, there are no strongly polynomial algorithms for nonlinear (concave) separable optimization over a totally unimodular constraint matrix. This is in contrast to the case when the objective is linear. We present scaling-based algorithms that use a greedy algorithm as a subroutine. The algorithms are polynomial for the allocation problem and its extensions and are also optimal for the sample allocation problem and the generalized upper bounds allocation problem, in that the complexity meets the lower bound derived from the comparison model. For other extensions of the allocation problem the scaling-based algorithms presented here are the fastest known. These algorithms are also polynomial time algorithms for solving with e accuracy the allocation problem and its extension in continuous variables.", "We present a deterministic strongly polynomial algorithm that computes the permanent of a nonnegative n × n matrix to within a multiplicative factor of en. To this end we develop the first strongly polynomial-time algorithm for matrix scaling an important nonlinear optimization problem with many applications. Our work suggests a simple new (slow) polynomial time decision algorithm for bipartite perfect matching, conceptually different from classical approaches. ∗Hebrew University. Work supported in part by a grant of the Binational Israel-US Science Foundation. †Hebrew University ‡Hebrew University. Work partially supported by grant 032-7736 from the Israel Academy of Sciences. Part of this work was done during a visit to the Institute for Advanced Study, under the support of a Sloan Foundation grant 96-6-2.", "Combinatorial strongly polynomial algorithms for minimizing submodular functions have been developed by Iwata, Fleischer, and Fujishige (IFF) and by Schrijver. The IFF algorithm employs a scaling scheme for submodular functions, whereas Schrijver's algorithm achieves strongly polynomial bound with the aid of distance labeling. Subsequently, Fleischer and Iwata have described a push relabel version of Schrijver's algorithm to improve its time complexity. This paper combines the scaling scheme with the push relabel framework to yield a faster combinatorial algorithm for submodular function minimization. The resulting algorithm improves over the previously best known bound by essentially a linear factor in the size of the underlying ground set.", "The eigenvalues of a matrix polynomial can be determined classically by solving a generalized eigenproblem for a linearized matrix pencil, for instance by writing the matrix polynomial in companion form. We introduce a general scaling technique, based on tropical algebra, which applies in particular to this companion form. This scaling, which is inspired by an earlier work of Akian, Bapat, and Gaubert, relies on the computation of “tropical roots”. We give explicit bounds, in a typical case, indicating that these roots provide accurate estimates of the order of magnitude of the different eigenvalues, and we show by experiments that this scaling improves the accuracy (measured by normwise backward error) of the computations, particularly in situations in which the data have various orders of magnitude. 
In the case of quadratic polynomial matrices, we recover in this way a scaling due to Fan, Lin, and Van Dooren, which coincides with the tropical scaling when the two tropical roots are equal. If not, the eigenvalues generally split in two groups, and the tropical method leads to making one specific scaling for each of the groups." ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
For undirected graphs with weights in @math , APSP can be solved exactly in time @math @cite_18 @cite_33 @cite_1 @cite_35 , where @math is the matrix multiplication exponent @cite_47 . For directed graphs with weights in @math , Zwick presented an @math -time algorithm that also uses fast matrix multiplication (in fact, recent advances for rectangular matrix multiplication yield slightly stronger bounds @cite_49 @cite_39 ).
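The reason a logarithmic number of distance products suffices here is standard repeated squaring; the following is a textbook identity in our notation, not a detail specific to the cited algorithms:

```latex
\[
D^{(1)}_{ij} = w(i,j), \qquad D^{(\ell)}_{ii} = 0, \qquad
D^{(2\ell)} = D^{(\ell)} \star D^{(\ell)}, \qquad
d(i,j) = D^{(m)}_{ij} \ \text{for any } m \ge n-1,
\]
```

where $D^{(\ell)}_{ij}$ is the minimum weight of a walk from $i$ to $j$ using at most $\ell$ edges. Hence $\lceil\log_2 n\rceil$ distance products suffice, and over weights in $\{1,\dots,M\}$ each product reduces to fast matrix multiplication.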
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_33", "@cite_1", "@cite_39", "@cite_49", "@cite_47" ], "mid": [ "2156047991", "1823654214", "1988067232", "2049500052" ], "abstract": [ "Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an @math -time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an @math -time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an @math -time algorithm @math for producing stretch 3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in @math time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every @math we have @math . We show that every unweighted graph on n vertices has a 2-emulator with @math edges and a 4-emulator with @math edges. These results are asymptotically tight. Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with @math edges and that such a 3-spanner can be built in @math time. We also describe an @math -time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.", "We show that the all pairs shortest paths (APSP) problem for undirected graphs with integer edge weights taken from the range 1, 2, ..., M can be solved using only a logarithmic number of distance products of matrices with elements in the range (1, 2, ..., M). As a result, we get an algorithm for the APSP problem in such graphs that runs in O (Mn sup spl omega ) time, where n is the number of vertices in the input graph, M is the largest edge weight in the graph, and spl omega <2.376 is the exponent of matrix multiplication. This improves, and also simplifies, an O (M sup ( spl omega +1) 2 n sup spl omega ) time algorithm of Galil and Margalit (1997).", "In the recent past, there has been considerable progress in devising algorithms for the all-pairs shortest paths (APSP) problem running in time significantly smaller than the obvious time bound of O(n3). Unfortunately, all the new algorithms are based on fast matrix multiplication algorithms that are notoriously impractical. Our work is motivated by the goal of devising purely combinatorial algorithms that match these improved running times. Our results come close to achieving this goal, in that we present algorithms with a small additive error in the length of the paths obtained. Our algorithms are easy to implement, have the desired property of being combinatorial in nature, and the hidden constants in the running time bound are fairly small. Our main result is an algorithm which solves the APSP problem in unweighted, undirected graphs with an additive error of 2 in time @math . This algorithm returns actual paths and not just the distances. In addition, we give more efficient algorithms with running time @math for the case where we are only required to determine shortest paths between k specified pairs of vertices rather than all pairs of vertices. 
The starting point for all our results is an @math algorithm for distinguishing between graphs of diameter 2 and 4, and this is later extended to obtaining a ratio 2 3 approximation to the diameter in time @math . Unlike in the case of APSP, our results for approximate diameter computation can be extended to the case of directed graphs with arbitrary positive real weights on the edges.", "We present two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms.The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in O(n2+μ) time, where μ satisfies the equation ω(1, μ, 1) = 1 + 2μ and ω(1, μ, 1) is the exponent of the multiplication of an n × nμ matrix by an nμ × n matrix. Currently, the best available bounds on ω(1, μ, 1), obtained by Coppersmith, imply that μ 0 is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most 1 + e. Corresponding paths can also be found efficiently." ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
For approximate APSP on real-valued graphs with weights in @math , an additive @math -approximation computable in time @math is known. More recently, among other results, an algorithm was given that computes every distance @math up to an additive error of @math in time @math . For very small @math , this interpolates between Zwick's fastest exact algorithm and his approximation algorithm @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "1988067232", "2049500052", "2949588463", "1554473710" ], "abstract": [ "In the recent past, there has been considerable progress in devising algorithms for the all-pairs shortest paths (APSP) problem running in time significantly smaller than the obvious time bound of O(n3). Unfortunately, all the new algorithms are based on fast matrix multiplication algorithms that are notoriously impractical. Our work is motivated by the goal of devising purely combinatorial algorithms that match these improved running times. Our results come close to achieving this goal, in that we present algorithms with a small additive error in the length of the paths obtained. Our algorithms are easy to implement, have the desired property of being combinatorial in nature, and the hidden constants in the running time bound are fairly small. Our main result is an algorithm which solves the APSP problem in unweighted, undirected graphs with an additive error of 2 in time @math . This algorithm returns actual paths and not just the distances. In addition, we give more efficient algorithms with running time @math for the case where we are only required to determine shortest paths between k specified pairs of vertices rather than all pairs of vertices. The starting point for all our results is an @math algorithm for distinguishing between graphs of diameter 2 and 4, and this is later extended to obtaining a ratio 2 3 approximation to the diameter in time @math . Unlike in the case of APSP, our results for approximate diameter computation can be extended to the case of directed graphs with arbitrary positive real weights on the edges.", "We present two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms.The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in O(n2+μ) time, where μ satisfies the equation ω(1, μ, 1) = 1 + 2μ and ω(1, μ, 1) is the exponent of the multiplication of an n × nμ matrix by an nμ × n matrix. Currently, the best available bounds on ω(1, μ, 1), obtained by Coppersmith, imply that μ 0 is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most 1 + e. Corresponding paths can also be found efficiently.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses uses labels of asymptotically optimal size, namely @math bits. 
@math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node lables consist of @math bits. This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds.", "We consider the quantum time complexity of the all pairs shortest paths (APSP) problem and some of its variants. The trivial classical algorithm for APSP and most all pairs path problems runs in @math time, while the trivial algorithm in the quantum setting runs in @math time, using Grover search. A major open problem in classical algorithms is to obtain a truly subcubic time algorithm for APSP, i.e. an algorithm running in @math time for constant @math . To approach this problem, many truly subcubic time classical algorithms have been devised for APSP and its variants for structured inputs. Some examples of such problems are APSP in geometrically weighted graphs, graphs with small integer edge weights or a small number of weights incident to each vertex, and the all pairs earliest arrivals problem. In this paper we revisit these problems in the quantum setting and obtain the first nontrivial (i.e. @math time) quantum algorithms for the problems." ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
In this paper we focus on the problem of @math -approximating APSP when @math is close to @math . For @math , the problem is at least as hard as Boolean matrix multiplication @cite_7 and thus requires time @math . However, there are more efficient algorithms for undirected graphs in the regime @math @cite_42 .
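To see why small approximation factors are as hard as Boolean matrix multiplication, here is one standard reduction, reconstructed by us as a sketch (it may differ in details from the argument of @cite_7 ): given Boolean matrices $A, B$, build an undirected graph on vertex sets $I, K, J$ plus one auxiliary vertex $x$, with unit-weight edges $\{i,k\}$ whenever $A_{ik}=1$, unit-weight edges $\{k,j\}$ whenever $B_{kj}=1$, and weight-$2$ edges $\{i,x\}$ and $\{x,j\}$ for all $i \in I$, $j \in J$. Then

```latex
\[
d(i,j) =
\begin{cases}
2 & \text{if } (A \cdot B)_{ij} = 1,\\
4 & \text{otherwise,}
\end{cases}
\]
```

so any $(1+\varepsilon)$-approximation with $\varepsilon < 1$ separates the two cases, since $2(1+\varepsilon) < 4$.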
{ "cite_N": [ "@cite_42", "@cite_7" ], "mid": [ "1988067232", "1823654214", "2156047991", "1970052762" ], "abstract": [ "In the recent past, there has been considerable progress in devising algorithms for the all-pairs shortest paths (APSP) problem running in time significantly smaller than the obvious time bound of O(n3). Unfortunately, all the new algorithms are based on fast matrix multiplication algorithms that are notoriously impractical. Our work is motivated by the goal of devising purely combinatorial algorithms that match these improved running times. Our results come close to achieving this goal, in that we present algorithms with a small additive error in the length of the paths obtained. Our algorithms are easy to implement, have the desired property of being combinatorial in nature, and the hidden constants in the running time bound are fairly small. Our main result is an algorithm which solves the APSP problem in unweighted, undirected graphs with an additive error of 2 in time @math . This algorithm returns actual paths and not just the distances. In addition, we give more efficient algorithms with running time @math for the case where we are only required to determine shortest paths between k specified pairs of vertices rather than all pairs of vertices. The starting point for all our results is an @math algorithm for distinguishing between graphs of diameter 2 and 4, and this is later extended to obtaining a ratio 2 3 approximation to the diameter in time @math . Unlike in the case of APSP, our results for approximate diameter computation can be extended to the case of directed graphs with arbitrary positive real weights on the edges.", "We show that the all pairs shortest paths (APSP) problem for undirected graphs with integer edge weights taken from the range 1, 2, ..., M can be solved using only a logarithmic number of distance products of matrices with elements in the range (1, 2, ..., M). As a result, we get an algorithm for the APSP problem in such graphs that runs in O (Mn sup spl omega ) time, where n is the number of vertices in the input graph, M is the largest edge weight in the graph, and spl omega <2.376 is the exponent of matrix multiplication. This improves, and also simplifies, an O (M sup ( spl omega +1) 2 n sup spl omega ) time algorithm of Galil and Margalit (1997).", "Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an @math -time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an @math -time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an @math -time algorithm @math for producing stretch 3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in @math time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every @math we have @math . We show that every unweighted graph on n vertices has a 2-emulator with @math edges and a 4-emulator with @math edges. These results are asymptotically tight. 
Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with @math edges and that such a 3-spanner can be built in @math time. We also describe an @math -time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.", "The authors have solved the all pairs shortest distances (APSD) problem for graphs with integer edge lengths. Our algorithm is subcubic for edge lengths of small (?M) absolute value. In this paper we show how to transform these algorithms to solve the all pairs shortest paths (APSP), in the same time complexity, up to a polylogarithmic factor. Forn=|V| the number of vertices,Mthe bound on edge length, and?the exponent of matrix multiplication, we get the following results: 1. A directed nonnegative APSP(n, M) algorithm which runs inO(T(n, M)) time, where T(n, m)= 2. An undirected APSP(n, M) algorithm which runs inO(M(?+1) 2n?log(Mn)) time. 3. A general APSP(n, M) algorithm which runs inO((Mn)(3+?) 2)." ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
APSP and APBP can easily be computed in time @math on quantum computers @cite_40 . Subsequently, the first quantum algorithm running in time @math was designed, and it was noted that every problem equivalent to APBP admits a nontrivial @math -time algorithm in the quantum realm.
{ "cite_N": [ "@cite_40" ], "mid": [ "1554473710", "2006290558", "2009948449", "2145860679" ], "abstract": [ "We consider the quantum time complexity of the all pairs shortest paths (APSP) problem and some of its variants. The trivial classical algorithm for APSP and most all pairs path problems runs in @math time, while the trivial algorithm in the quantum setting runs in @math time, using Grover search. A major open problem in classical algorithms is to obtain a truly subcubic time algorithm for APSP, i.e. an algorithm running in @math time for constant @math . To approach this problem, many truly subcubic time classical algorithms have been devised for APSP and its variants for structured inputs. Some examples of such problems are APSP in geometrically weighted graphs, graphs with small integer edge weights or a small number of weights incident to each vertex, and the all pairs earliest arrivals problem. In this paper we revisit these problems in the quantum setting and obtain the first nontrivial (i.e. @math time) quantum algorithms for the problems.", "Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of @math can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a quantum Turing machine (QTM) in time @math . We also show that relative to a permutation oracle chosen uniformly at random with probability 1 the class @math cannot be solved on a QTM in time @math . The former bound is tight since recent work of Grover [in Proc. @math th Annual ACM Symposium Theory Comput. , 1996] shows how to accept the class @math relative to any oracle on a quantum computer in time @math .", "We construct a black box graph traversal problem that can be solved exponentially faster on a quantum computer than on a classical computer. The quantum algorithm is based on a continuous time quantum walk, and thus employs a different technique from previous quantum algorithms based on quantum Fourier transforms. We show how to implement the quantum walk efficiently in our black box setting. We then show how this quantum walk solves our problem by rapidly traversing a graph. Finally, we prove that no classical algorithm can solve the problem in subexponential time.", "The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that the se systems perform as they should, if we cannot efficiently comp ute predictions for their behavior? Vazirani has asked [Vaz07]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? 
In cryptographic settings, an untruste d future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To provide answers to these questions, we define Quantum Prov er Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computati onal capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: ”Any language in BQP has a QPIP, and moreover, a fault tolerant one”. We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP however, is not fault tolerant. Our � �" ] }
1907.11078
2920771172
Zwick's @math -approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time @math , where @math is the exponent of matrix multiplication and @math denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor @math in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of @math ; this is called strongly polynomial. Our main results are as follows. - We design approximation schemes in strongly polynomial time @math for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. - For APSP on directed graphs we design an approximation scheme in strongly polynomial time @math . This is significantly faster than the best exact algorithm. - We explain why our approximation scheme for APSP on directed graphs has a worse exponent than @math : Any improvement over our exponent @math would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent.
It is also worth mentioning that there are efficient algorithms for products in other algebraic structures, e.g., dominance product, @math -product, @math -product (see, e.g., @cite_31 ).
{ "cite_N": [ "@cite_31" ], "mid": [ "2072647561", "2028763350", "2145005214", "2171900177" ], "abstract": [ "It is demonstrated that power-efficient software also requires simplicity and the use of elementary data structures in addition to asymptotically optimal CPU and memory requirements. Though in the past few decades much effort has been devoted to reporting all k intersecting pairs in a planar set of n iso-oriented rectangles, all the known algorithms using elementary data structures, such as linked lists, are either not optimal, report some intersections repeatedly or fail to report some altogether. A simpler algorithm is proposed that uses only linear arrays and that takes O(n log n + k) time and O(n) space, which are the best possible under the algebraic RAM model of computation. The algorithm is designed for systems with limited resources, such as mobile 3D graphics, and can be implemented in less than 100 lines of Java code.", "Competitive numerical algorithms for solving partial differential equations have to work with the most efficient numerical methods like multigrid and adaptive grid refinement and thus with hierarchical data structures. Unfortunately, in most implementations, hierarchical data—typically stored in trees—cause a nonnegligible overhead in data access. To overcome this quandary—numerical efficiency versus efficient implementation—our algorithm uses space-filling curves to build up data structures which are processed linearly. In fact, the only kind of data structure used in our implementation is stacks. Thus, data access becomes very fast—even faster than the common access to nonhierarchical data stored in matrices—and, in particular, cache misses are reduced considerably. Furthermore, the implementation of multigrid cycles and or higher order discretizations as well as the parallelization of the whole algorithm become very easy and straightforward on these data structures.", "In recent years it has been shown that for many linear algebra operations it is possible to create families of algorithms following a very systematic procedure. We do not refer to the fine tuning of a known algorithm, but to a methodology for the actual generation of both algorithms and routines to solve a given target matrix equation. Although systematic, the methodology relies on complex algebraic manipulations and non-obvious pattern matching, making the procedure challenging to be performed by hand, our goal is the development of a fully automated system that from the sole description of a target equation creates multiple algorithms and routines. We present CL1ck, a symbolic system written in Mathematica, that starts with an equation, decomposes it into multiple equations, and returns a set of loop-invariants for the algorithms -- yet to be generated -- that will solve the equation. In a successive step each loop-invariant is then mapped to its corresponding algorithm and routine. For a large class of equations, the methodology generates known algorithms as well as many previously unknown ones. Most interestingly, the methodology unifies algorithms traditionally developed in isolation. As an example, the five well known algorithms for the LU factorization are for the first time unified under a common root.", "A simple partitioning algorithm for merging two disjoint linearly ordered sets is given, and an upper bound on the average number of comparisons required is established. 
The upper bound is @math , where n is the number of elements in the larger of the two sets, m the number of the smaller, and @math . An immediate corollary is that any sorting problem can be done with an average number of comparisons within @math of the information theoretic bound using repeated merges; it does not matter what the merging order used is. Although the provable bound is @math over the lower bound, computations indicate that the algorithm will asymptotically make only @math more comparisons than the lower bound. The algorithm is compared with the Hwang-Lin algorithm, and modifications to improve average efficiency of this well known algorithm are given." ] }
1907.10931
2963645312
Nonlinear image registration continues to be a fundamentally important tool in medical image analysis. Diagnostic tasks, image-guided surgery and radiotherapy as well as motion analysis all rely heavily on accurate intra-patient alignment. Furthermore, inter-patient registration enables atlas-based segmentation or landmark localisation and shape analysis. When labelled scans are scarce and anatomical differences large, conventional registration has often remained superior to deep learning methods that have so far mainly dealt with relatively small or low-complexity deformations. We address this shortcoming by leveraging ideas from probabilistic dense displacement optimisation that has excelled in many registration tasks with large deformations. We propose to design a network with approximate min-convolutions and mean field inference for differentiable displacement regularisation within a discrete weakly-supervised registration setting. By employing these meaningful and theoretically proven constraints, our learnable registration algorithm contains very few trainable weights (primarily for feature extraction) and is easier to train with few labelled scans. It is very fast in training and inference and achieves state-of-the-art accuracies for the challenging inter-patient registration of abdominal CT outperforming previous deep learning approaches by 15% Dice overlap.
* Contributions We propose a new learning model for DLIR that better leverages the advantages of probabilistic dense displacement sampling by introducing strong regularisation with differentiable constraints that explicitly considers the 6D nature of the problem. We hence decouple convolutional feature learning from the fitting of a spatial transformation using mean-field inference for regularisation @cite_16 @cite_6 and approximate min-convolutions @cite_17 for computing inter-label compatibilities. Our feature extractor uses 3D deformable convolutions @cite_13 and is very lightweight. To our knowledge this is the first approach that combines discrete DLIR with the differentiable use of mean-field regularisation. In contrast to previous work, our model requires fewer trainable weights, captures larger deformations and can be trained from few labelled scans to high accuracy. We also introduce a new non-local label loss for improved guidance instead of the more widely used spatial transformer based loss.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_6", "@cite_17" ], "mid": [ "2949257576", "2462462929", "2798785261", "2553156677" ], "abstract": [ "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL", "In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.", "Convolutional networks (ConvNets) have achieved great successes in various challenging vision tasks. However, the performance of ConvNets would degrade when encountering the domain shift. The domain adaptation is more significant while challenging in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating the medical data is especially expensive, the supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentations. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with source domain feature space. 
A domain critic module (DCM) is set up for discriminating the feature space of both domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained with MRI images to unpaired CT data for cardiac structures segmentations, and achieved very promising results.", "Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for its success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multilabel classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback–Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which help prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks." ] }
1907.11117
2963229777
This work introduces verb-only representations for both recognition and retrieval of visual actions, in video. Current methods neglect legitimate semantic ambiguities between verbs, instead choosing unambiguous subsets of verbs along with objects to disambiguate the actions. We instead propose multiple verb-only labels, which we learn through hard or soft assignment as a regression. This enables learning a much larger vocabulary of verbs, including contextual overlaps of these verbs. We collect multi-verb annotations for three action video datasets and evaluate the verb-only labelling representations for action recognition and cross-modal retrieval (video-to-text and text-to-video). We demonstrate that multi-label verb-only representations outperform conventional single verb labels. We also explore other benefits of a multi-verb representation including cross-dataset retrieval and verb type (manner and result verb types) retrieval.
Action Recognition in Videos Video Action Recognition datasets are commonly annotated with a reduced set of semantically distinct verb labels @cite_26 @cite_18 @cite_54 @cite_25 @cite_24 @cite_29 @cite_31 @cite_43 . Only in EPIC-Kitchens @cite_26 , verb labels are collected from narrations with an open vocabulary leading to overlapping labels, which are then manually clustered into unambiguous classes. Ambiguity and overlaps in verbs have been noted in @cite_51 @cite_48 . Our prior work @cite_51 uses the verb hierarchy in WordNet @cite_1 synsets to reduce ambiguity. We note how annotators were confused, and often could not distinguish between the different verb meanings. Khamis and Davis @cite_48 used multi-verb labels in action recognition, on a small set of (10) verbs. They jointly learn multi-label classification and label correlation, using a bi-linear approach, allowing an actor to be in a state of performing multiple actions such as walking and talking. This work is the closest to ours in motivation; however, their approach uses hard assignment of verbs, and does not address single-verb ambiguity, assuming each verb to be non-ambiguous. To our knowledge, no other work has explored multi-label verb-only representations of actions in video.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_48", "@cite_54", "@cite_29", "@cite_1", "@cite_24", "@cite_43", "@cite_31", "@cite_51", "@cite_25" ], "mid": [ "2002706836", "2108710284", "2949594863", "2949827582" ], "abstract": [ "Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper, we present a novel approach for robust and selective STIP detection, by applying surround suppression combined with local and temporal constraints. This new method is significantly different from existing STIP detection techniques and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-video words (BoV) model of local N-jet features to build a vocabulary of visual-words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on popular benchmark datasets (KTH and Weizmann), more challenging datasets of complex scenes with background clutter and camera motion (CVC and CMU), movie and YouTube video clips (Hollywood 2 and YouTube), and complex scenes with multiple actors (MSR I and Multi-KTH), validates our approach and show state-of-the-art performance. Due to the unavailability of ground truth action annotation data for the Multi-KTH dataset, we introduce an actor specific spatio-temporal clustering of STIPs to address the problem of automatic action annotation of multiple simultaneous actors. Additionally, we perform cross-data action recognition by training on source datasets (KTH and Weizmann) and testing on completely different and more challenging target datasets (CVC, CMU, MSR I and Multi-KTH). This documents the robustness of our proposed approach in the realistic scenario, using separate training and test datasets, which in general has been a shortcoming in the performance evaluation of human action recognition techniques.", "We are given a set of video clips, each one annotated with an ordered list of actions, such as “walk” then “sit” then “answer phone” extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies.", "We are given a set of video clips, each one annotated with an ordered list of actions, such as \"walk\" then \"sit\" then \"answer phone\" extracted from, for example, the associated text script. 
We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies.", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6 mAP, underscoring the need for developing new approaches for video understanding." ] }
1907.11117
2963229777
This work introduces verb-only representations for both recognition and retrieval of visual actions, in video. Current methods neglect legitimate semantic ambiguities between verbs, instead choosing unambiguous subsets of verbs along with objects to disambiguate the actions. We instead propose multiple verb-only labels, which we learn through hard or soft assignment as a regression. This enables learning a much larger vocabulary of verbs, including contextual overlaps of these verbs. We collect multi-verb annotations for three action video datasets and evaluate the verb-only labelling representations for action recognition and cross-modal retrieval (video-to-text and text-to-video). We demonstrate that multi-label verb-only representations outperform conventional single verb labels. We also explore other benefits of a multi-verb representation including cross-dataset retrieval and verb type (manner and result verb types) retrieval.
Action Retrieval Distinct from recognition, cross-modal retrieval approaches have been proposed for visual actions both in images @cite_6 @cite_21 @cite_30 and videos @cite_49 @cite_15 @cite_27 . These works focus on instance retrieval: given a caption, can the corresponding video/image be retrieved, and vice versa. This is different from our attempt to retrieve similar actions rather than only the corresponding video/caption. Only Hahn et al. @cite_44 train an embedding space for videos and verbs only, using word2vec as the target space. They use verbs from UCF101 @cite_52 and HMDB51 @cite_39 in addition to verb-noun classes from Kinetics @cite_40 . These are coarser actions ( diving vs. running ) and as such have little overlap, allowing the target space to perform well on unseen actions.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_52", "@cite_6", "@cite_39", "@cite_44", "@cite_27", "@cite_40", "@cite_49", "@cite_15" ], "mid": [ "2908138876", "1982795953", "2551975789", "2770325561" ], "abstract": [ "We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.", "Recent research in video retrieval has been successful at finding videos when the query consists of tens or hundreds of sample relevant videos for training supervised models. Instead, we investigate unsupervised zero-shot retrieval where no training videos are provided: a query consists only of a text statement. For retrieval, we use text extracted from images in the videos, text recognized in the speech of its audio track, as well as automatically detected semantically meaningful visual video concepts identified with widely varying confidence in the videos. In this work we introduce a new method for automatically identifying relevant concepts given a text query using the Markov Random Field (MRF) retrieval framework. We use source expansion to build rich textual representations of semantic video concepts from large external sources such as the web. We find that concept-based retrieval significantly outperforms text based approaches in recall. Using an evaluation derived from the TRECVID MED'11 track, we present early results that an approach using multi-modal fusion can compensate for inadequacies in each modality, resulting in substantial effectiveness gains. With relevance feedback, our approach provides additional improvements of over 50 .", "By extracting spatial and temporal characteristics in one network, the two-stream ConvNets can achieve the state-of-the-art performance in action recognition. However, such a framework typically suffers from the separately processing of spatial and temporal information between the two standalone streams and is hard to capture long-term temporal dependence of an action. More importantly, it is incapable of finding the salient portions of an action, say, the frames that are the most discriminative to identify the action. To address these problems, a oint etwork based ttention (JNA) is proposed in this study. We find that the fully-connected fusion, branch selection and spatial attention mechanism are totally infeasible for action recognition. Thus in our joint network, the spatial and temporal branches share some information during the training stage. We also introduce an attention mechanism on the temporal domain to capture the long-term dependence meanwhile finding the salient portions. Extensive experiments are conducted on two benchmark datasets, UCF101 and HMDB51. 
Experimental results show that our method can improve the action recognition performance significantly and achieves the state-of-the-art results on both datasets.", "Textual-visual cross-modal retrieval has been a hot research topic in both computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for the cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieve the state-of-the-art cross-modal retrieval results on MSCOCO dataset." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to the presence of visual artifacts such as color distortion, loss of details and uneven exposure, etc. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that the reason why existing methods induce the artifacts is that they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. Particularly, we cast the underexposed photo enhancement as PBS-constrained illumination estimation optimization, where the PBS is defined as three constraints for estimating the illumination that can recover the enhancement results with normal exposure, distinct contrast, clear details and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons as well as the user study demonstrate the superiority of our method over the state-of-the-art methods.
Mapping pixel intensities with sigmoid functions is another commonly-used way to enhance photos. A well-known representative is Gamma Correction, which expands the dynamic range via a power-law function. As globally applying sigmoid mapping may generate visually distorted results, existing methods usually perform locally adaptive mapping. For instance, Bennett and McMillan @cite_5 decomposed the input image into base and detail layers, and applied different mappings for the two layers to preserve the image details, while Yuan and Sun @cite_20 segmented the image into subregions and computed luminance-aware detail-preserving mapping for each subregion. Zhang et al. @cite_15 created multiple tone mapped versions for the input image and fused them into a well-exposed image. Since finding locally optimal sigmoid mappings and ensuring globally smooth transition are difficult, these methods often fail for complex images.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2804654955", "2412926690", "1997701791", "2052094314" ], "abstract": [ "Because of the powerful learning capability of deep neural networks, counting performance via density map estimation has improved significantly during the past several years. However, it is still very challenging due to severe occlusion, large scale variations, and perspective distortion. Scale variations (from image to image) coupled with perspective distortion (within one image) result in huge scale changes of the object size. Earlier methods based on convolutional neural networks (CNN) typically did not handle this scale variation explicitly, until Hydra-CNN and MCNN. MCNN uses three columns, each with different filter sizes, to extract features at different scales. In this paper, in contrast to using filters of different sizes, we utilize an image pyramid to deal with scale variations. It is more effective and efficient to resize the input fed into the network, as compared to using larger filter sizes. Secondly, we adaptively fuse the predictions from different scales (using adaptively changing per-pixel weights), which makes our method adapt to scale changes within an image. The adaptive fusing is achieved by generating an across-scale attention map, which softly selects a suitable scale for each pixel, followed by a 1x1 convolution. Extensive experiments on three popular datasets show very compelling results.", "We propose a straightforward and efficient fusion-based method for enhancing weakly illumination images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the first decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions such as backlighting, non-uniform illumination and nighttime can be enhanced. HighlightsA fusion-based method for enhancing various weakly illuminated images is proposed.The proposed method requires only one input to obtain the enhanced image.Different mature image processing techniques can be blended in our framework.Our method has an efficient computation time for practical applications.", "This article presents a new, unified technique to perform general edge-sensitive editing operations on n-dimensional images and videos efficiently. The first contribution of the article is the introduction of a Generalized Geodesic Distance Transform (GGDT), based on soft masks. This provides a unified framework to address several edge-aware editing operations. Diverse tasks such as denoising and nonphotorealistic rendering are all dealt with fundamentally the same, fast algorithm. 
Second, a new Geodesic Symmetric Filter (GSF) is presented which imposes contrast-sensitive spatial smoothness into segmentation and segmentation-based editing tasks (cutout, object highlighting, colorization, panorama stitching). The effect of the filter is controlled by two intuitive, geometric parameters. In contrast to existing techniques, the GSF filter is applied to real-valued pixel likelihoods (soft masks), thanks to GGDTs and it can be used for both interactive and automatic editing. Complex object topologies are dealt with effortlessly. Finally, the parallelism of GGDTs enables us to exploit modern multicore CPU architectures as well as powerful new GPUs, thus providing great flexibility of implementation and deployment. Our technique operates on both images and videos, and generalizes naturally to n-dimensional data. The proposed algorithm is validated via quantitative and qualitative comparisons with existing, state-of-the-art approaches. Numerous results on a variety of image and video editing tasks further demonstrate the effectiveness of our method.", "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure hightransitiontiltsillustration). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to the presence of visual artifacts such as color distortion, loss of details and uneven exposure, etc. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that the reason why existing methods induce the artifacts is that they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. Particularly, we cast the underexposed photo enhancement as PBS-constrained illumination estimation optimization, where the PBS is defined as three constraints for estimating the illumination that can recover the enhancement results with normal exposure, distinct contrast, clear details and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons as well as the user study demonstrate the superiority of our method over the state-of-the-art methods.
This kind of method is built upon the assumption that an underexposed image is the pixel-wise product of the expected enhancement result and a single-channel illumination map. In this fashion, the enhancement problem can be treated as an illumination estimation problem. Jobson et al. @cite_3 made an early attempt at this problem, but their results often look unnatural due to frequently appearing artifacts such as loss of details, color distortion, and uneven exposure. Subsequent methods in this category focus on improving the results @cite_12 @cite_21 @cite_28 @cite_45 @cite_8 . However, they may also fail, especially for non-uniformly illuminated underexposed images. Our method also belongs to this category. However, by maintaining the proposed PBS, our method is able to robustly generate visually pleasing results free of the visual artifacts encountered by previous methods (see Fig. , Fig. and ).
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_21", "@cite_3", "@cite_45", "@cite_12" ], "mid": [ "2412926690", "2343431701", "2754419751", "2149550213" ], "abstract": [ "We propose a straightforward and efficient fusion-based method for enhancing weakly illumination images that uses several mature image processing techniques. First, we employ an illumination estimating algorithm based on morphological closing to decompose an observed image into a reflectance image and an illumination image. We then derive two inputs that represent luminance-improved and contrast-enhanced versions of the first decomposed illumination using the sigmoid function and adaptive histogram equalization. Designing two weights based on these inputs, we produce an adjusted illumination by fusing the derived inputs with the corresponding weights in a multi-scale fashion. Through a proper weighting and fusion strategy, we blend the advantages of different techniques to produce the adjusted illumination. The final enhanced image is obtained by compensating the adjusted illumination back to the reflectance. Through this synthesis, the enhanced image represents a trade-off among detail enhancement, local contrast improvement and preserving the natural feel of the image. In the proposed fusion-based framework, images under different weak illumination conditions such as backlighting, non-uniform illumination and nighttime can be enhanced. HighlightsA fusion-based method for enhancing various weakly illuminated images is proposed.The proposed method requires only one input to obtain the enhanced image.Different mature image processing techniques can be blended in our framework.Our method has an efficient computation time for practical applications.", "Underexposed video enhancement aims at revealing hidden details that are barely noticeable in LDR video frames with noise. Previous work typically relies on a single heuristic tone mapping curve to expand the dynamic range, which inevitably leads to uneven exposure and visual artifacts. In this paper, we present a novel approach for underexposed video enhancement using an efficient perception-driven progressive fusion. For an input underexposed video, we first remap each video frame using a series of tentative tone mapping curves to generate an multi-exposure image sequence that contains different exposed versions of the original video frame. Guided by some visual perception quality measures encoding the desirable exposed appearance, we locate all the best exposed regions from multi-exposure image sequences and then integrate them into a well-exposed video in a temporally consistent manner. Finally, we further perform an effective texture-preserving spatio-temporal filtering on this well-exposed video to obtain a high-quality noise-free result. Experimental results have shown that the enhanced video exhibits uniform exposure, brings out noticeable details, preserves temporal coherence, and avoids visual artifacts. Besides, we demonstrate applications of our approach to a set of problems including video dehazing, video denoising and HDR video reconstruction.", "The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. 
In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.", "A method was recently devised for the recovery of an invariant image from a 3-band colour image. The invariant image, originally 1D greyscale but here derived as a 2D chromaticity, is independent of lighting, and also has shading removed: it forms an intrinsic image that may be used as a guide in recovering colour images that are independent of illumination conditions. Invariance to illuminant colour and intensity means that such images are free of shadows, as well, to a good degree. The method devised finds an intrinsic reflectivity image based on assumptions of Lambertian reflectance, approximately Planckian lighting, and fairly narrowband camera sensors. Nevertheless, the method works well when these assumptions do not hold. A crucial piece of information is the angle for an “invariant direction” in a log-chromaticity space. To date, we have gleaned this information via a preliminary calibration routine, using the camera involved to capture images of a colour target under different lights. In this paper, we show that we can in fact dispense with the calibration step, by recognizing a simple but important fact: the correct projection is that which minimizes entropy in the resulting invariant image. To show that this must be the case we first consider synthetic images, and then apply the method to real images. We show that not only does a correct shadow-free image emerge, but also that the angle found agrees with that recovered from a calibration. As a result, we can find shadow-free images for images with unknown camera, and the method is applied successfully to remove shadows from unsourced imagery." ] }
1907.10992
2963178965
This paper addresses the problem of enhancing underexposed photos. Existing methods have tackled this problem from many different perspectives and achieved remarkable progress. However, they may fail to produce satisfactory results due to the presence of visual artifacts such as color distortion, loss of details and uneven exposure, etc. To obtain high-quality results free of these artifacts, we present a novel underexposed photo enhancement approach in this paper. Our main observation is that the reason why existing methods induce the artifacts is that they break a perceptual consistency between the input and the enhanced output. Based on this observation, an effective criterion, called perceptually bidirectional similarity (PBS), is proposed for preserving the perceptual consistency during enhancement. Particularly, we cast the underexposed photo enhancement as PBS-constrained illumination estimation optimization, where the PBS is defined as three constraints for estimating the illumination that can recover the enhancement results with normal exposure, distinct contrast, clear details and vivid color. To make our method more efficient and scalable to high-resolution images, we introduce a sampling-based strategy for accelerating the illumination estimation. Moreover, we extend our method to handle underexposed videos. Qualitative and quantitative comparisons as well as the user study demonstrate the superiority of our method over the state-of-the-art methods.
An increasing amount of effort focuses on investigating learning-based enhancement methods since the pioneering work of Bychkovsky et al. @cite_43 , which provides the first and largest MIT-Adobe FiveK dataset consisting of input/output image pairs for tone adjustment. Yan et al. @cite_23 achieved automatic color enhancement by tackling a learning-to-rank problem, while Yan et al. @cite_14 enabled semantic-aware image enhancement. Recently, Lore et al. @cite_36 presented a deep autoencoder-based approach for enhancing low-light images. Gharbi et al. @cite_4 proposed bilateral learning to enable real-time image enhancement, while Chen et al. @cite_39 designed an unpaired learning model for image enhancement based on two-way generative adversarial networks (GANs). The main limitation of learning-based methods is that they typically do not generalize well to images that do not exist in the training datasets.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_36", "@cite_39", "@cite_43", "@cite_23" ], "mid": [ "2798844427", "2949212125", "2964338366", "2739540493" ], "abstract": [ "This paper proposes an unpaired learning method for image enhancement. Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective. The global U-Net acts as the generator in our GAN model. Second, we improve Wasserstein GAN (WGAN) with an adaptive weighting scheme. With this scheme, training converges faster and better, and is less sensitive to parameters than WGAN-GP. Finally, we propose to use individual batch normalization layers for generators in two-way GANs. It helps generators better adapt to their own input distributions. All together, they significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.", "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "Recently, Image-to-Image Translation (IIT) has achieved great progress in image style transfer and semantic context manipulation for images. However, existing approaches require exhaustively labelling training data, which is labor demanding, difficult to scale up, and hard to adapt to a new domain. To overcome such a key limitation, we propose Sparsely Grouped Generative Adversarial Networks (SG-GAN) as a novel approach that can translate images in sparsely grouped datasets where only a few train samples are labelled. Using a one-input multi-output architecture, SG-GAN is well-suited for tackling multi-task learning and sparsely grouped learning tasks. The new model is able to translate images among multiple groups using only a single trained model. To experimentally validate the advantages of the new model, we apply the proposed method to tackle a series of attribute manipulation tasks for facial images as a case study. 
Experimental results show that SG-GAN can achieve comparable results with state-of-the-art methods on adequately labelled datasets while attaining a superior image translation quality on sparsely grouped datasets is available at https: github.com zhangqianhui SGGAN-tensorflow..", "Despite the promising results on paired unpaired image-to-image translation achieved by Generative Adversarial Networks (GANs), prior works often only transfer the low-level information (e.g. color or texture changes), but fail to manipulate high-level semantic meanings (e.g., geometric structure or content) of different object regions. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, aiming at modifying the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow ( )sheep, motor ( )bicycle, cat ( )dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective which is able to perform all types of semantic translations with one category-conditional generator. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Extensive qualitative and quantitative experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs." ] }
1907.10827
2963935048
Deep reinforcement learning has achieved great successes in recent years, but there are still open challenges, such as convergence to locally optimal policies and sample inefficiency. In this paper, we contribute a novel self-supervised auxiliary task, i.e., Terminal Prediction (TP), estimating temporal closeness to terminal states for episodic tasks. The intuition is to help representation learning by letting the agent predict how close it is to a terminal state, while learning its control policy. Although TP could be integrated with multiple algorithms, this paper focuses on Asynchronous Advantage Actor-Critic (A3C) and demonstrating the advantages of A3C-TP. Our extensive evaluation includes: a set of Atari games, the BipedalWalker domain, and a mini version of the recently proposed multi-agent Pommerman game. Our results on Atari games and the BipedalWalker domain suggest that A3C-TP outperforms standard A3C in most of the tested domains and in others it has similar performance. In Pommerman, our proposed method provides significant improvement both in learning efficiency and converging to better policies against different opponents.
Reinforcement learning approaches mainly fall under three categories: value-based methods such as Q-learning @cite_1 or Deep Q-Network @cite_6 ; policy-based methods such as REINFORCE @cite_26 ; and a combination of value- and policy-based techniques, i.e., actor-critic methods @cite_25 . In particular, in the last category several distributed actor-critic based DRL algorithms have been recently proposed @cite_22 . One notable example is A3C (Asynchronous Advantage Actor-Critic) @cite_16 , an algorithm that employs an asynchronous training scheme (using multiple CPU cores) for efficiency.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_1", "@cite_6", "@cite_16", "@cite_25" ], "mid": [ "2260756217", "2964043796", "2781726626", "2962902376" ], "abstract": [ "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.", "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. 
In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds." ] }
1907.10827
2963935048
Deep reinforcement learning has achieved great successes in recent years, but there are still open challenges, such as convergence to locally optimal policies and sample inefficiency. In this paper, we contribute a novel self-supervised auxiliary task, i.e., Terminal Prediction (TP), estimating temporal closeness to terminal states for episodic tasks. The intuition is to help representation learning by letting the agent predict how close it is to a terminal state, while learning its control policy. Although TP could be integrated with multiple algorithms, this paper focuses on Asynchronous Advantage Actor-Critic (A3C) and demonstrates the advantages of A3C-TP. Our extensive evaluation includes a set of Atari games, the BipedalWalker domain, and a mini version of the recently proposed multi-agent Pommerman game. Our results on Atari games and the BipedalWalker domain suggest that A3C-TP outperforms standard A3C in most of the tested domains and has similar performance in the others. In Pommerman, our proposed method provides significant improvement both in learning efficiency and in converging to better policies against different opponents.
Another work related to ours is the UNREAL framework @cite_22 , which is built on top of A3C with several refinements and integrations. In particular, UNREAL proposes to learn a reward-prediction task, besides a pixel-control task, to speed up learning by improving representation learning. In contrast to on-policy A3C, UNREAL uses an experience replay buffer that is sampled with higher priority given to positively rewarded interactions to improve the critic network. Our method, A3C-TP, differs from UNREAL in several ways: (i) we do not introduce the additional critic improvement step, to better isolate the gain of our auxiliary task over vanilla A3C; (ii) even though we also integrate an auxiliary task, we keep the resulting method on-policy, with minimal refinements and without an experience replay buffer, which might require correction for stale experience data; (iii) UNREAL's reward prediction requires class balancing of observed rewards in an off-policy fashion depending on the game's reward sparsity and distribution, whereas TP is balanced automatically, can be applied within on-policy DRL methods, and generalizes better for episodic tasks independently of the domain-specific reward distribution.
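As a concrete illustration of the auxiliary task discussed above, the hedged sketch below adds a terminal-prediction head that regresses how close each state is to the episode's end. The linear target definition (t / T) and the weighting of the auxiliary term are assumptions made for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def terminal_prediction_loss(tp_predictions, episode_length):
    # tp_predictions: (T,) outputs of an auxiliary head on the shared trunk.
    # Targets grow linearly from ~0 at the episode start to 1 at the terminal
    # state, so no class balancing is needed, in contrast to reward prediction.
    T = episode_length
    targets = torch.arange(1, T + 1, dtype=torch.float32) / T
    return F.mse_loss(tp_predictions, targets)

# total_loss = a3c_loss + aux_weight * terminal_prediction_loss(preds, T)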
{ "cite_N": [ "@cite_22" ], "mid": [ "2869375357", "1990671169", "2807340089", "2949257576" ], "abstract": [ "We present research using the latest reinforcement learning algorithm for end-to-end driving without any mediated perception (object recognition, scene understanding). The newly proposed reward and learning strategies lead together to faster convergence and more robust driving using only RGB image from a forward facing camera. An Asynchronous Actor Critic (A3C) framework is used to learn the car control in a physically and graphically realistic rally game, with the agents evolving simultaneously on tracks with a variety of road structures (turns, hills), graphics (seasons, location) and physics (road adherence). A thorough evaluation is conducted and generalization is proven on unseen tracks and using legal speed limits. Open loop tests on real sequences of images show some domain adaption capability of our method.", "HighlightsWe integrate user appraisals in a POMDP-based dialogue manager procedure.We employ additional socially-inspired rewards in a RL setup to guide the learning.A unified framework for speeding up the policy optimisation and user adaptation.We consider a potential-based reward shaping with a sample efficient RL algorithm.Evaluated using both user simulator (information retrieval) and user trials (HRI). This paper investigates some conditions under which polarized user appraisals gathered throughout the course of a vocal interaction between a machine and a human can be integrated in a reinforcement learning-based dialogue manager. More specifically, we discuss how this information can be cast into socially-inspired rewards for speeding up the policy optimisation for both efficient task completion and user adaptation in an online learning setting. For this purpose a potential-based reward shaping method is combined with a sample efficient reinforcement learning algorithm to offer a principled framework to cope with these potentially noisy interim rewards. The proposed scheme will greatly facilitate the system's development by allowing the designer to teach his system through explicit positive negative feedbacks given as hints about task progress, in the early stage of training. At a later stage, the approach will be used as a way to ease the adaptation of the dialogue policy to specific user profiles. Experiments carried out using a state-of-the-art goal-oriented dialogue management framework, the Hidden Information State (HIS), support our claims in two configurations: firstly, with a user simulator in the tourist information domain (and thus simulated appraisals), and secondly, in the context of man-robot dialogue with real user trials.", "We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. 
By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.", "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL" ] }
1907.10628
2963606129
Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution-based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or using traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator could suffice to obtain the distribution-based discriminator. Specifically, we propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution, and the corresponding reverse gradients are used to align the source and target feature representations. Detailed results and a thorough ablation analysis show that our model outperforms state-of-the-art methods.
A large number of methods have been proposed to tackle the domain adaptation problem. The basic common structure is the Siamese architecture @cite_35 with two streams representing the source and target models, trained with a combination of a classification loss and either a discrepancy loss or an adversarial loss. The classification loss depends on the labeled source data, while the discrepancy loss reduces the shift between the two domains. A discrepancy-based deep learning method is deep domain confusion (DDC) @cite_43 , which minimizes the maximum mean discrepancy (MMD) between the source and the target, computed on a single fully connected (FC) layer of the source and target feature extractor networks. This approach is further extended by the deep adaptation network (DAN) @cite_40 . Recently, a number of other methods have been proposed that use domain discrepancy @cite_42 @cite_54 @cite_23 @cite_52 @cite_48 @cite_45 @cite_22 @cite_14 . Similar ideas have also been applied in vision-and-language work @cite_55 @cite_17 .
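To make the discrepancy idea concrete, the sketch below computes a (biased) squared MMD estimate between source and target features under a single RBF kernel, the kind of term DDC minimizes on an FC layer; DAN extends this to multiple kernels and layers. The single-kernel choice and bandwidth are illustrative assumptions.

import torch

def rbf_mmd2(x, y, sigma=1.0):
    # x: (n, d) source features; y: (m, d) target features.
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# total = classification_loss + lam * rbf_mmd2(feat_src, feat_tgt)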
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_22", "@cite_48", "@cite_54", "@cite_42", "@cite_55", "@cite_52", "@cite_43", "@cite_40", "@cite_45", "@cite_23", "@cite_17" ], "mid": [ "2901021011", "2767179670", "2607350342", "2557626841" ], "abstract": [ "Unsupervised domain adaptation aims to mitigate the domain shift when transferring knowledge from a supervised source domain to an unsupervised target domain. Adversarial Feature Alignment has been successfully explored to minimize the domain discrepancy. However, existing methods are usually struggling to optimize mixed learning objectives and vulnerable to negative transfer when two domains do not share the identical label space. In this paper, we empirically reveal that the erratic discrimination of target domain mainly reflects in its much lower feature norm value with respect to that of the source domain. We present a non-parametric Adaptive Feature Norm AFN approach, which is independent of the association between label spaces of the two domains. We demonstrate that adapting feature norms of source and target domains to achieve equilibrium over a large range of values can result in significant domain transfer gains. Without bells and whistles but a few lines of code, our method largely lifts the discrimination of target domain (23.7 from the Source Only in VisDA2017) and achieves the new state of the art under the vanilla setting. Furthermore, as our approach does not require to deliberately align the feature distributions, it is robust to negative transfer and can outperform the existing approaches under the partial setting by an extremely large margin (9.8 on Office-Home and 14.1 on VisDA2017). Code is available at this https URL. We are responsible for the reproducibility of our method.", "In multimedia analysis, the task of domain adaptation is to adapt the feature representation learned in the source domain with rich label information to the target domain with less or even no label information. Significant research endeavors have been devoted to aligning the feature distributions between the source and the target domains in the top fully connected layers based on unsupervised DNN-based models. However, the domain adaptation has been arbitrarily constrained near the output ends of the DNN models, which thus brings about inadequate knowledge transfer in DNN-based domain adaptation process, especially near the input end. We develop an attention transfer process for convolutional domain adaptation. The domain discrepancy, measured in correlation alignment loss, is minimized on the second-order correlation statistics of the attention maps for both source and target domains. Then we propose Deep Unsupervised Convolutional Domain Adaptation DUCDA method, which jointly minimizes the supervised classification loss of labeled source data and the unsupervised correlation alignment loss measured on both convolutional layers and fully connected layers. The multi-layer domain adaptation process collaborately reinforces each individual domain adaptation component, and significantly enhances the generalization ability of the CNN models. Extensive cross-domain object classification experiments show DUCDA outperforms other state-of-the-art approaches, and validate the promising power of DUCDA towards large scale real world application.", "Domain adaptation is transfer learning which aims to generalize a learning model across training and testing data with different distributions. 
Most previous research tackle this problem in seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domain, measured in terms of both marginal and conditional probability distribution via Maximum Mean Discrepancy is minimized so as to attract two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distances between each label dependent sub-domain to all others so as to drag different class dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given the fact that the underlying data manifold could have complex geometric structure, we further propose the constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods with significant margins.", "In this paper, we propose an approach to the domain adaptation, dubbed Second-or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second-or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second-or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and RGB-D combined with Caltech256 (depth-to-rgb transfer). We attain state-of-the-art results." ] }
1907.10628
2963606129
Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution-based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or using traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator could suffice to obtain the distribution-based discriminator. Specifically, we propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution, and the corresponding reverse gradients are used to align the source and target feature representations. Detailed results and a thorough ablation analysis show that our model outperforms state-of-the-art methods.
In the domain adaptation setting, an adversarial network provides domain-invariant representations by making the source and target domains indistinguishable to the discriminator. Adversarial Discriminative Domain Adaptation @cite_26 uses an inverted-label GAN loss to split the optimization into two independent objectives. Another such method is the model proposed in @cite_25 , which optimizes a domain confusion objective. Domain-Adversarial Neural Networks (DANN) @cite_2 integrate a gradient reversal layer into the standard architecture to promote the emergence of learned representations that are discriminative for the main learning task on the source domain and non-discriminative with respect to the shift between the domains. Recently, several works have used an adversarial discriminative approach to solve the domain adaptation problem @cite_15 @cite_12 @cite_0 @cite_46 @cite_3 @cite_13 @cite_51 @cite_18 . Similarly, the models proposed in @cite_44 @cite_49 exploit GANs with the aim of generating source-domain images such that they appear as if drawn from the target domain distribution. The work most closely related to our approach is @cite_10 , which extends the gradient reversal method with a class-specific discriminator.
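The gradient reversal layer at the heart of DANN is simple enough to sketch: it is the identity on the forward pass and multiplies gradients by -lambda on the backward pass, so the feature extractor learns to confuse the domain discriminator while the discriminator learns to separate the domains. This is a generic PyTorch-style sketch, not the reference implementation of any cited method.

import torch

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; gradient scaled by -lam on the way back.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# domain_logits = discriminator(grad_reverse(features, lam))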
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_46", "@cite_10", "@cite_3", "@cite_0", "@cite_44", "@cite_49", "@cite_2", "@cite_51", "@cite_15", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2767382337", "2964139811", "2593768305", "2949987290" ], "abstract": [ "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. 
In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit an articulated template mesh to them in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
Deep learning has significantly advanced the state of the art for hand pose estimation. The general trend has been the development of ever deeper and more sophisticated neural network architectures @cite_13 @cite_37 @cite_54 @cite_62 @cite_49 @cite_5 @cite_55 . However, such progress has also hinged on the availability of large amounts of annotated data @cite_51 @cite_57 @cite_50 . Obtaining accurate annotations, even for simple 3D joint coordinates, is extremely difficult and time-consuming. Annotations generated by manually initializing trackers @cite_51 @cite_52 require carefully designed interfaces for 3D annotation on a 2D screen, and there is often little consensus between human annotators @cite_58 . Motion-capture rigs @cite_50 and auxiliary sensors @cite_57 are fully automatic but are limited in the scenes in which they can be deployed. To mitigate these annotation limitations, semi-supervised approaches @cite_7 @cite_56 @cite_10 and approaches coupling synthesized with real data @cite_0 @cite_59 @cite_31 have also been proposed.
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_31", "@cite_7", "@cite_10", "@cite_54", "@cite_55", "@cite_52", "@cite_56", "@cite_57", "@cite_0", "@cite_49", "@cite_50", "@cite_51", "@cite_5", "@cite_59", "@cite_58", "@cite_13" ], "mid": [ "2963377353", "2606965392", "2892644985", "2963508807" ], "abstract": [ "State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose modelling the statistical relationship of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose or into a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth map. To prevent over-fitting and to better exploit unlabeled depth maps, the generator and discriminator are trained jointly. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized samples and unlabeled depth maps. The proposed discriminator network architecture is highly efficient and runs at 90fps on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks.", "In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty to annotate them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joints hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses compared to existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance. We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.", "Compared with depth-based 3D hand pose estimation, it is more challenging to infer 3D hand pose from monocular RGB images, due to substantial depth ambiguity and the difficulty of obtaining fully-annotated training data. Different from existing learning-based monocular RGB-input approaches that require accurate 3D annotations for training, we propose to leverage the depth images that can be easily obtained from commodity RGB-D cameras during training, while during testing we take only RGB inputs for 3D joint predictions. In this way, we alleviate the burden of the costly 3D annotations in real-world dataset. Particularly, we propose a weakly-supervised method, adaptating from fully-annotated synthetic dataset to weakly-labeled real-world dataset with the aid of a depth regularizer, which generates depth maps from predicted 3D pose and serves as weak supervision for 3D pose regression. 
Extensive experiments on benchmark datasets validate the effectiveness of the proposed depth regularizer in both weakly-supervised and fully-supervised settings.", "Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. While the depth map is intrinsically 3D data, many previous methods treat depth maps as 2D images that can distort the shape of the actual object through projection from 3D to 2D space. This compels the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we firstly cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real-time. Our system outperforms previous methods in almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available in1." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit an articulated template mesh to them in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
An alternative line of work @cite_29 @cite_53 @cite_60 @cite_63 @cite_9 @cite_64 @cite_45 @cite_14 tackles hand pose estimation by minimizing a model-fitting error. Model fitting needs little to no human labeling, but its accuracy depends heavily on the careful design of the energy function. A recent trend tries to bridge the gap between data-driven and model-fitting approaches @cite_26 @cite_15 @cite_41 by using a differentiable renderer and incorporating the model-fitting error as part of the training loss. Our work resembles these methods, though with two key differences. First, we re-parameterize the mesh with a 2D embedding, which allows us to use a 2D fully convolutional network architecture. Second, we can apply self-supervision on both the image grid and the mesh grid, leading to efficient gradient flow during back-propagation.
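As an illustration of the data-fitting terms such self-supervised objectives rely on, the sketch below computes a symmetric Chamfer distance between predicted mesh vertices and points back-projected from the depth map. It is a generic example of a model-fitting energy, with the kinematic prior terms omitted; it is not the exact loss of any cited method.

import torch

def chamfer_data_term(pred_vertices, depth_points):
    # pred_vertices: (V, 3) mesh vertices; depth_points: (P, 3) back-projected
    # depth observations. Penalizes both surface-to-data and data-to-surface gaps.
    d = torch.cdist(pred_vertices, depth_points)  # (V, P) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()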
{ "cite_N": [ "@cite_64", "@cite_14", "@cite_26", "@cite_60", "@cite_41", "@cite_29", "@cite_9", "@cite_53", "@cite_45", "@cite_63", "@cite_15" ], "mid": [ "2963950354", "2964055354", "2963508807", "2896229066" ], "abstract": [ "We present a simple and effective method for 3D hand pose estimation from a single depth frame. As opposed to previous state-of-the-art methods based on holistic 3D regression, our method works on dense pixel-wise estimation. This is achieved by careful design choices in pose parameterization, which leverages both 2D and 3D properties of depth map. Specifically, we decompose the pose parameters into a set of per-pixel estimations, i.e., 2D heat maps, 3D heat maps and unit 3D directional vector fields. The 2D 3D joint heat maps and 3D joint offsets are estimated via multitask network cascades, which is trained end-to-end. The pixel-wise estimations can be directly translated into a vote casting scheme. A variant of mean shift is then used to aggregate local votes while enforcing consensus between the the estimated 3D pose and the pixel-wise 2D and 3D estimations by design. Our method is efficient and highly accurate. On MSRA and NYU hand dataset, our method outperforms all previous state-of-the-art approaches by a large margin. On the ICVL hand dataset, our method achieves similar accuracy compared to the nearly saturated result obtained by [5] and outperforms various other proposed methods. Code is available online1.", "In this paper, we make two contributions to unsupervised domain adaptation (UDA) using the convolutional neural network (CNN). First, our approach transfers knowledge in all the convolutional layers through attention alignment. Most previous methods align high-level representations, e.g., activations of the fully connected (FC) layers. In these methods, however, the convolutional layers which underpin critical low-level domain knowledge cannot be updated directly towards reducing domain discrepancy. Specifically, we assume that the discriminative regions in an image are relatively invariant to image style changes. Based on this assumption, we propose an attention alignment scheme on all the target convolutional layers to uncover the knowledge shared by the source domain. Second, we estimate the posterior label distribution of the unlabeled data for target network training. Previous methods, which iteratively update the pseudo labels by the target network and refine the target network by the updated pseudo labels, are vulnerable to label estimation errors. Instead, our approach uses category distribution to calculate the cross-entropy loss for training, thereby ameliorating the error accumulation of the estimated labels. The two contributions allow our approach to outperform the state-of-the-art methods by +2.6 on the Office-31 dataset.", "Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. While the depth map is intrinsically 3D data, many previous methods treat depth maps as 2D images that can distort the shape of the actual object through projection from 3D to 2D space. This compels the network to perform perspective distortion-invariant estimation. 
The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we firstly cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real-time. Our system outperforms previous methods in almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available in1.", "Convolutional Neural Networks (CNNs)-based methods for 3D hand pose estimation with depth cameras usually take 2D depth images as input and directly regress holistic 3D hand pose. Different from these methods, our proposed Point-to-Point Regression PointNet directly takes the 3D point cloud as input and outputs point-wise estimations, i.e., heat-maps and unit vector fields on the point cloud, representing the closeness and direction from every point in the point cloud to the hand joint. The point-wise estimations are used to infer 3D joint locations with weighted fusion. To better capture 3D spatial information in the point cloud, we apply a stacked network architecture for PointNet with intermediate supervision, which is trained end-to-end. Experiments show that our method can achieve outstanding results when compared with state-of-the-art methods on three challenging hand pose datasets." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit an articulated template mesh to them in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
It is highly intuitive to parameterize 3D inputs and/or outputs as an occupancy grid or distance field and to use, for example, a 3D voxel network @cite_46 @cite_42 @cite_49 . However, such an architecture is parameter-heavy and severely limited in spatial resolution. PointNet @cite_18 is a lightweight alternative; while it can interpret 3D inputs as a set of unordered points, it also largely ignores spatial context that may be important downstream.
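A short sketch makes the resolution trade-off tangible: converting a point cloud into a binary occupancy grid costs memory cubic in the grid size, which is why voxel architectures are severely limited in spatial resolution. The normalization details here are illustrative assumptions.

import numpy as np

def voxelize(points, grid_size=32):
    # points: (N, 3) array; returns a binary occupancy grid of shape
    # (grid_size, grid_size, grid_size), so memory grows as grid_size**3.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = (grid_size - 1) / np.maximum(maxs - mins, 1e-8)
    idx = ((points - mins) * scale).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid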
{ "cite_N": [ "@cite_46", "@cite_42", "@cite_18", "@cite_49" ], "mid": [ "2603429625", "2174133987", "1986503980", "2091297047" ], "abstract": [ "We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.", "This paper presents a novel probabilistic foundation for volumetric 3D reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.", "Occupancy grids have been a popular mapping technique in mobile robotics for nearly 30 years. Occupancy grids offer a discrete representation of the world and seek to determine the occupancy probability of each cell. Traditional occupancy grid mapping methods make two assumptions for computational efficiency and it has been shown that the full posterior is computationally intractable for real-world mapping applications without these assumptions. The two assumptions result in tuning parameters that control the information gained from each distance measurement. In this paper, several tuning parameters found in the literature are optimized against the full posterior in 1D. In addition, this paper presents a new parameterization of the update function that outperforms existing methods in terms of capturing residual uncertainty. Capturing the residual uncertainty better estimates the position of obstacles and prevents under- and over-confidence in both the occupied and unoccupied cells. The paper concludes by showing that the new update function better captures the residual uncertainty in each cell when compared to an offline mapping method for realistic 2D simulations.", "In this paper, we present a novel method, the first to date to our knowledge, which is capable of directly and automatically producing a concise and idealized 3D representation from unstructured point data of complex cluttered real-world scenes, with a high level of noise and a significant proportion of outliers, such as those obtained from passive stereo. 
Our algorithm can digest millions of input points into an optimized lightweight watertight polygonal mesh free of self-intersection, that preserves the structural components of the scene at a user-defined scale, and completes missing scene parts in a plausible manner. To achieve this, our algorithm incorporates priors on urban and architectural scenes, notably the prevalence of vertical structures and orthogonal intersections. A major contribution of our work is an adaptive decomposition of 3D space induced by planar primitives, namely a polyhedral cell complex. We experimentally validate our approach on several challenging noisy point clouds of urban and architectural scenes." ] }
1907.10695
2963916533
We present a method for recovering the dense 3D surface of the hand by regressing the vertex coordinates of a mesh model from a single depth map. To this end, we use a two-stage 2D fully convolutional network architecture. In the first stage, the network estimates a dense correspondence field for every pixel on the depth map or image grid to the mesh grid. In the second stage, we design a differentiable operator to map features learned from the previous stage and regress a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices, and fit an articulated template mesh to them in closed form. During inference, the network can predict all the mesh vertices, transformation matrices for every joint and the joint coordinates in a single forward pass. When given supervision on the sparse key-point coordinates, our method achieves state-of-the-art accuracy on the NYU dataset for key-point localization while recovering mesh vertices and a dense correspondence map. Our framework can also be learned through self-supervision by minimizing a set of data-fitting and kinematic prior terms. With a multi-camera rig during training to resolve self-occlusion, it can perform competitively with strongly supervised methods without any human annotation.
Since captured 3D inputs are inherently object surfaces, it is natural to consider them as a 2D embedding in 3D Euclidean space. As such, several works @cite_12 @cite_19 @cite_24 have modeled mesh surfaces as graphs and applied graph network architectures to capture intrinsic and extrinsic geometric properties of the mesh. Our method also works on the hand surface, but with a much simpler and more flexible network architecture that is easier to train and can handle different mesh topologies. Our method most resembles @cite_3 @cite_17 in mapping high-dimensional data to a 2D grid. However, instead of working only on points from the depth map, we propose a dual-grid network architecture, enabling the mapping of heterogeneous data from Euclidean space to mesh surfaces and vice versa.
{ "cite_N": [ "@cite_3", "@cite_24", "@cite_19", "@cite_12", "@cite_17" ], "mid": [ "832925222", "1993167366", "2108417695", "1964475161" ], "abstract": [ "Accurate recovery of 3D geometrical surfaces from calibrated 2D multi-view images is a fundamental yet active research area in computer vision. Despite the steady progress in multi-view stereo (MVS) reconstruction, many existing methods are still limited in recovering fine-scale details and sharp features while suppressing noises, and may fail in reconstructing regions with less textures. To address these limitations, this paper presents a detail-preserving and content-aware variational (DCV) MVS method, which reconstructs the 3D surface by alternating between reprojection error minimization and mesh denoising. In reprojection error minimization, we propose a novel inter-image similarity measure, which is effective to preserve fine-scale details of the reconstructed surface and builds a connection between guided image filtering and image registration. In mesh denoising, we propose a content-aware @math -minimization algorithm by adaptively estimating the @math value and regularization parameters. Compared with conventional isotropic mesh smoothing approaches, the proposed method is much more promising in suppressing noise while preserving sharp features. Experimental results on benchmark data sets demonstrate that our DCV method is capable of recovering more surface details, and obtains cleaner and more accurate reconstructions than the state-of-the-art methods. In particular, our method achieves the best results among all published methods on the Middlebury dino ring and dino sparse data sets in terms of both completeness and accuracy.", "This paper presents a novel approach that achieves complete matching of 3D dynamic surfaces. Surfaces are captured from multi-view video data and represented by sequences of 3D manifold meshes in motion (3D videos). We propose to perform dense surface matching between 3D video frames using geodesic diffeomorphisms. Our algorithm uses a coarse-to-fine strategy to derive a robust correspondence map, then a probabilistic formulation is coupled with a voting scheme in order to obtain local unicity of matching candidates and a smooth mapping. The significant advantage of the proposed technique compared to existing approaches is that it does not rely on a color-based feature extraction process. Hence, our method does not lose accuracy in poorly textured regions and is not bounded to be used on video sequences of a unique subject. Therefore our complete surface mapping can be applied to: (1) texture transfer between surface models extracted from different sequences, (2) dense motion flow estimation in 3D video, and (3) motion transfer from a 3D video to an unanimated 3D model. Experiments are performed on challenging publicly available real-world datasets and show compelling results.", "This paper proposes a quasi-dense approach to 3D surface model acquisition from uncalibrated images. First, correspondence information and geometry are computed based on new quasi-dense point features that are resampled subpixel points from a disparity map. The quasi-dense approach gives more robust and accurate geometry estimations than the standard sparse approach. The robustness is measured as the success rate of full automatic geometry estimation with all involved parameters fixed. The accuracy is measured by a fast gauge-free uncertainty estimation algorithm. 
The quasi-dense approach also works for more largely separated images than the sparse approach, therefore, it requires fewer images for modeling. More importantly, the quasi-dense approach delivers a high density of reconstructed 3D points on which a surface representation can be reconstructed. This fills the gap of insufficiency of the sparse approach for surface reconstruction, essential for modeling and visualization applications. Second, surface reconstruction methods from the given quasi-dense geometry are also developed. The algorithm optimizes new unified functionals integrating both 3D quasi-dense points and 2D image information, including silhouettes. Combining both 3D data and 2D images is more robust than the existing methods using only 2D information or only 3D data. An efficient bounded regularization method is proposed to implement the surface evolution by level-set methods. Its properties are discussed and proven for some cases. As a whole, a complete automatic and practical system of 3D modeling from raw images captured by hand-held cameras to surface representation is proposed. Extensive experiments demonstrate the superior performance of the quasi-dense approach with respect to the standard sparse approach in robustness, accuracy, and applicability.", "Introduces a new surface representation for recognizing curved objects. The authors approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example, an ellipsoid, until it fits the surface of the object. The authors define local regularity constraints that the mesh must satisfy. The authors then define a canonical mapping between the mesh describing the object and a standard spherical mesh. A surface curvature index that is pose-invariant is stored at every node of the mesh. The authors use this object representation for recognition by comparing the spherical model of a reference object with the model extracted from a new observed scene. The authors show how the similarity between reference model and observed data can be evaluated and they show how the pose of the reference object in the observed scene can be easily computed using this representation. The authors present results on real range images which show that this approach to modelling and recognizing 3D objects has three main advantages: (1) it is applicable to complex curved surfaces that cannot be handled by conventional techniques; (2) it reduces the recognition problem to the computation of similarity between spherical distributions; in particular, the recognition algorithm does not require any combinatorial search; and (3) even though it is based on a spherical mapping, the approach can handle occlusions and partial views. >" ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. So it is of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
One of the related fields is document summarization, whose methods can be divided into extractive methods @cite_0 @cite_6 @cite_7 @cite_2 @cite_5 @cite_17 @cite_19 @cite_16 and abstractive methods @cite_13 @cite_14 @cite_10 .
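For orientation, the toy example below shows the extractive family at its simplest: score each sentence by the corpus frequency of its words and keep the top-ranked sentences in document order. Real extractive systems add coherence, redundancy, and length constraints; this sketch is purely illustrative.

import re
from collections import Counter

def extractive_summary(text, n_sentences=3):
    # Split into sentences and build word-frequency scores over the document.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    def score(s):
        toks = re.findall(r'\w+', s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:n_sentences])  # restore original document order
    return ' '.join(sentences[i] for i in keep)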
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_10", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2053818817", "2120543014", "2065354331", "2250968833" ], "abstract": [ "One of the important Natural Language Processing applications is Text Summarization, which helps users to manage the vast amount of information available, by condensing documents' content and extracting the most relevant facts or topics included. Text Summarization can be classified according to the type of summary: extractive, and abstractive. Extractive summary is the procedure of identifying important sections of the text and producing them verbatim while abstractive summary aims to produce important material in a new generalized form. In this paper, a novel approach is presented to create an abstractive summary for a single document using a rich semantic graph reducing technique. The approach summaries the input document by creating a rich semantic graph for the original document, reducing the generated graph, and then generating the abstractive summary from the reduced graph. Besides, a simulated case study is presented to show how the original text was minimized to fifty percent.", "Extractive methods for multi-document summarization are mainly governed by information overlap, coherence, and content constraints. We present an unsupervised probabilistic approach to model the hidden abstract concepts across documents as well as the correlation between these concepts, to generate topically coherent and non-redundant summaries. Based on human evaluations our models generate summaries with higher linguistic quality in terms of coherence, readability, and redundancy compared to benchmark systems. Although our system is unsupervised and optimized for topical coherence, we achieve a 44.1 ROUGE on the DUC-07 test set, roughly in the range of state-of-the-art supervised models.", "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing a context in which the need for generation-based methods is especially great.", "We present an approach for extractive single-document summarization. Our approach is based on a weighted graphical representation of documents obtained by topic modeling. We optimize importance, coherence and non-redundancy simultaneously using ILP. We compare ROUGE scores of our system with state-of-the-art results on scientific articles from PLOS Medicine and on DUC 2002 data. Human judges evaluate the coherence of summaries generated by our system in comparision to two baselines. Our approach obtains competitive performance." ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. It is therefore of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
Several pilot studies have addressed producing long articles from a batch of news articles or web pages @cite_4 @cite_12 @cite_15 . However, the generated overview articles lack good structure, and no interaction functions are provided.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_12" ], "mid": [ "2787214294", "2963045354", "2743904806", "2051141368" ], "abstract": [ "We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.", "We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.", "As aggregators, online news portals face great challenges in continuously selecting a pool of candidate articles to be shown to their users. Typically, those candidate articles are recommended manually by platform editors from a much larger pool of articles aggregated from multiple sources. Such a hand-pick process is labor intensive and time-consuming. In this paper, we study the editor article selection behavior and propose a learning by demonstration system to automatically select a subset of articles from the large pool. Our data analysis shows that (i) editors' selection criteria are non-explicit, which are less based only on the keywords or topics, but more depend on the quality and attractiveness of the writing from the candidate article, which is hard to capture based on traditional bag-of-words article representation. And (ii) editors' article selection behaviors are dynamic: articles with different data distribution come into the pool everyday and the editors' preference varies, which are driven by some underlying periodic or occasional patterns. To address such problems, we propose a meta-attention model across multiple deep neural nets to (i) automatically catch the editors' underlying selection criteria via the automatic representation learning of each article and its interaction with the meta data and (ii) adaptively capture the change of such criteria via a hybrid attention model. The attention model strategically incorporates multiple prediction models, which are trained in previous days. The system has been deployed in a commercial article feed platform. A 9-day A B testing has demonstrated the consistent superiority of our proposed model over several strong baselines.", "Automatic news extraction from news pages is important in many Web applications such as news aggregation. However, the existing news extraction methods based on template-level wrapper induction have three serious limitations. 
First, the existing methods cannot correctly extract pages belonging to an unseen template. Second, it is costly to maintain up-to-date wrappers for a large number of news websites, because any change of a template may invalidate the corresponding wrapper. Last, the existing methods can merely extract unformatted plain texts, and thus are not user friendly. In this paper, we tackle the problem of template-independent Web news extraction in a user-friendly way. We formalize Web news extraction as a machine learning problem and learn a template-independent wrapper using a very small number of labeled news pages from a single site. Novel features dedicated to news titles and bodies are developed. Correlations between news titles and news bodies are exploited. Our template-independent wrapper can extract news pages from different sites regardless of templates. Moreover, our approach can extract not only texts, but also images and animations within the news bodies, and the extracted news articles are in the same visual style as in the original pages. In our experiments, a wrapper learned from 40 pages from a single news site achieved an accuracy of 98.1% on 3,973 news pages from 12 news sites." ] }
1907.10781
2963008717
Nowadays, we are surrounded by more and more online news articles. Tens or hundreds of news articles need to be read if we wish to explore a hot news event or topic. It is therefore of vital importance to automatically synthesize a batch of news articles related to the event or topic into a new synthesis article (or overview article) for readers' convenience. Making news synthesis fully automatic is so challenging that no successful solution exists to date. In this paper, we put forward a novel Interactive News Synthesis system (i.e., INS), which can help generate news overview articles automatically or by interacting with users. More importantly, INS can serve as a tool for editors to help them finish their jobs. In our experiments, INS performs well on both topic representation and synthesis article generation. A user study also demonstrates the usefulness of and users' satisfaction with the INS tool. A demo video is available at this https URL .
There have been some attempts to add interaction functions to traditional document summarization tasks @cite_3 @cite_11 . However, this prior work focuses on producing short summaries, whereas generating long news overview articles is more challenging. Moreover, in this prior work the keyphrases that represent salient information are extracted with heuristic rules or simple clues, and they are usually not good subtopic representations.
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2611254175", "2149795409", "2053818817", "1985710361" ], "abstract": [ "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28 (absolute) in ROUGE-L scores.", "Document summarization and keyphrase extraction are two related tasks in the IR and NLP fields, and both of them aim at extracting condensed representations from a single text document. Existing methods for single document summarization and keyphrase extraction usually make use of only the information contained in the specified document. This article proposes using a small number of nearest neighbor documents to improve document summarization and keyphrase extraction for the specified document, under the assumption that the neighbor documents could provide additional knowledge and more clues. The specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results on the Document Understanding Conference (DUC) benchmark datasets demonstrate the effectiveness and robustness of our proposed approaches. The cross-document sentence relationships in the expanded document set are validated to be beneficial to single document summarization, and the word cooccurrence relationships in the neighbor documents are validated to be very helpful to single document keyphrase extraction.", "One of the important Natural Language Processing applications is Text Summarization, which helps users to manage the vast amount of information available, by condensing documents' content and extracting the most relevant facts or topics included. Text Summarization can be classified according to the type of summary: extractive, and abstractive. Extractive summary is the procedure of identifying important sections of the text and producing them verbatim while abstractive summary aims to produce important material in a new generalized form. In this paper, a novel approach is presented to create an abstractive summary for a single document using a rich semantic graph reducing technique. 
The approach summarizes the input document by creating a rich semantic graph for the original document, reducing the generated graph, and then generating the abstractive summary from the reduced graph. Besides, a simulated case study is presented to show how the original text was minimized to fifty percent.", "Comments left by readers on Web documents contain valuable information that can be utilized in different information retrieval tasks including document search, visualization, and summarization. In this paper, we study the problem of comments-oriented document summarization and aim to summarize a Web document (e.g., a blog post) by considering not only its content, but also the comments left by its readers. We identify three relations (namely, topic, quotation, and mention) by which comments can be linked to one another, and model the relations in three graphs. The importance of each comment is then scored by: (i) graph-based method, where the three graphs are merged into a multi-relation graph; (ii) tensor-based method, where the three graphs are used to construct a 3rd-order tensor. To generate a comments-oriented summary, we extract sentences from the given Web document using either feature-biased approach or uniform-document approach. The former scores sentences to bias keywords derived from comments; while the latter scores sentences uniformly with comments. In our experiments using a set of blog posts with manually labeled sentences, our proposed summarization methods utilizing comments showed significant improvement over those not using comments. The methods using feature-biased sentence extraction approach were observed to outperform that using uniform-document approach." ] }
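Since the related-work paragraph above notes that interactive summarizers often extract keyphrases with heuristic rules or simple clues, here is a minimal sketch of such a heuristic: candidate phrases are formed by splitting on stopwords and scored by summed word frequency, in the spirit of RAKE. The tiny stopword list is an illustrative assumption, not the one used in any cited work.

```python
# Heuristic keyphrase extraction: split text on stopwords, then score each
# candidate phrase by the summed corpus frequency of its words.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "are",
             "for", "on", "with", "that", "this", "by", "it", "as", "be",
             "after"}

def candidate_phrases(text):
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(tuple(current))
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    return phrases

def top_keyphrases(text, k=5):
    phrases = candidate_phrases(text)
    freq = Counter(w for p in phrases for w in p)
    scored = {p: sum(freq[w] for w in p) for p in set(phrases)}
    return [" ".join(p) for p, _ in
            sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:k]]

print(top_keyphrases("The city council approved the new transit budget "
                     "after transit advocates rallied for the budget."))
```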
1907.10700
2963909142
We introduce a system and methods for the three-dimensional measurement of extended specular surfaces with high surface normal variations. Our system consists only of a mobile handheld device and exploits the screen and front camera for Deflectometry-based surface measurements. We demonstrate high-quality measurements without the need for an offline calibration procedure. In addition, we develop a multi-view technique to compensate for the small screen of a mobile device so that large surfaces can be densely reconstructed in their entirety. This work is a first step towards developing a self-calibrating Deflectometry procedure capable of taking 3D surface measurements of specular objects in the wild and accessible to users with little to no technical imaging experience.
The authors of @cite_14 used the reflection of color-coded circles observed by multiple cameras (which also resolves the bas-relief ambiguity). In other works, the authors utilized self-illuminated screens with patterns such as stripes @cite_2 , multiple lines @cite_0 , or even a light field created from two stacked LED screens @cite_11 . 'Screenless' methods, such as @cite_7 @cite_24 , analyze environment illumination or track prominent features (e.g., straight lines) in the environment to obtain information about the slope of specular surfaces.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_0", "@cite_24", "@cite_2", "@cite_11" ], "mid": [ "2403721818", "1997831592", "2444242407", "2346725554" ], "abstract": [ "This paper presents a novel approach for recovering the shape of non-Lambertian, multicolored objects using two input images. We show that a ring light source with complementary colored lights has the potential to be effectively utilized for this purpose. Under this lighting, the brightness of an object surface varies with respect to different reflections. Therefore, analyzing how brightness is modulated by illumination color gives us distinct cues to recover shape. Moreover, the use of complementary colored illumination enables the color photometric stereo to be applicable to multicolored surfaces. Here, we propose a color correction method based on the addition principle of complementary colors to remove the effect of illumination from the observed color. This allows the inclusion of surfaces with any number of chromaticities. Therefore, our method offers significant advantages over previous methods, which often assume constant object albedo and Lambertian reflectance. To the best of our knowledge, this is the first attempt to employ complementary colors on a ring light source to compute shape while considering both non-Lambertian reflection and spatially varying albedo. To show the efficacy of our method, we present results on synthetic and real world images and compare against photometric stereo methods elsewhere in the literature.", "In this work, we recover the 3D shape of mirrors, sunglasses, and stainless steel implements. A computer monitor displays several images of parallel stripes, each image at a different angle. Reflections of these stripes in a mirroring surface are captured by the camera. For every image point, the direction of the displayed stripes and their reflections in the image are related by a 1D homography matrix, computed with a robust version of the statistically accurate heteroscedastic approach. By focusing on a sparse set of image points for which monitor-image correspondence is computed, the depth and the local shape may be estimated from these homographies. The depth estimation relies on statistically correct minimization and provides accurate, reliable results. Even for the image points where the depth estimation process is inherently unstable, we are able to characterize this instability and develop an algorithm to detect and correct it. After correcting the instability, dense surface recovery of mirroring objects is performed using constrained interpolation, which does not simply interpolate the surface depth values but also uses the locally computed 1D homographies to solve for the depth, the correspondence, and the local surface shape. The method was implemented and the shape of several objects was densely recovered at submillimeter accuracy.", "We generalize Richardson-Lucy deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation. We include a novel regularization term that maintains parallax information in the light field, and employ 4-D anisotropic total variation to reduce noise and ringing. We demonstrate the method operating effectively on rendered scenes and scenes captured using an off-the-shelf light field camera mounted on an industrial robot arm. 
Examples include complex 3-D geometry and cover all major classes of camera motion. Both qualitative and quantitative results confirm the effectiveness of the method over a range of conditions, including commonly occurring cases for which previously published methods fail. We include mathematical proof that the algorithm converges to the maximum-likelihood estimate of the unblurred scene under Poisson noise.", "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photoconsistency measure considers the variance of different views, effectively enforcing point-consistency , i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency , which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras." ] }
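A worked sketch of the screen-based Deflectometry principle referenced above: by the law of reflection, if the surface point, the camera center, and the decoded screen point are known, the surface normal is the normalized halfway vector between the directions from the surface point to the camera and to the screen. The coordinates below are made-up example values, not a calibrated setup from any cited system.

```python
# Surface normal from one Deflectometry correspondence: the normal bisects
# the reflected ray (toward the camera) and the incident ray (toward the
# decoded screen pixel).
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def estimate_normal(surface_pt, camera_pt, screen_pt):
    to_camera = unit(camera_pt - surface_pt)   # reflected ray direction
    to_screen = unit(screen_pt - surface_pt)   # incident ray direction
    return unit(to_camera + to_screen)         # halfway vector = normal

n = estimate_normal(surface_pt=np.array([0.0, 0.0, 0.0]),
                    camera_pt=np.array([0.0, 0.3, 1.0]),
                    screen_pt=np.array([0.0, -0.3, 1.0]))
print(n)  # ~ [0, 0, 1] for this symmetric configuration
```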
1907.10843
2963402660
Person re-identification (re-ID) solves the task of matching images across cameras and is among the active research topics in the vision community. Since query images in real-world scenarios might suffer from resolution loss, handling the resolution mismatch during person re-ID becomes a practical problem. Instead of applying separate image super-resolution models, we propose a novel network architecture, the Resolution Adaptation and re-Identification Network (RAIN), to solve cross-resolution person re-ID. Advancing the strategy of adversarial learning, we aim at extracting resolution-invariant representations for re-ID, while the proposed model is learned in an end-to-end training fashion. Our experiments confirm that our model can recognize low-resolution query images, even if the resolution is not seen during training. Moreover, the extension of our model to semi-supervised re-ID further confirms the scalability of our proposed method for real-world scenarios and applications.
Person re-ID has been widely studied in the literature. Most of the existing methods @cite_17 @cite_14 @cite_21 @cite_10 @cite_12 @cite_4 @cite_16 @cite_15 @cite_8 @cite_9 @cite_5 focus on tackling the challenges of matching images with viewpoint and pose variations, or those with background clutter or occlusion present. For example, Liu et al. @cite_16 develop a pose-transferable GAN-based @cite_13 framework to address image pose variations. Chen et al. @cite_9 integrate the conditional random field (CRF) with deep neural networks to learn more consistent multi-scale similarity metrics. The DaRe @cite_19 combines the feature embeddings extracted from different convolutional layers into a single embedding to train the model in a supervised fashion. Several attention-based methods @cite_10 @cite_4 @cite_8 are further proposed to focus on learning the discriminative parts to mitigate the effect of background clutter. While promising results have been presented, the above approaches typically assume that all images (both query and gallery) are of the same (or similar) resolution, which might not be practical in real-world re-ID applications.
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2009907187", "2774879330", "2788012242", "2962926870" ], "abstract": [ "In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft- and hard- re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves by 17 percentage points the state-of-the-art on i-LIDS and by 72 on CAVIAR4REID at rank-1. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.", "Person Re-identification (re-id) faces two major challenges: the lack of cross-view paired training data and learning discriminative identity-sensitive and view-invariant features in the presence of large pose variations. In this work, we address both problems by proposing a novel deep person image generation model for synthesizing realistic person images conditional on pose. The model is based on a generative adversarial network (GAN) and used specifically for pose normalization in re-id, thus termed pose-normalization GAN (PN-GAN). With the synthesized images, we can learn a new type of deep re-id feature free of the influence of pose variations. We show that this feature is strong on its own and highly complementary to features learned with the original images. Importantly, we now have a model that generalizes to any new re-id dataset without the need for collecting any training data for model fine-tuning, thus making a deep re-id model truly scalable. Extensive experiments on five benchmarks show that our model outperforms the state-of-the-art models, often significantly. In particular, the features learned on Market-1501 can achieve a Rank-1 accuracy of 68.67 on VIPeR without any model fine-tuning, beating almost all existing models fine-tuned on the dataset.", "Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors. 
In this work, we show the advantages of jointly learning attention selection and feature representation in a Convolutional Neural Network (CNN) by maximising the complementary information of different levels of visual attention subject to re-id discriminative learning constraints. Specifically, we formulate a novel Harmonious Attention CNN (HA-CNN) model for joint learning of soft pixel attention and hard regional attention along with simultaneous optimisation of feature representations, dedicated to optimise person re-id in uncontrolled (misaligned) images. Extensive comparative evaluations validate the superiority of this new HA-CNN model for person re-id over a wide variety of state-of-the-art methods on three large-scale benchmarks including CUHK03, Market-1501, and DukeMTMC-ReID.", "Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors. In this work, we show the advantages of jointly learning attention selection and feature representation in a Convolutional Neural Network (CNN) by maximising the complementary information of different levels of visual attention subject to re-id discriminative learning constraints. Specifically, we formulate a novel Harmonious Attention CNN (HA-CNN) model for joint learning of soft pixel attention and hard regional attention along with simultaneous optimisation of feature representations, dedicated to optimise person re-id in uncontrolled (misaligned) images. Extensive comparative evaluations validate the superiority of this new HA-CNN model for person re-id over a wide variety of state-of-the-art methods on three large-scale benchmarks including CUHK03, Market-1501, and DukeMTMC-ReID." ] }
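The RAIN abstract above mentions adversarial learning of resolution-invariant representations. Below is a minimal PyTorch sketch of one standard way to realize that idea: a resolution discriminator trained through a gradient-reversal layer so the encoder learns to erase the HR/LR cue. The layer sizes, batch, and labels are illustrative assumptions, not the RAIN architecture itself.

```python
# Adversarial resolution-invariance via gradient reversal: the discriminator
# learns to tell HR from LR embeddings; reversed gradients push the encoder
# to remove that information.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None  # flip gradient sign

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 128), nn.ReLU())
res_discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, 2))  # HR vs. LR

images = torch.randn(8, 3, 64, 32)       # mixed HR/LR batch (toy data)
res_labels = torch.randint(0, 2, (8,))   # 0 = HR, 1 = LR

features = encoder(images)
logits = res_discriminator(GradReverse.apply(features, 1.0))
adv_loss = nn.functional.cross_entropy(logits, res_labels)
adv_loss.backward()  # encoder receives reversed gradients
```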
1907.10843
2963402660
Person re-identification (re-ID) solves the task of matching images across cameras and is among the active research topics in the vision community. Since query images in real-world scenarios might suffer from resolution loss, handling the resolution mismatch during person re-ID becomes a practical problem. Instead of applying separate image super-resolution models, we propose a novel network architecture, the Resolution Adaptation and re-Identification Network (RAIN), to solve cross-resolution person re-ID. Advancing the strategy of adversarial learning, we aim at extracting resolution-invariant representations for re-ID, while the proposed model is learned in an end-to-end training fashion. Our experiments confirm that our model can recognize low-resolution query images, even if the resolution is not seen during training. Moreover, the extension of our model to semi-supervised re-ID further confirms the scalability of our proposed method for real-world scenarios and applications.
To address the challenging resolution mismatch problem, a couple of methods @cite_22 @cite_20 @cite_18 @cite_34 @cite_11 @cite_28 have been recently proposed. Li et al. @cite_22 present a joint learning framework that simultaneously optimizes cross-scale image domain alignment and discriminant distance metric modeling. The SLD @math L @cite_20 learns a pair of HR and LR dictionaries and the mapping between the feature representations of HR and LR images. Wang et al. @cite_18 explore the scale-distance function space by varying the image scale of LR images when matching against HR ones. Nevertheless, the above methods employ hand-crafted descriptors, which might limit the generalization of their re-ID capability.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_28", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2963102887", "2270409809", "2962887033", "1971955426" ], "abstract": [ "Visual recognition research often assumes a sufficient resolution of the region of interest (ROI). That is usually violated in practice, inspiring us to explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI in a VLRR problem can be smaller than 16 16 pixels, and is challenging to be recognized even by human experts. We attempt to solve the VLRR problem using deep learning methods. Taking advantage of techniques primarily in super resolution, domain adaptation and robust regression, we formulate a dedicated deep learning method and demonstrate how these techniques are incorporated step by step. Any extra complexity, when introduced, is fully justified by both analysis and simulation results. The resulting Robust Partially Coupled Networks achieves feature enhancement and recognition simultaneously. It allows for both the flexibility to combat the LR-HR domain mismatch, and the robustness to outliers. Finally, the effectiveness of the proposed models is evaluated on three different VLRR tasks, including face identification, digit recognition and font recognition, all of which obtain very impressive performances.", "Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for these to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity, and achieve local discrimination by penalizing class distribution overlap. We demonstrate the effectiveness of this idea on several tasks. Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40 . In terms of computational performance, it alleviates training inefficiencies in the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25 relative gains on the softmax classifier and 25-50 on triplet loss in these tasks.", "Abstract: Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for these to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity, and achieve local discrimination by penalizing class distribution overlap. 
We demonstrate the effectiveness of this idea on several tasks. Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40%. In terms of computational performance, it alleviates training inefficiencies in the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25% relative gains on the softmax classifier and 25-50% on triplet loss in these tasks.", "Identifying the same individual across different scenes is an important yet difficult task in intelligent video surveillance. Its main difficulty lies in how to preserve similarity of the same person against large appearance and structure variation while discriminating different individuals. In this paper, we present a scalable distance driven feature learning framework based on the deep neural network for person re-identification, and demonstrate its effectiveness to handle the existing challenges. Specifically, given the training images with the class labels (person IDs), we first produce a large number of triplet units, each of which contains three images, i.e. one person with a matched reference and a mismatched reference. Treating the units as the input, we build the convolutional neural network to generate the layered representations, and follow with the L2 distance metric. By means of parameter optimization, our framework tends to maximize the relative distance between the matched pair and the mismatched pair for each triplet unit. Moreover, a nontrivial issue arising with the framework is that the triplet organization cubically enlarges the number of training triplets, as one image can be involved into several triplet units. To overcome this problem, we develop an effective triplet generation scheme and an optimized gradient descent algorithm, making the computational load mainly depend on the number of original images instead of the number of triplets. On several challenging databases, our approach achieves very promising results and outperforms other state-of-the-art approaches. Highlights: We present a novel feature learning framework for person re-identification. Our framework is based on the maximum relative distance comparison. The learning algorithm is scalable to process large amounts of data. We demonstrate superior performances over other state-of-the-arts." ] }
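The scale-distance idea attributed to Wang et al. above can be sketched as follows: resize the LR probe (and the HR gallery image) to several common scales, compute a distance at each scale, and keep the most favorable one. Plain pixel MSE with nearest-neighbor resizing is a stand-in assumption for the hand-crafted descriptors of the cited work.

```python
# Cross-resolution matching by searching over image scales and taking the
# best (smallest) distance across the scale set.
import numpy as np

def resize_nn(img, h, w):
    # Nearest-neighbor resize without external dependencies.
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def scale_distance(lr_probe, hr_gallery, scales=(0.5, 0.75, 1.0)):
    h, w = hr_gallery.shape
    dists = []
    for s in scales:
        probe = resize_nn(lr_probe, int(h * s), int(w * s))
        gallery = resize_nn(hr_gallery, int(h * s), int(w * s))
        dists.append(np.mean((probe - gallery) ** 2))
    return min(dists)

lr = np.random.rand(32, 16)    # toy low-resolution probe
hr = np.random.rand(128, 64)   # toy high-resolution gallery image
print(scale_distance(lr, hr))
```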
1907.10738
2962927471
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts, and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.
Among these, the closest to our work are the work in @cite_22 , which performs QA using a fine-tuned language model, and the works of @cite_6 @cite_17 , which perform QA using external knowledge.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_17" ], "mid": [ "2774249241", "2112729630", "2517782820", "2778792674" ], "abstract": [ "Research has seen considerable achievements concerning translation of natural language patterns into formal queries for Question Answering (QA) based on Knowledge Graphs (KG). One of the main challenges in this research area is about how to identify which property within a Knowledge Graph matches the predicate found in a Natural Language (NL) relation. Current approaches for formal query generation attempt to resolve this problem mainly by first retrieving the named entity from the KG together with a list of its predicates, then filtering out one from all the predicates of the entity. We attempt an approach to directly match an NL predicate to KG properties that can be employed within QA pipelines. In this paper, we specify a systematic approach as well as providing a tool that can be employed to solve this task. Our approach models KB relations with their underlying parts of speech, we then enhance this with extra attributes obtained from Wordnet and Dependency parsing characteristics. From a question, we model a similar representation of query relations. We then define distance measurements between the query relation and the properties representations from the KG to identify which property is referred to by the relation within the query. We report substantive recall values and considerable precision from our evaluation.", "A range of Natural Language Processing tasks involve making judgments about the semantic relatedness of a pair of sentences, such as Recognizing Textual Entailment (RTE) and answer selection for Question Answering (QA). A key challenge that these tasks face in common is the lack of explicit alignment annotation between a sentence pair. We capture the alignment by using a novel probabilistic model that models tree-edit operations on dependency parse trees. Unlike previous tree-edit models which require a separate alignment-finding phase and resort to ad-hoc distance metrics, our method treats alignments as structured latent variables, and offers a principled framework for incorporating complex linguistic features. We demonstrate the robustness of our model by conducting experiments for RTE and QA, and show that our model performs competitively on both tasks with the same set of general features.", "Passage-level question answer matching is a challenging task since it requires effective representations that capture the complex semantic relations between questions and answers. In this work, we propose a series of deep learning models to address passage answer selection. To match passage answers to questions accommodating their complex semantic relations, unlike most previous work that utilizes a single deep learning structure, we develop hybrid models that process the text using both convolutional and recurrent neural networks, combining the merits on extracting linguistic information from both structures. Additionally, we also develop a simple but effective attention mechanism for the purpose of constructing better answer representations according to the input question, which is imperative for better modeling long answer sequences. The results on two public benchmark datasets, InsuranceQA and TREC-QA, show that our proposed models outperform a variety of strong baselines.", "Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. 
QA systems built on knowledge graphs like DBpedia require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times." ] }
1907.10738
2962927471
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts, and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.
Related to our work on extracting missing knowledge are the works of @cite_1 @cite_9 @cite_15 , which generate a query by extracting key terms from a question and an answer option, by classifying key terms, or by generating key terms with Seq2Seq models, respectively. In comparison, we generate queries using the question, an answer option, and an extracted fact via natural language abduction. The task of natural language abduction for natural language understanding has been studied for a long time @cite_28 @cite_16 @cite_29 @cite_14 @cite_8 @cite_7 @cite_13 @cite_2 . However, such works transform the natural language text to a logical form and then use formal reasoning to perform the abduction. In contrast, our system performs abduction over natural language text without translating it to a logical form.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_29", "@cite_1", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2233653089", "2774249241", "2101349222", "176609766" ], "abstract": [ "We describe a legal question answering system which combines legal information retrieval and textual entailment. We have evaluated our system using the data from the first competition on legal information extraction entailment (COLIEE) 2014. The competition focuses on two aspects of legal information processing related to answering yes no questions from Japanese legal bar exams. The shared task consists of two phases: legal ad hoc information retrieval and textual entailment. The first phase requires the identification of Japan civil law articles relevant to a legal bar exam query. We have implemented two unsupervised baseline models (tf-idf and Latent Dirichlet Allocation (LDA)-based Information Retrieval (IR)), and a supervised model, Ranking SVM, for the task. The features of the model are a set of words, and scores of an article based on the corresponding baseline models. The results show that the Ranking SVM model nearly doubles the Mean Average Precision compared with both baseline models. The second phase is to answer “Yes” or “No” to previously unseen queries, by comparing the meanings of queries with relevant articles. The features used for phase two are syntactic semantic similarities and identification of negation antonym relations. The results show that our method, combined with rule-based model and the unsupervised model, outperforms the SVM-based supervised model.", "Research has seen considerable achievements concerning translation of natural language patterns into formal queries for Question Answering (QA) based on Knowledge Graphs (KG). One of the main challenges in this research area is about how to identify which property within a Knowledge Graph matches the predicate found in a Natural Language (NL) relation. Current approaches for formal query generation attempt to resolve this problem mainly by first retrieving the named entity from the KG together with a list of its predicates, then filtering out one from all the predicates of the entity. We attempt an approach to directly match an NL predicate to KG properties that can be employed within QA pipelines. In this paper, we specify a systematic approach as well as providing a tool that can be employed to solve this task. Our approach models KB relations with their underlying parts of speech, we then enhance this with extra attributes obtained from Wordnet and Dependency parsing characteristics. From a question, we model a similar representation of query relations. We then define distance measurements between the query relation and the properties representations from the KG to identify which property is referred to by the relation within the query. We report substantive recall values and considerable precision from our evaluation.", "Automatic information extraction (IE) enables the construction of very large knowledge bases (KBs), with relational facts on millions of entities from text corpora and Web sources. However, such KBs contain errors and they are far from being complete. This motivates the need for exploiting human intelligence and knowledge using crowd-based human computing (HC) for assessing the validity of facts and for gathering additional knowledge. This paper presents a novel system architecture, called Higgins, which shows how to effectively integrate an IE engine and a HC engine. 
Higgins generates game questions where players choose or fill in missing relations for subject-relation-object triples. For generating multiple-choice answer candidates, we have constructed a large dictionary of entity names and relational phrases, and have developed specifically designed statistical language models for phrase relatedness. To this end, we combine semantic resources like WordNet, ConceptNet, and others with statistics derived from a large Web corpus. We demonstrate the effectiveness of Higgins for knowledge acquisition by crowdsourced gathering of relationships between characters in narrative descriptions of movies and books.", "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be performed in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated." ] }
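The natural language abduction step described in the related-work paragraph above can be illustrated with a minimal sketch: the query for missing knowledge is formed from the content words of the question and candidate answer that the retrieved fact does not already cover. Tokenization and the stopword list are deliberately simplistic assumptions, not the procedure of the cited system.

```python
# Abductive query generation over plain text: hypothesis words not covered
# by the extracted fact become the search terms for missing knowledge.
import re

STOPWORDS = {"a", "an", "the", "is", "are", "of", "to", "what", "which",
             "can", "be"}

def tokens(text):
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS}

def abductive_query(question, answer_option, fact):
    # Keep the hypothesis words that the fact does not already explain.
    return sorted((tokens(question) | tokens(answer_option)) - tokens(fact))

print(abductive_query(
    question="What can be used to generate electricity?",
    answer_option="a windmill converts wind energy",
    fact="a windmill converts wind energy into electricity"))
# -> ['generate', 'used'] : terms to retrieve as missing knowledge
```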
1907.10903
2963481198
Existing Graph Convolutional Networks (GCNs) are shallow---the number of layers is usually not larger than 2. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks like weight penalizing, dropout, and residual connections. This paper reveals that developing deep GCNs mainly encounters two obstacles: over-fitting and over-smoothing. The over-fitting issue weakens the generalization ability on small graphs, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. Hence, we propose DropEdge, a novel technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graphs, acting like a data augmenter and also a message-passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field to the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance, even when the number of layers exceeds 30---the deepest GCN that has ever been proposed.
Inspired by the huge success of CNNs in computer vision, a large number of methods have redefined the notion of convolution on graphs under the umbrella of GCNs. The first prominent research on GCNs is presented in @cite_25 , which develops graph convolution based on spectral graph theory. Later, @cite_14 @cite_17 @cite_8 @cite_9 @cite_15 apply improvements, extensions, and approximations to spectral-based GCNs. To contend with the scalability issue of spectral-based GCNs on large graphs, spatial-based GCNs have been rapidly developed @cite_5 @cite_24 @cite_31 @cite_11 . These methods directly perform convolution in the graph domain by aggregating the information from neighbor nodes. Recently, several sampling-based methods have been proposed for fast graph representation learning, including node-wise sampling @cite_5 , the layer-wise approach @cite_7 , and its layer-dependent variant @cite_28 .
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_11", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_24", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2890703109", "2809418595", "2614256707", "2798598284" ], "abstract": [ "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on large-scale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "Convolutional neural networks (CNNs) have achieved great success on grid-like data such as images, but face tremendous challenges in learning from more generic data such as graphs. In CNNs, the trainable local filters enable the automatic extraction of high-level features. The computation with filters requires a fixed number of ordered units in the receptive fields. However, the number of neighboring units is neither fixed nor are they ordered in generic graphs, thereby hindering the applications of convolutional operations. Here, we address these challenges by proposing the learnable graph convolutional layer (LGCL). LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. To enable model training on large-scale graphs, we propose a sub-graph training method to reduce the excessive memory and computational resource requirements suffered by prior methods on graph convolutions. Our experimental results on node classification tasks in both transductive and inductive learning settings demonstrate that our methods can achieve consistently better performance on the Cora, Citeseer, Pubmed citation network, and protein-protein interaction network datasets. Our results also indicate that the proposed methods using sub-graph training strategy are more efficient as compared to prior approaches.", "In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. 
The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.", "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture." ] }
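As a concrete anchor for the GCN variants surveyed above, here is a minimal NumPy sketch of the standard renormalized propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). Dense matrices are an assumption made for clarity; practical implementations use sparse operations, and the exact rule varies across the cited methods.

```python
# One GCN layer: add self-loops, symmetrically normalize the adjacency,
# aggregate neighbor features, apply a linear map and ReLU.
import numpy as np

def gcn_layer(adj, features, weight):
    a_hat = adj + np.eye(adj.shape[0])            # A + I (self-loops)
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # D^{-1/2}
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ features @ weight, 0.0)  # ReLU

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)  # toy 3-node path graph
features = np.random.rand(3, 4)
weight = np.random.rand(4, 2)
print(gcn_layer(adj, features, weight).shape)  # (3, 2)
```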
1907.10903
2963481198
Existing Graph Convolutional Networks (GCNs) are shallow---the number of layers is usually no larger than 2. Deeper variants built by simply stacking more layers unfortunately perform worse, even with well-known tricks such as weight penalizing, dropout, and residual connections. This paper reveals that developing deep GCNs mainly encounters two obstacles: over-fitting and over-smoothing. The over-fitting issue weakens the generalization ability on small graphs, while over-smoothing impedes model training by isolating output representations from the input features as the network depth increases. Hence, we propose DropEdge, a novel technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graphs, acting as a data augmenter and also as a message-passing reducer. More importantly, DropEdge enables us to recast a wider range of Convolutional Neural Networks (CNNs) from the image field to the graph domain; in particular, we study DenseNet and InceptionNet in this paper. Extensive experiments on several benchmarks demonstrate that our method allows deep GCNs to achieve promising performance, even when the number of layers exceeds 30---the deepest GCN that has ever been proposed.
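The edge-dropping step this abstract describes is simple enough to sketch directly. Below is a minimal, illustrative NumPy version (not the authors' code): it assumes the graph is given as a (2, E) edge-index array and drops each edge independently once per epoch; in the full method the adjacency matrix would also be re-normalized after dropping.

```python
import numpy as np

def drop_edge(edge_index: np.ndarray, drop_rate: float,
              rng: np.random.Generator) -> np.ndarray:
    """Randomly remove a fraction of edges, as a per-epoch augmentation.

    edge_index: (2, E) array of (source, target) pairs.
    Returns a (2, E') array keeping each edge with probability 1 - drop_rate.
    """
    keep = rng.random(edge_index.shape[1]) >= drop_rate
    return edge_index[:, keep]

# Example: a 4-node cycle with ~30% of edges dropped for one epoch.
rng = np.random.default_rng(0)
edges = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 0]])
print(drop_edge(edges, 0.3, rng))
```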
Despite the fruitful progress, most previous works focus only on shallow GCNs, and deeper extensions are seldom discussed. The work by @cite_30 first introduces the concept of over-smoothing in GCNs, but it does not propose a deep GCN that addresses this issue. A follow-up study @cite_22 tackles over-smoothing by using personalized PageRank, which additionally involves the root node in the message-passing loop; however, the accuracy is still observed to decrease as the depth of the GCN grows beyond 2. JKNet @cite_10 employs skip connections for multi-hop message passing, enabling different neighborhood ranges for better structure-aware representation learning. Unexpectedly, as shown in the experiments, the JKNets that obtain the best accuracy have a depth of less than 3 on all datasets, except on Cora, where the best result is given by the 6-layer network. In this paper, we propose the notion of DropEdge to overcome both the over-fitting and over-smoothing issues simultaneously, and we combine it with various backbone architectures to conduct an in-depth analysis of deep GCNs.
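To make the over-smoothing obstacle mentioned in this paragraph concrete, the toy NumPy computation below (our illustration, not taken from any of the cited papers) repeatedly applies the symmetrically normalized adjacency used by GCNs to a node feature vector; as the depth grows, the output converges to a fixed pattern determined only by node degrees, carrying almost no information about the input features.

```python
import numpy as np

# A small connected graph: a triangle on nodes 0-2 plus a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                    # add self-loops, as in GCN
d = A_hat.sum(axis=1)
P = A_hat / np.sqrt(np.outer(d, d))      # D^{-1/2} (A + I) D^{-1/2}

x = np.array([1.0, -1.0, 0.5, 2.0])      # one scalar feature per node
for k in (1, 2, 8, 32):
    print(k, np.round(np.linalg.matrix_power(P, k) @ x, 3))
# As k grows the output approaches c * sqrt(d_i): the same limit (up to scale)
# for almost any input x -- the over-smoothing that isolates outputs from inputs.
```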
{ "cite_N": [ "@cite_30", "@cite_10", "@cite_22" ], "mid": [ "2890703109", "2593110912", "2953324412", "2784814091" ], "abstract": [ "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on large-scale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.", "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. 
Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.", "Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
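The @math placeholders above elide the actual formula, so the LaTeX reconstruction below is an assumption based on the usual convention for the p-frame potential of N unit vectors x_1, ..., x_N in R^d (some authors also include the diagonal i = j terms):

```latex
\operatorname{FP}_{p}(x_1,\dots,x_N)
  \;=\; \sum_{\substack{i,j=1 \\ i \neq j}}^{N}
        \bigl|\langle x_i, x_j\rangle\bigr|^{p},
  \qquad \|x_i\| = 1 .
```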
For any @math , Ehler and Okoudjou provided another bound in @cite_12 : where the equality holds if and only if @math is an equiangular tight frame (ETF) in @math @cite_1 @cite_15 . We take @math as an example. Since there always exist @math unit vectors in @math forming an ETF @cite_3 , the set of these @math vectors is the minimizer of the @math -frame potential for @math .
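The elided example here is presumably the regular-simplex case of N = d + 1 unit vectors in R^d, which always form an ETF; that assumption underlies the NumPy check below (our sketch). It builds the simplex by centering and normalizing the standard basis of R^{d+1}: all pairwise inner products equal -1/d (equiangularity), and the d nonzero squared singular values are equal (tightness for the d-dimensional span).

```python
import numpy as np

d = 3
E = np.eye(d + 1)
V = E - E.mean(axis=0)                         # simplex vertices in the hyperplane sum(x) = 0
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize each vector

G = V @ V.T                                    # Gram matrix
print(np.round(G, 3))                          # 1 on the diagonal, -1/d elsewhere

s2 = np.linalg.svd(V, compute_uv=False) ** 2
print(np.round(s2, 3))                         # d equal values (d+1)/d and one zero:
                                               # a tight frame for the d-dim span
```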
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_1", "@cite_12" ], "mid": [ "2916660142", "2083945855", "2032049628", "2949121341" ], "abstract": [ "An extension is given of a recent result of Glazyrin, showing that an orthonormal basis @math joined with the vectors @math , where @math minimizes the @math -frame potential for @math over all collections of @math vectors @math in @math .", "Abstract We investigate the recovery of almost s -sparse vectors x ∈ C N from undersampled and inaccurate data y = A x + e ∈ C m by means of minimizing ‖ z ‖ 1 subject to the equality constraints A z = y . If m ≍ s ln ( N s ) and if Gaussian random matrices A ∈ R m × N are used, this equality-constrained l 1 -minimization is known to be stable with respect to sparsity defects and robust with respect to measurement errors. If m ≍ s ln ( N s ) and if Weibull random matrices are used, we prove here that the equality-constrained l 1 -minimization remains stable and robust. The arguments are based on two key ingredients, namely the robust null space property and the quotient property. The robust null space property relies on a variant of the classical restricted isometry property where the inner norm is replaced by the l 1 -norm and the outer norm is replaced by a norm comparable to the l 2 -norm. For the l 1 -minimization subject to inequality constraints, this yields stability and robustness results that are also valid when considering sparsity relative to a redundant dictionary. As for the quotient property, it relies on lower estimates for the tail probability of sums of independent Weibull random variables.", "Let (a>0, b>0, ab<1; ) and let (g L^2( R ). ) In this paper we investigate the relation between the frame operator (S:f L^2( R ) n,m ,(f,g_ na,mb ) ,g_ na,mb ) and the matrix (H ) whose entries (H_ k,l ,; ,k',l' ) are given by ((g_ k' b,l' a ,g_ k b,l a ) ) for (k,l,k',l' Z . ) Here (f_ x,y (t)= exp (2 iyt) ,f(t-x), ) (t R ), for any (f L^2( R ). ) We show that (S ) is bounded as a mapping of (L^2( R ) ) into (L^2( R ) ) if and only if (H ) is bounded as a mapping of (l^2( Z ^2) ) into (l^2( Z ^2). ) Also we show that (AI S BI ) if and only if (AI 1 ab ,H BI, ) where (I ) denotes the identity operator of (L^2( R ) ) and (l^2( Z ^2), ) respectively, and (A 0, ) (B< . ) Next, when (g ) generates a frame, we have that ((g_ k b,l a )_ k,l ) has an upper frame bound, and the minimal dual function (^ ) can be computed as (ab , k,l ,(H^ -1 )_ k,l ,; ,o,o ,g_ k b,l a . ) The results of this paper extend, generalize, and rigourize results of Wexler and Raz and of Qian, D. Chen, K. Chen, and Li on the computation of dual functions for finite, discrete-time Gabor expansions to the infinite, continuous-time case. Furthermore, we present a framework in which one can show that certain smoothness and decay properties of a (g ) generating a frame are inherited by (^ . ) In particular, we show that (^ S ) when (g S ) generates a frame (( S ) Schwartz space). The proofs of the main results of this paper rely heavily on a technique introduced by Tolimieri and Orr for relating frame bound questions on complementary lattices by means of the Poisson summation formula.", "We study the problem of fair allocation for indivisible goods. We use the the maxmin share paradigm introduced by Budish as a measure for fairness. Procacciafirst (EC'14) were first to investigate this fundamental problem in the additive setting. 
In contrast to what real-world experiments suggest, they show that a maxmin guarantee (1- @math allocation) is not always possible even when the number of agents is limited to 3. While the existence of an approximation solution (e.g. a @math - @math allocation) is quite straightforward, improving the guarantee becomes subtler for larger constants. Procaccia and Wang provide a proof for the existence of a @math - @math allocation and leave the question open for better guarantees. Our main contribution is an answer to the above question. We improve the result of Procaccia and Wang to a @math factor in the additive setting. The main idea for our @math - @math allocation method is clustering the agents. To this end, we introduce three notions and techniques, namely reducibility, matching allocation, and cycle-envy-freeness, and prove the approximation guarantee of our algorithm via non-trivial applications of these techniques. Our analysis involves coloring and double counting arguments that might be of independent interest. One major shortcoming of the current studies on fair allocation is the additivity assumption on the valuations. We alleviate this by extending our results to the case of submodular, fractionally subadditive, and subadditive settings. More precisely, we give constant approximation guarantees for submodular and XOS agents, and a logarithmic approximation for the case of subadditive agents. Furthermore, we complement our results by providing close upper bounds for each class of valuation functions. Finally, we present algorithms to find such allocations for additive, submodular, and XOS settings in polynomial time." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
However, when @math , not much is known except for a few special cases. In @cite_12 , Ehler and Okoudjou solved the simplest case where @math and @math , and also proved that the minimizer of the @math -frame potential is exactly @math copies of an orthonormal basis if @math , where @math is a positive integer. In @cite_4 , Glazyrin provided a lower bound for any @math : but the condition under which the equality holds is very restrictive. In @cite_13 , Chen, Gonzales, Goodman, Kang and Okoudjou considered the special case where @math . In particular, numerical experiments in @cite_13 suggest that the set @math , which is called a lifted ETF, is the minimizer of the @math -frame potential, where @math is an integer depending on @math . Here, @math is defined as a set of @math unit vectors in @math satisfying . . Note that @math actually forms an ETF in some subspace @math of dimension @math , and the remaining @math vectors form an orthonormal basis in the orthogonal complement of @math .
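A quick numerical sketch (ours, not from the cited papers; the exact thresholds are elided by the @math placeholders) shows why the minimizer must depend on p in the N = d + 1 regime discussed here: an orthonormal basis with one repeated vector wins for small p, while the full simplex ETF wins for larger p, consistent with the lifted-ETF picture.

```python
import numpy as np

def p_frame_potential(X: np.ndarray, p: float) -> float:
    """sum_{i != j} |<x_i, x_j>|^p for unit row vectors (one common convention)."""
    G = np.abs(X @ X.T) ** p
    return float(G.sum() - np.trace(G))

d = 4                                           # so N = d + 1 = 5
basis_plus_repeat = np.vstack([np.eye(d), np.eye(d)[:1]])

E = np.eye(d + 1)                               # simplex ETF, built as before
simplex = E - E.mean(axis=0)
simplex /= np.linalg.norm(simplex, axis=1, keepdims=True)

for p in (1.5, 3.0):
    print(p,
          round(p_frame_potential(basis_plus_repeat, p), 3),  # 2.0 for any p
          round(p_frame_potential(simplex, p), 3))            # 20 * (1/d)^p
```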
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_12" ], "mid": [ "2916660142", "2000931246", "2537174396", "2885396681" ], "abstract": [ "An extension is given of a recent result of Glazyrin, showing that an orthonormal basis @math joined with the vectors @math , where @math minimizes the @math -frame potential for @math over all collections of @math vectors @math in @math .", "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that kF5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601q . In general, by letting the U This work was performed while the first author was visiting the School of Computer Science, Carleton Ž University, and was partially supported by NSERC Natural Sciences and Engineering Research Council . of Canada , and by a grant from the University of Patras for sabbatical leaves. The second and third Ž authors were supported in part by grants from NSERC Natural Sciences and Engineering Research . Council of Canada . During the last stages of this research, the first and last authors were also partially Ž . supported by EU ESPRIT Long-Term Research Project ALCOM-IT Project No. 20244 . †An extended abstract of this paper was published in the Proceedings of the Fourth Annual European Ž Symposium on Algorithms, ESA’96, September 25]27, 1996, Barcelona, Spain Springer-Verlag, LNCS, . pp. 27]38 . That extended abstract was coauthored by the first three authors of the present paper. Correspondence to: L. M. Kirousis Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r030253-17 253", "This paper considers recovering @math -dimensional vectors @math , and @math from their circular convolutions @math . The vector @math is assumed to be @math -sparse in a known basis that is spread out in the Fourier domain, and each input @math is a member of a known @math -dimensional random subspace. We prove that whenever @math , the problem can be solved effectively by using only the nuclear-norm minimization as the convex relaxation, as long as the inputs are sufficiently diverse and obey @math . By “diverse inputs,” we mean that the @math ’s belong to different, generic subspaces. To the best of our knowledge, this is the first theoretical result on blind deconvolution where the subspace to which @math belongs is not fixed but needs to be determined. We discuss the result in the context of multipath channel estimation in wireless communications. Both the fading coefficients and the delays in the channel impulse response @math are unknown. The encoder codes the @math -dimensional message vectors randomly and then transmits coded messages @math ’s over a fixed channel one after the other. 
The decoder then discovers all of the messages and the channel response when the number of samples taken for each received message are roughly greater than @math , and the number of messages is roughly at least @math .", "We develop fast and memory efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification(UQ) problems involving the solution of parametric PDE that are approximately sparse in Chebyshev or Legendre product bases. We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above which will eventually be able to scale to significantly higher-dimensional problems than what are currently computationally feasible. More concretely, let @math be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality @math . We will develop methods that approximate any function @math that is sparse in the BOPB, that is, @math of the form @math with @math of cardinality @math . Our method has a runtime of just @math , uses only @math function evaluations on a fixed and nonadaptive grid, and not more than @math bits of memory. For @math , the runtime @math will be less than what is required to simply enumerate the elements of the basis @math ; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as \"sublinear-time\". This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit which typically store and utilize full intermediate basis representations of size @math ." ] }
1907.10861
2962969695
For any positive real number @math , the @math -frame potential of @math unit vectors @math is defined as @math . In this paper, we focus on the special case @math and establish the unique minimizer of @math for @math . Our results completely solve the minimization problem of @math -frame potential when @math , which confirms a conjecture posed by Chen, Gonzales, Goodman, Kang and Okoudjou.
The cases @math and @math of Conjecture are already solved in @cite_12 and @cite_9 , respectively. The first new result for Conjecture was obtained by Glazyrin in @cite_14 , who shows that an orthonormal basis in @math plus a repeated vector minimizes @math for any @math . Combining Glazyrin's result with the previous ones, the minimizer of @math is only known for @math . Recently, Park extended Glazyrin's result to the case @math where @math , and showed that an orthonormal basis plus @math repeated vectors is the minimizer for any @math (see @cite_5 ). However, the minimal @math -frame potential problem remains open for the case @math when @math .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_14", "@cite_12" ], "mid": [ "2916660142", "2885396681", "1521197246", "2591592591" ], "abstract": [ "An extension is given of a recent result of Glazyrin, showing that an orthonormal basis @math joined with the vectors @math , where @math minimizes the @math -frame potential for @math over all collections of @math vectors @math in @math .", "We develop fast and memory efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification(UQ) problems involving the solution of parametric PDE that are approximately sparse in Chebyshev or Legendre product bases. We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above which will eventually be able to scale to significantly higher-dimensional problems than what are currently computationally feasible. More concretely, let @math be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality @math . We will develop methods that approximate any function @math that is sparse in the BOPB, that is, @math of the form @math with @math of cardinality @math . Our method has a runtime of just @math , uses only @math function evaluations on a fixed and nonadaptive grid, and not more than @math bits of memory. For @math , the runtime @math will be less than what is required to simply enumerate the elements of the basis @math ; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as \"sublinear-time\". This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit which typically store and utilize full intermediate basis representations of size @math .", "We study a statistical model for the tensor principal component analysis problem introduced by Montanari and Richard: Given a order- @math tensor @math of the form @math , where @math is a signal-to-noise ratio, @math is a unit vector, and @math is a random noise tensor, the goal is to recover the planted vector @math . For the case that @math has iid standard Gaussian entries, we give an efficient algorithm to recover @math whenever @math , and certify that the recovered vector is close to a maximum likelihood estimator, all with high probability over the random choice of @math . The previous best algorithms with provable guarantees required @math . In the regime @math , natural tensor-unfolding-based spectral relaxations for the underlying optimization problem break down (in the sense that their integrality gap is large). To go beyond this barrier, we use convex relaxations based on the sum-of-squares method. Our recovery algorithm proceeds by rounding a degree- @math sum-of-squares relaxations of the maximum-likelihood-estimation problem for the statistical model. To complement our algorithmic results, we show that degree- @math sum-of-squares relaxations break down for @math , which demonstrates that improving our current guarantees (by more than logarithmic factors) would require new techniques or might even be intractable. Finally, we show how to exploit additional problem structure in order to solve our sum-of-squares relaxations, up to some approximation, very efficiently. 
Our fastest algorithm runs in nearly-linear time using shifted (matrix) power iteration and has similar guarantees as above. The analysis of this algorithm also confirms a variant of a conjecture of Montanari and Richard about singular vectors of tensor unfoldings.", "This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given @math binary matrix @math , the GM-MDS conjecture, proposed by , states that if @math satisfies the so-called MDS condition, then for any field @math of size @math , there exists an @math MDS code whose generator matrix @math , with entries in @math , fits the matrix @math (i.e., @math is the support matrix of @math ). Despite all the attempts by the coding theory community, this conjecture remains still open in general. It was shown, independently by and , that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if @math satisfies the MDS condition, then the determinant of a transform matrix @math , such that @math fits @math , is not identically zero, where @math is a Vandermonde matrix with distinct parameters. In this work, we first reformulate the TM-MDS conjecture in terms of the Wronskian determinant, and then present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ( @math ) of @math is upper bounded by @math . For this class of special cases of @math where the only additional constraint is on @math , only cases with @math were previously proven theoretically, and the previously used proof techniques are not applicable to cases with @math ." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy traffic streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that the traffic load does not change with time. This paper contributes a pioneering analytical approach to studying Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
The most widely known mathematical model of DCF --- the basic random channel access method used in Wi-Fi networks --- was developed by Bianchi in @cite_1 . The model allows estimating the maximal throughput, assuming that a constant number of active STAs operate in saturated conditions. Hence, the model cannot be used to solve the problems stated in Section , since in these problems the number of active STAs decreases with time. However, paper @cite_1 contains the basic principles of Wi-Fi modeling. In particular, it introduces the concept of a virtual slot, which is the time interval between consecutive backoff counter changes.
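For reference, the saturation model mentioned here reduces to a two-equation fixed point that is easy to solve numerically. The sketch below is our implementation of the standard form of Bianchi's equations (W is the minimum contention window, m the maximum backoff stage, n the number of saturated STAs; the example parameter values are illustrative): tau is the probability that a STA transmits in a virtual slot, and p is its conditional collision probability.

```python
def bianchi_fixed_point(n: int, W: int = 16, m: int = 6,
                        iters: int = 500) -> tuple[float, float]:
    """Damped fixed-point iteration for Bianchi's saturation model."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)       # collision prob. seen by a sender
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new        # damping stabilizes the iteration
    return tau, p

for n in (5, 50, 500):
    tau, p = bianchi_fixed_point(n)
    print(n, round(tau, 4), round(p, 4))       # tau falls and p rises with n
```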
{ "cite_N": [ "@cite_1" ], "mid": [ "2143747785", "2583248450", "2163814678", "2098080835" ], "abstract": [ "This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time-slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks.", "TO-DCF, a new backoff scheme for 802.11, has the potential to significantly increase throughput in dense wireless LANs while also opportunistically favouring nodes with heavier traffic loads and or better channel conditions. In this paper we present an analytical model to investigate the behaviour and performance of the TO-DCF protocol with regards to operating parameters such as the number of nodes, the contention window size and the backoff countdown probabilities. We then compare numerical results from an implementation of our model with simulations. Our model shows a high level of accuracy, even when the model assumptions are relaxed, and provides guidance for network operators to correctly configure the weight functions for nodes running TO-DCF given the network’s operating conditions.", "Analytical modeling of the 802.11e enhanced distributed channel access (EDCA) mechanism is today a fairly mature research area, considering the very large number of papers that have appeared in the literature. However, most work in this area models the EDCA operation through per-slot statistics, namely probability of transmission and collisions referred to \"slots.\" In so doing, they still share a methodology originally proposed for the 802.11 Distributed Coordination Function (DCF), although they do extend it by considering differentiated transmission collision probabilities over different slots.We aim to show that it is possible to devise 802.11e models that do not rely on per-slot statistics. To this purpose, we introduce and describe a novel modeling methodology that does not use per-slot transmission collision probabilities, but relies on the fixed-point computation of the whole (residual) backoff counter distribution occurring after a generic transmission attempt. 
The proposed approach achieves high accuracy in describing the channel access operations, not only in terms of throughput and delay performance, but also in terms of low-level performance metrics.", "The performance of the distributed coordination function (DCF) of the IEEE 802.11 protocol has been shown to heavily depend on the number of terminals accessing the distributed medium. The DCF uses a carrier sense multiple access scheme with collision avoidance (CSMA CA), where the backoff parameters are fixed and determined by the standard. While those parameters were chosen to provide a good protocol performance, they fail to provide an optimum utilization of the channel in many scenarios. In particular, under heavy load scenarios, the utilization of the medium can drop tenfold. Most of the optimization mechanisms proposed in the literature are based on adapting the DCF backoff parameters to the estimate of the number of competing terminals in the network. However, existing estimation algorithms are either inaccurate or too complex. In this paper, we propose an enhanced version of the IEEE 802.11 DCF that employs an adaptive estimator of the number of competing terminals based on sequential Monte Carlo methods. The algorithm uses a Bayesian approach, optimizing the backoff parameters of the DCF based on the predictive distribution of the number of competing terminals. We show that our algorithm is simple yet highly accurate even at small time scales. We implement our proposed new DCF in the ns-2 simulator and show that it outperforms existing methods. We also show that its accuracy can be used to improve the results of the protocol even when the terminals are not in saturation mode. Moreover, we show that there exists a Nash equilibrium strategy that prevents rogue terminals from changing their parameters for their own benefit, making the algorithm safely applicable in a complete distributed fashion" ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy traffic streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that the traffic load does not change with time. This paper contributes a pioneering analytical approach to studying Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
Paper @cite_0 presents a model which allows estimating the maximal throughput (again, in saturated scenarios) when all STAs are divided equally into several groups and each slot is assigned to a group. It proves that RAW increases throughput manifold in a network with thousands of STAs; however, the model cannot be used for our problems for the aforementioned reasons.
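The benefit of such grouping is easy to see with a back-of-the-envelope computation (our toy illustration, not the cited model): if each STA transmits in a slot with some probability tau, and the Restricted Access Window (RAW) splits n contenders evenly into g groups, then a transmitting STA competes only within its own group.

```python
tau = 0.02        # per-slot transmission probability of one STA (illustrative)
n = 1000          # total number of contending STAs
for g in (1, 10, 100):
    per_group = n // g
    p_collision = 1.0 - (1.0 - tau) ** (per_group - 1)
    print(f"groups={g:4d}  contenders/slot={per_group:5d}  p_collision={p_collision:.3f}")
```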
{ "cite_N": [ "@cite_0" ], "mid": [ "2059060883", "2118762973", "2144431033", "2044081853" ], "abstract": [ ".We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89–148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5–25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks. 1. Introduction. This paper is a continuation of Dai and Lin (2005), in which maximum pressure policies are shown to be throughput optimal for a class of stochastic processing networks. Throughput optimality is an important, first-order objective for many networks, but it ignores some key secondary performance measures like queueing delays experienced by jobs in these networks. In this paper we show that maximum pressure policies enjoy additional optimality properties; they are asymptotically optimal in minimizing a certain workload or holding cost of a stochastic processing network. Stochastic processing networks have been introduced in a series of three papers by Harrison (2000, 2002, 2003). In Dai and Lin (2005) and this paper we consider a special class of Harrison’s model. This class of stochastic processing networks is much more general than multiclass queueing networks that have been a subject of intensive study in the last 20 years; see, for example, Harrison (1988), Williams", "Abstract Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years fairly tractable models have been shown to yield remarkably accurate throughput estimates in scenarios with saturated buffers. In contrast, in non-saturated scenarios, where nodes refrain from competition for the medium when their buffers are empty, a complex two-way interaction arises between the activity states and the buffer contents of the various nodes. As a result, the throughput characteristics in such scenarios have largely remained elusive so far. In the present paper we provide a generic structural characterization of the throughput performance and corresponding stability region in terms of the individual saturation throughputs of the various nodes. While the saturation throughputs are difficult to explicitly determine in general, we identify certain cases where these values can be expressed in closed form. In addition, we demonstrate that various lower-dimensional facets of the stability region can be explicitly calculated as well, depending on the neighborhood structure of the interference graph. 
Illustrative examples and numerical results are presented to illuminate the main analytical findings.", "This paper presents an analytical study of the stable throughput for multiple broadcast sessions in a multi-hop wireless tandem network with random access. Intermediate nodes leverage on the broadcast nature of wireless medium access to perform inter-session network coding among different flows. This problem is challenging due to the interaction among nodes, and has been addressed so far only in the saturated mode where all nodes always have packet to send, which results in infinite packet delay. In this paper, we provide a novel model based on multi-class queueing networks to investigate the problem in unsaturated mode. We devise a theoretical framework for computing maximum stable throughput of network coding for a slotted ALOHA-based random access system. Using our formulation, we compare the performance of network coding and traditional routing. Our results show that network coding leads to high throughput gain over traditional routing. We also define a new metric, network unbalance ratio (NUR), that indicates the unbalance status of the utilization factors at different nodes. We show that although the throughput gain of the network coding compared to the traditional routing decreases when the number of nodes tends to infinity, NUR of the former outperforms the latter. We carry out simulations to confirm our theoretical analysis.", "In this paper, we undertake the first study of statistical multiplexing from the perspective of approximation algorithms. The basic issue underlying statistical multiplexing is the following: in high-speed networks, individual connections (i.e., communication sessions) are very bursty, with transmission rates that vary greatly over time. As such, the problem of packing multiple connections together on a link becomes more subtle than in the case when each connection is assumed to have a fixed demand. We consider one of the most commonly studied models in this domain: that of two communicating nodes connected by a set of parallel edges, where the rate of each connection between them is a random variable. We consider three related problems: (1) stochastic load balancing, (2) stochastic bin-packing, and (3) stochastic knapsack. In the first problem the number of links is given and we want to minimize the expected value of the maximum load. In the other two problems the link capacity and an allowed overflow probability p are given, and the objective is to assign connections to links, so that the probability that the load of a link exceeds the link capacity is at most @math . In bin-packing we need to assign each connection to a link using as few links as possible. In the knapsack problem each connection has a value, and we have only one link. The problem is to accept as many connections as possible. For the stochastic load balancing problem we give an O(1)-approximation algorithm for arbitrary random variables. For the other two problems we have algorithms restricted to on-off sources (the most common special case studied in the statistical multiplexing literature), with a somewhat weaker range of performance guarantees. A standard approach that has emerged for dealing with probabilistic resource requirements is the notion of effective bandwidth---this is a means of associating a fixed demand with a bursty connection that \"represents\" its distribution as closely as possible. 
Our approximation algorithms make use of the standard definition of effective bandwidth and also a new one that we introduce; the performance guarantees are based on new results showing that a combination of these measures can be used to provide bounds on the optimal solution." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices which generate rather heavy streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things with myriads of autonomous devices and machine type communications (MTC) traffic raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi for MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that traffic load does not change with time. This paper contributes with a pioneer analytical approach to study Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
In @cite_4 , the authors consider another protocol, IEEE 802.15.4, which uses a channel access method similar to EDCA. However, in 802.15.4 a STA senses the channel only when its backoff ends. Although the cited work presents a performance evaluation in a scenario similar to the one described in this paper, the authors assume that the collision probability is constant, while in reality both the varying contention window and the changing number of STAs that have packets to transmit make the collision probability vary with time.
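The time dependence noted here is easy to reproduce with a toy slotted simulation (ours, far simpler than either protocol): when every STA holds a single packet, each success removes one contender, so the conditional collision probability keeps falling instead of staying constant.

```python
import numpy as np

rng = np.random.default_rng(2)
active, q = 50, 0.02           # STAs with one pending packet; per-slot tx probability
for slot in range(1, 3001):
    k = int((rng.random(active) < q).sum())   # simultaneous transmitters this slot
    if k == 1:
        active -= 1                           # a lone transmitter succeeds and leaves
    if slot % 300 == 0 or active == 0:
        p = 0.0 if active < 2 else 1.0 - (1.0 - q) ** (active - 1)
        print(f"slot={slot:4d}  active={active:2d}  p_collision={p:.3f}")
    if active == 0:
        break
```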
{ "cite_N": [ "@cite_4" ], "mid": [ "1185134559", "2163814678", "2129374252", "2261973694" ], "abstract": [ "This paper proposes analytical expressions of end-to-end throughput for IEEE 802.11e Enhanced Distributed Channel Access (EDCA) wireless string-topology multi-hop net- works. For obtaining the IEEE 802.11e EDCA performance, internal collisions between Access Categories (ACs) in a node, frame collisions with external nodes, and frame-existence probabil- ities of buffers at each AC are expressed as functions of EDCA access parameters. Therefore, it is possible to obtain the effects of the EDCA access parameters to Quality of Service (QoS) support in the EDCA. It is possible to obtain the end-to-end throughput at any offered load with respect to each AC because the buffer states can be expressed according to ACs. The obtained analytical expressions are verified by showing the quantitative agreements with sim- ulation results.", "Analytical modeling of the 802.11e enhanced distributed channel access (EDCA) mechanism is today a fairly mature research area, considering the very large number of papers that have appeared in the literature. However, most work in this area models the EDCA operation through per-slot statistics, namely probability of transmission and collisions referred to \"slots.\" In so doing, they still share a methodology originally proposed for the 802.11 Distributed Coordination Function (DCF), although they do extend it by considering differentiated transmission collision probabilities over different slots.We aim to show that it is possible to devise 802.11e models that do not rely on per-slot statistics. To this purpose, we introduce and describe a novel modeling methodology that does not use per-slot transmission collision probabilities, but relies on the fixed-point computation of the whole (residual) backoff counter distribution occurring after a generic transmission attempt. The proposed approach achieves high accuracy in describing the channel access operations, not only in terms of throughput and delay performance, but also in terms of low-level performance metrics.", "The IEEE 802.11 standard for wireless local area networks (WLANs) employs a medium access control (MAC), called distributed coordination function (DCF), which is based on carrier sense multiple access with collision avoidance (CSMA CA). The collision avoidance mechanism utilizes the random backoff prior to each frame transmission attempt. The random nature of the backoff reduces the collision probability, but cannot completely eliminate collisions. It is known that the throughput performance of the 802.11 WLAN is significantly compromised as the number of stations increases. In this paper, we propose a novel distributed reservation-based MAC protocol, called early backoff announcement (EBA), which is backward compatible with the legacy DCF. Under EBA, a station announces its future backoff information in terms of the number of backoff slots via the MAC header of its frame being transmitted. All the stations receiving the information avoid collisions by excluding the same backoff duration when selecting their future backoff value. Through extensive simulations, EBA is found to achieve a significant increase in the throughput performance as well as a higher degree of fairness compared to the 802.11 DCF.", "Carrier sense multiple access with collision avoidance (CSHA CA) has been the access protocol of choice for IEEE 802.11-based WLANs. 
In addition to channel sensing before transmission, the probability of a collision in these WLANs is typically reduced by the application of a binary exponential backoff (BEB) algorithm that randomizes the selection of the time slot in which a given station transmits. In a system without adaptive modulation and coding (AHC), the reduction in spectral efficiency caused by BEB is outweighed by the reduction in the number of collisions. In contrast, in systems using the ubiquitous auto rate fallback (ARF) AHC algorithm, which is unable to distinguish a collision from an erroneous transmission, the remaining collisions induce a dramatic drop in system performance. This degradation is caused by the utilization of low-rate transmission modes even when the channel conditions would permit the use of much higher-rate modes. In an attempt to further reduce the number of collisions, a variant of CSHA CA, called enhanced collision avoidance (CSHA E2CA), has been recently proposed. In this paper, a model approach to the performance evaluation of both BEB-based CSHA CA and CSHA E2CA, used in conjunction with ARF, is presented and validated. Results reveal the synergistic properties of the E2CA and ARF combination, as demonstrated by the superior goodput performance when compared against other strategies." ] }
1907.10758
1568415925
Wi-Fi was originally designed to provide broadband wireless Internet access for devices that generate rather heavy traffic streams. And Wi-Fi succeeded. The coming revolution of the Internet of Things, with myriads of autonomous devices and machine type communications (MTC) traffic, raises a question: can the Wi-Fi success story be repeated in the area of MTC? Started in 2010, IEEE 802.11 Task Group ah (TGah) has developed a draft amendment to the IEEE 802.11 standard, adapting Wi-Fi to MTC requirements. The performance of novel channel access enhancements in MTC scenarios can hardly be studied with models from Bianchi's clan, which typically assume that the traffic load does not change with time. This paper contributes a pioneering analytical approach to studying Wi-Fi-based MTC, which can be used to investigate and customize many mechanisms developed by TGah.
The authors of @cite_7 study the power saving mechanism. They develop a model which allows estimating the average energy consumed by a STA and the average time a STA needs to retrieve its data. As shown in , even though the model developed in @cite_7 can be used to find the average frame transmission time for a STA, it cannot be used to find the correct time distribution required in problem A. Besides that, it cannot be used at all to solve problem B.
{ "cite_N": [ "@cite_7" ], "mid": [ "2088506000", "2101261257", "2024379753", "2031440489" ], "abstract": [ "In energy harvesting communication systems, an exogenous recharge process supplies energy necessary for data transmission and the arriving energy can be buffered in a battery before consumption. We determine the information-theoretic capacity of the classical additive white Gaussian noise (AWGN) channel with an energy harvesting transmitter with an unlimited sized battery. As the energy arrives randomly and can be saved in the battery, codewords must obey cumulative stochastic energy constraints. We show that the capacity of the AWGN channel with such stochastic channel input constraints is equal to the capacity with an average power constraint equal to the average recharge rate. We provide two capacity achieving schemes: save-and-transmit and best-effort-transmit. In the save-and-transmit scheme, the transmitter collects energy in a saving phase of proper duration that guarantees that there will be no energy shortages during the transmission of code symbols. In the best-effort-transmit scheme, the transmission starts right away without an initial saving period, and the transmitter sends a code symbol if there is sufficient energy in the battery, and a zero symbol otherwise. Finally, we consider a system in which the average recharge rate is time varying in a larger time scale and derive the optimal offline power policy that maximizes the average throughput, by using majorization theory.", "This paper considers the problem of power management and throughput maximization for energy neutral operation when using an energy harvesting sensor (EHS) to send data over a wireless link. The EHS is assumed to be able to harvest energy at a constant rate, and use a fixed part of the energy harvested in a slot for measuring the channel state. The rest of the energy harvested is available for transmission, however, it can be stored in an inefficient battery if it is not fully utilized. The key constraint that the EHS needs to satisfy is energy neutrality, i.e., the expected energy drawn from the battery should equal the expected energy deposited into the battery. In this scenario, two popular models for data transmission are contrasted: the constant bit rate (CBR) model and the variable bit rate (VBR) model. In the CBR model, it is assumed that the EHS are designed to transmit data at a constant rate (using a fixed modulation and coding scheme) but are power-controlled. In the VBR model, the EHS selects both the transmit power and the data rate of transmission in each slot based on the channel instantiation. A framework under which the system designer can optimize several parameters of the EHS that determine the average data rate performance when the channel is Rayleigh fading is developed. Using this framework, the two transmission schemes are contrasted. It is shown that, with the right choice of parameter settings, the CBR scheme can perform nearly as well as the VBR scheme at significantly lower complextiy. The usefulness and validity of the framework developed is illustrated through simulations for specific examples.", "In this paper, traffic-aware sleeping control (SC) and power matching (PM) of a single base station (BS) in cellular networks are studied. The objective is to find the sleeping control and power matching configurations that achieve the Pareto optimal tradeoff between total power consumption and average delay. 
Two types of sleeping control schemes are considered: The BS goes to sleep whenever there is no active user, and wakes up when N users are assembled or after a period of multiple or single vacation time. We first discuss when to incorporate sleeping control into power matching energy efficiently. The explicit relationship between total power consumption and average delay with varying service rate is analyzed theoretically, indicating that sacrificing delay cannot always be traded for energy saving, and we also provide conditions under which the energy-optimal rate exists. Moreover, the optimal pair of sleeping parameter and service rate to achieve the optimal energy-delay tradeoff, and the energy consumption lower bound are also derived. Both the analytical and simulation results show that tolerable sacrifice of delay performance can be traded for substantial amount of energy saving given that careful designs were made according to our analysis.", "In this paper, we investigate the transmission completion time minimization problem in an additive white Gaussian noise (AWGN) broadcast channel, where the transmitter is able to harvest energy from the nature, using a rechargeable battery. The harvested energy is modeled to arrive at the transmitter during the course of transmissions. The transmitter has a fixed number of packets to be delivered to each receiver. The objective is to minimize the time by which all of the packets are delivered to their respective destinations. To this end, we optimize the transmit powers and transmission rates in a deterministic setting. We first analyze the structural properties of the optimal transmission policy in a two-user broadcast channel via the dual problem of maximizing the departure region by a fixed time T. We prove that the optimal total transmit power sequence has the same structure as the optimal single-user transmit power sequence in . In addition, the total power is split optimally based on a cut-off power level; if the total transmit power is lower than this cut-off level, all transmit power is allocated to the stronger user; otherwise, all transmit power above this level is allocated to the weaker user. We then extend our analysis to an M-user broadcast channel. We show that the optimal total power sequence has the same structure as the two-user case and optimally splitting the total power among M users involves M-1 cut-off power levels. Using this structure, we propose an algorithm that finds the globally optimal policy. Our algorithm is based on reducing the broadcast channel problem to a single-user problem as much as possible. Finally, we illustrate the optimal policy and compare its performance with several suboptimal policies under different settings." ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is still limited understanding of how GANs are able to map a latent code sampled from a random distribution to a photo-realistic image. Previous work assumes that the latent space learned by a GAN follows a distributed representation, yet observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis of the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings about a disentangled and controllable facial attribute representation.
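The abstract does not spell out the editing primitive, so the sketch below is our reading of the general recipe rather than the paper's exact code: fit a linear boundary separating latent codes by a binary attribute, then shift a code along the boundary's unit normal. The attribute labels and the generator call are placeholders; in practice, labels would come from an off-the-shelf attribute classifier run on the generated faces.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim, n_samples = 512, 5000
Z = rng.standard_normal((n_samples, dim))            # sampled latent codes
w_true = rng.standard_normal(dim)
y = (Z @ w_true > 0).astype(int)                     # placeholder attribute labels

svm = LinearSVC(C=1.0, max_iter=10_000).fit(Z, y)    # linear separating hyperplane
n = svm.coef_[0] / np.linalg.norm(svm.coef_[0])      # unit normal = semantic direction

z = rng.standard_normal(dim)                         # latent code of the face to edit
for alpha in (-3.0, 0.0, 3.0):
    z_edit = z + alpha * n                           # move across the boundary
    # image = generator(z_edit)                      # hypothetical pretrained GAN call
    print(alpha, bool(z_edit @ w_true > 0))          # attribute typically flips with alpha
```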
GANs @cite_6 have attracted wide attention in recent years. The efforts made to improve GANs span various aspects, including designing better objective functions @cite_22 @cite_12 , improving synthesis diversity @cite_14 @cite_33 @cite_8 and image resolution @cite_24 @cite_7 , as well as training stability @cite_11 @cite_26 @cite_0 . Despite this tremendous success, little work has been done on understanding what GANs have learned in the process of synthesizing the real visual world. Prior work @cite_22 @cite_35 observed the vector arithmetic property in the latent space. Bau et al. @cite_5 analyzed GANs by visualizing the spatial feature maps and studying the behavior of different units in intermediate layers. However, a detailed study of the fine-grained relationship between the input latent space and the semantic attributes of output images is still missing.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_33", "@cite_22", "@cite_8", "@cite_7", "@cite_6", "@cite_24", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2963577681", "2548275288", "2950776302", "2768098503" ], "abstract": [ "Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. 
In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "Image completion has achieved significant progress due to advances in generative adversarial networks (GANs). Albeit natural-looking, the synthesized contents still lack details, especially for scenes with complex structures or images with large holes. This is because there exists a gap between low-level reconstruction loss and high-level adversarial loss. To address this issue, we introduce a perceptual network to provide mid-level guidance, which measures the semantical similarity between the synthesized and original contents in a similarity-enhanced space. We conduct a detailed analysis on the effects of different losses and different levels of perceptual features in image completion, showing that there exist complementarity between adversarial training and perceptual features. By combining them together, our model can achieve nearly seamless fusion results in an end-to-end manner. Moreover, we design an effective lightweight generator architecture, which can achieve effective image inpainting with far less parameters. Evaluated on CelebA Face and Paris StreetView dataset, our proposed method significantly outperforms existing methods." ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is still a lack of understanding of how GANs map a latent code sampled from a random distribution to a photo-realistic image. Previous work assumes that the latent space learned by a GAN follows a distributed representation, but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We conduct a rigorous analysis of how various semantics are encoded in the latent space and of their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings about a disentangled and controllable facial attribute representation.
Besides improving GANs to synthesize images unconditionally, plenty of work has been done to control the contents and attributes of the outputs. CGAN @cite_31 was the first to add such constraints into the training procedure. Specifically, an additional label, together with the random latent code, is fed into the generator and then used as supervision to ensure that the GAN outputs an image of the desired category. In this way, the latent code and the auxiliary label are considered decomposed, such that changing one will not affect the other. This idea has been further extended with more carefully designed loss functions @cite_34 @cite_27 , the introduction of semantic attribute features @cite_19 @cite_21 @cite_3 @cite_10 , and novel architectures @cite_4 @cite_13 to improve disentanglement and synthesis quality. However, all these approaches require additional information to be involved in GAN learning. InfoGAN @cite_25 learned a disentangled latent space in an unsupervised manner by adding regularizers to the generator that maximize mutual information. Different from these learning-based methods, this work explores the disentanglement of semantics in the latent space of unconstrained GANs without retraining or redesigning the models themselves.
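As a minimal sketch of the conditioning mechanism described above, the toy PyTorch generator below concatenates an embedded class label with the random latent code, so the requested category is supplied as an explicit, decomposed input; all layer sizes here are hypothetical.

```python
import torch
import torch.nn as nn

# CGAN-style conditional generator sketch: the label is embedded and
# concatenated with the latent code, so the output category can be chosen
# independently of the code itself.
class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.label_emb(labels)], dim=1)  # [z ; label]
        return self.net(cond)

g = ConditionalGenerator()
z = torch.randn(4, 100)
labels = torch.tensor([0, 1, 2, 3])  # one requested class per sample
fake = g(z, labels)                  # shape (4, 784)
```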
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_21", "@cite_3", "@cite_19", "@cite_27", "@cite_31", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2963577681", "2798844427", "2787223504", "2607491080" ], "abstract": [ "Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation", "This paper proposes an unpaired learning method for image enhancement. Given a set of photographs with the desired characteristics, the proposed method learns a photo enhancer which transforms an input image into an enhanced image with those characteristics. The method is based on the framework of two-way generative adversarial networks (GANs) with several improvements. First, we augment the U-Net with global features and show that it is more effective. The global U-Net acts as the generator in our GAN model. Second, we improve Wasserstein GAN (WGAN) with an adaptive weighting scheme. With this scheme, training converges faster and better, and is less sensitive to parameters than WGAN-GP. Finally, we propose to use individual batch normalization layers for generators in two-way GANs. It helps generators better adapt to their own input distributions. All together, they significantly improve the stability of GAN training for our application. Both quantitative and visual results show that the proposed method is effective for enhancing images.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. 
The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Traditional generative adversarial networks (GAN) and many of its variants are trained by minimizing the KL or JS-divergence loss that measures how close the generated data distribution is from the true data distribution. A recent advance called the WGAN based on Wasserstein distance can improve on the KL and JS-divergence based GANs, and alleviate the gradient vanishing, instability, and mode collapse issues that are common in the GAN training. In this work, we aim at improving on the WGAN by first generalizing its discriminator loss to a margin-based one, which leads to a better discriminator, and in turn a better generator, and then carrying out a progressive training paradigm involving multiple GANs to contribute to the maximum margin ranking loss so that the GAN at later stages will improve upon early stages. We call this method Gang of GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce the gap between the true data distribution and the generated data distribution by at least half in an optimally trained WGAN. We have also proposed a new way of measuring GAN quality which is based on image completion tasks. We have evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10, and 50K-SSFF, and have seen both visual and quantitative improvement over baseline WGAN." ] }
1907.10786
2963577681
Despite the recent advances of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there is still a lack of understanding of how GANs map a latent code sampled from a random distribution to a photo-realistic image. Previous work assumes that the latent space learned by a GAN follows a distributed representation, but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code of well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We conduct a rigorous analysis of how various semantics are encoded in the latent space and of their properties, and then study how these semantics are correlated with each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes, such as pose, expression, age, and presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even artifacts that occur in output images can be fixed using the same approach. Extensive results suggest that learning to synthesize faces spontaneously brings about a disentangled and controllable facial attribute representation.
The latent space is treated as a Riemannian manifold by recent work @cite_23 @cite_18 @cite_36 , which focuses on how to make the output image vary more smoothly through interpolation in latent space. This idea is improved in @cite_17 by employing feature-based metrics as the path length in image space. Other work @cite_28 observed that linear paths in latent space can closely approximate geodesics on the generated manifold. There are also methods targeting the inversion from image space back to latent space @cite_32 @cite_20 @cite_15 for better image manipulation. GLO @cite_9 optimized the generator and the latent codes simultaneously to learn a better latent space. Unlike these, this paper studies the latent space by probing its hidden semantic subspaces with linear attribute classifiers. Some concurrent work also explores the semantics in the latent space of GANs for image manipulation: @cite_30 studied the steerability of GAN models by shifting the latent distribution and achieved control over camera motion and image color tone, while @cite_1 improved the memorability of the output image by varying the latent code.
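A minimal sketch of the linear-probe idea mentioned above: fit a linear classifier on latent codes against a binary attribute, and take the unit normal of its decision boundary as a semantic direction for editing. The codes and labels below are synthetic stand-ins; in practice the labels would come from an attribute predictor run on the generated images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
codes = rng.standard_normal((1000, 512))    # sampled latent codes
planted = rng.standard_normal(512)          # hidden "true" direction (synthetic)
labels = (codes @ planted > 0).astype(int)  # e.g., eyeglasses: yes/no

clf = LogisticRegression(max_iter=1000).fit(codes, labels)
normal = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # boundary normal

# Editing: moving a code across the hyperplane should flip the attribute.
z = rng.standard_normal(512)
z_edited = z + 2.0 * normal
```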
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_36", "@cite_28", "@cite_9", "@cite_1", "@cite_32", "@cite_23", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2963577681", "2804013387", "2963105487", "2817444259" ], "abstract": [ "Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation", "Given data, deep generative models, such as variational autoencoders (VAE) and generative adversarial networks (GAN), train a lower dimensional latent representation of the data space. The linear Euclidean geometry of data space pulls back to a nonlinear Riemannian geometry on the latent space. The latent space thus provides a low-dimensional nonlinear representation of data and classical linear statistical techniques are no longer applicable. In this paper we show how statistics of data in their latent space representation can be performed using techniques from the field of nonlinear manifold statistics. Nonlinear manifold statistics provide generalizations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood fits of parametric probability distributions. We develop new techniques for maximum likelihood inference in latent space, and adress the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.", "Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an “inverse model,” a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion , to project data samples, specifically images, to the latent space using a pretrained GAN. 
Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website ( https: github.com ToniCreswell InvertingGAN ).", "The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous en-coder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differen-tiable symmetry groups (Lie groups), such as the group of 3D rotations SO(3). We show how a VAE with SO(3)-valued latent variables can be constructed, by extending the reparameterization trick to compact connected Lie groups. Our exper-iments show that choosing manifold-valued latent variables that match the topology of the latent data manifold, is crucial to preserve the topological structure and learn a well-behaved latent space." ] }
1907.10801
2963861395
Automatic image aesthetics assessment is important for a wide variety of applications such as on-line photo suggestion, photo album management and image retrieval. Previous methods have focused on mapping the holistic image content to a high or low aesthetics rating. However, the composition information of an image characterizes the harmony of its visual elements according to the principles of art, and provides richer information for learning aesthetics. In this work, we propose to model the image composition information as the mutual dependency of its local regions, and design a novel architecture to leverage such information to boost the performance of aesthetics assessment. To achieve this, we densely partition an image into local regions and compute aesthetics-preserving features over the regions to characterize the aesthetics properties of image content. With the feature representation of local regions, we build a region composition graph in which each node denotes one region and any two nodes are connected by an edge weighted by the similarity of the region features. We perform reasoning on this graph via graph convolution, in which the activation of each node is determined by its highly correlated neighbors. Our method naturally uncovers the mutual dependency of local regions in the network training procedure, and achieves the state-of-the-art performance on the benchmark visual aesthetics datasets.
Modeling the relations of different visual components in visual data has proven effective in the computer vision community. Ma et al. @cite_14 proposed to model higher-order object interactions with an attention mechanism for understanding actions in videos. Wang et al. @cite_29 proposed to represent a video as a space-time graph that captures temporal dynamics and functional relations between humans and objects, and then applied graph convolution over the video graph to learn long-range dependencies among the human and object entities in the video. @cite_27 proposed a non-local operation for capturing long-range dependencies among visual elements and achieved state-of-the-art results on various computer vision tasks. In image segmentation, modeling the contextual dependency of local segments with conditional random fields (CRF) @cite_20 has become an almost indispensable step for achieving good performance. Methodologically, our method is closely related to the relation reasoning networks @cite_53 @cite_52 @cite_57 in the machine learning community, which were originally proposed to deal with structured data such as text and speech. In particular, we are motivated by @cite_51 due to its recent success in the computer vision community @cite_46 @cite_30 . We adopt the graph convolution operation as the region dependency modeling mechanism in our aesthetics model, leading to state-of-the-art results.
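A minimal sketch of the region-graph reasoning step, under assumed shapes (16 regions, 64-d features): build an adjacency matrix from pairwise feature similarity, normalize it as in Kipf and Welling's graph convolution, and update the region features by one propagation step.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((16, 64))  # 16 local regions, 64-d features each

sim = H @ H.T                      # pairwise region similarity
A = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)  # row-softmax weights

A_hat = A + np.eye(16)             # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # symmetric normalization

W = 0.1 * rng.standard_normal((64, 64))   # layer weights
H_next = np.maximum(A_norm @ H @ W, 0.0)  # H' = ReLU(A_norm · H · W)
```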
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_29", "@cite_53", "@cite_52", "@cite_57", "@cite_27", "@cite_46", "@cite_51", "@cite_20" ], "mid": [ "2953264111", "2415731916", "2895340898", "2902591409" ], "abstract": [ "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labeled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pretrained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labeled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modeling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labelled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pre-trained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labelled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modelling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM). A Pyramid Dilated Convolution (PDC) module is first designed for simultaneously extracting spatial features at multiple scales. These spatial features are then concatenated and fed into an extended Deeper Bidirectional ConvLSTM (DB-ConvLSTM) to learn spatiotemporal information. Forward and backward ConvLSTM units are placed in two layers and connected in a cascaded way, encouraging information flow between the bi-directional streams and leading to deeper feature extraction. We further augment DB-ConvLSTM with a PDC-like structure, by adopting several dilated DB-ConvLSTMs to extract multi-scale spatiotemporal information. Extensive experimental results show that our method outperforms previous video saliency models in a large margin, with a real-time speed of 20 fps on a single GPU. 
With unsupervised video object segmentation as an example application, the proposed model (with a CRF-based post-process) achieves state-of-the-art results on two popular benchmarks, well demonstrating its superior performance and high applicability.", "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet, ResNeXt, SE-Net and DPN, for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL), have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes that a formula template is given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation, and healthcare domains.
There has been considerable recent work on learning STL formulas from data for various applications such as supervised learning @cite_28 @cite_31 , clustering @cite_15 @cite_33 , and anomaly detection @cite_17 .
{ "cite_N": [ "@cite_33", "@cite_15", "@cite_28", "@cite_31", "@cite_17" ], "mid": [ "2086359741", "2031049553", "2029731618", "2101804404" ], "abstract": [ "As the complexity of cyber-physical systems increases, so does the number of ways an adversary can disrupt them. This necessitates automated anomaly detection methods to detect possible threats. In this paper, we extend our recent results in the field of inference via formal methods to develop an unsupervised learning algorithm. Our procedure constructs from data a signal temporal logic (STL) formula that describes normal system behavior. Trajectories that do not satisfy the learned formula are flagged as anomalous. STL can be used to formulate properties such as “If the train brakes within 500 m of the platform at a speed of 50 km hr, then it will stop in at least 30 s and at most 50 s.” STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. STL formulae can also be used for early detection via online monitoring and for anomaly mitigation via formal synthesis. We demonstrate the power of our method with a physical model of a train's brake system. To our knowledge, this paper is the first instance of formal methods being applied to anomaly detection.", "This paper aims to take general tensors as inputs for supervised learning. A supervised tensor learning (STL) framework is established for convex optimization based learning techniques such as support vector machines (SVM) and minimax probability machines (MPM). Within the STL framework, many conventional learning machines can be generalized to take n sup th -order tensors as inputs. We also study the applications of tensors to learning machine design and feature extraction by linear discriminant analysis (LDA). Our method for tensor based feature extraction is named the tenor rank-one discriminant analysis (TR1DA). These generalized algorithms have several advantages: 1) reduce the curse of dimension problem in machine learning and data mining; 2) avoid the failure to converge; and 3) achieve better separation between the different categories of samples. As an example, we generalize MPM to its STL version, which is named the tensor MPM (TMPM). TMPM learns a series of tensor projections iteratively. It is then evaluated against the original MPM. Our experiments on a binary classification problem show that TMPM significantly outperforms the original MPM.", "We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. 
On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed.", "Monitoring transient behaviors of real-time systems plays an important role in model-based systems design. Signal Temporal Logic (STL) emerges as a convenient and powerful formalism for continuous and hybrid systems. This paper presents an efficient algorithm for computing the robustness degree in which a piecewise-continuous signal satisfies or violates an STL formula. The algorithm, by leveraging state-of-the-art streaming algorithms from Signal Processing, is linear in the size of the signal and its implementation in the Breach tool is shown to outperform alternative implementations." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL), have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes that a formula template is given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation, and healthcare domains.
In @cite_31 , a fragment of PSTL (rPSTL, or reactive parametric signal temporal logic) is defined to capture causal relationships from data. However, some temporal properties, namely concurrent eventuality and nested always-eventually, cannot be described directly in rPSTL. In @cite_17 , the authors extend @cite_31 by using a fragment of rPSTL, inference parametric STL (iPSTL), that does not require a causal structure; there, classical ML algorithms (one-class support vector machines) are applied to the unsupervised learning problem. In @cite_28 , a decision-tree-based method is employed to learn STL formulas: it creates a map between a restricted fragment of STL and a binary decision tree in order to build an STL classifier. While this seminal work has advanced research at the intersection of formal methods and machine learning, one disadvantage of these approaches is that they lead to long formulas, which can become an issue for interpretability.
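For reference, STL's quantitative semantics for the simplest temporal operators reduces to a min or max of margins over a window. A sketch follows, with a and b taken as sample indices on a uniformly sampled trace; positive robustness means the trace satisfies the formula.

```python
import numpy as np

# Robustness of "always in [a,b]: x > c" and "eventually in [a,b]: x > c",
# with a and b given as sample indices of a uniformly sampled trace.
def rob_always(x, a, b, c):
    return np.min(x[a:b + 1] - c)   # worst-case margin over the window

def rob_eventually(x, a, b, c):
    return np.max(x[a:b + 1] - c)   # best-case margin over the window

x = np.sin(np.linspace(0.0, 6.0, 61))   # example trace, 0.1 s sampling
print(rob_always(x, 0, 15, -0.1))       # > 0: x stays above -0.1 early on
print(rob_eventually(x, 0, 60, 0.9))    # > 0: x eventually exceeds 0.9
```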
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_17" ], "mid": [ "2086092403", "1539708367", "828139470", "2086359741" ], "abstract": [ "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic) is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. Although it is an open problem whether the full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research.", "Signal temporal logic (STL) is a formalism used to rigorously specify requirements of cyberphysical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). 
We propose an efficient algorithm to compute it and demonstrate its usage on two large scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach.", "As the complexity of cyber-physical systems increases, so does the number of ways an adversary can disrupt them. This necessitates automated anomaly detection methods to detect possible threats. In this paper, we extend our recent results in the field of inference via formal methods to develop an unsupervised learning algorithm. Our procedure constructs from data a signal temporal logic (STL) formula that describes normal system behavior. Trajectories that do not satisfy the learned formula are flagged as anomalous. STL can be used to formulate properties such as “If the train brakes within 500 m of the platform at a speed of 50 km hr, then it will stop in at least 30 s and at most 50 s.” STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. STL formulae can also be used for early detection via online monitoring and for anomaly mitigation via formal synthesis. We demonstrate the power of our method with a physical model of a train's brake system. To our knowledge, this paper is the first instance of formal methods being applied to anomaly detection." ] }
1907.10265
2962741219
Cyber-physical system applications such as autonomous vehicles, wearable devices, and avionic systems generate a large volume of time-series data. Designers often look for tools to help classify and categorize the data. Traditional machine learning techniques for time-series data offer several solutions to solve these problems; however, the artifacts trained by these algorithms often lack interpretability. On the other hand, temporal logics, such as Signal Temporal Logic (STL) have been successfully used in the formal methods community as specifications of time-series behaviors. In this work, we propose a new technique to automatically learn temporal logic formulae that are able to cluster and classify real-valued time-series data. Previous work on learning STL formulas from data either assumes a formula-template to be given by the user, or assumes some special fragment of STL that enables exploring the formula structure in a systematic fashion. In our technique, we relax these assumptions, and provide a way to systematically explore the space of all STL formulas. As the space of all STL formulas is very large, and contains many semantically equivalent formulas, we suggest a technique to heuristically prune the space of formulas considered. Finally, we illustrate our technique on various case studies from the automotive, transportation and healthcare domain.
In template-based techniques, a fixed PSTL template is provided by the user, and the techniques only learn the values of the parameters associated with that template. In @cite_15 , a total ordering on the parameter space of PSTL specifications is utilized to obtain feature vectors for learning logical specifications. Unfortunately, recognizing the best total ordering is not straightforward for users. In @cite_33 , the authors eliminate this additional burden on the user by suggesting a method that maps timed traces to a surface in the parameter space of the formula and then employs these curves as features. In @cite_29 , the input to the algorithm is a requirement template expressed in PSTL, and the traces are actively generated from a model of the system. Our proposed technique, which uses systematic enumeration, can produce smaller formulas that may be more human-interpretable, and with higher accuracy ($ 92
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_33" ], "mid": [ "2086092403", "2131479143", "1539708367", "2887556118" ], "abstract": [ "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.", "In this paper we advocate the use of multi-dimensional modal logics as a framework for knowledge representation and, in particular, for representing spatio-temporal information. We construct a two-dimensional logic capable of describing topological relationships that change over time. This logic, called PSTL (Propositional Spatio-Temporal Logic) is the Cartesian product of the well-known temporal logic PTL and the modal logic S4u, which is the Lewis system S4 augmented with the universal modality. 
Although it is an open problem whether the full PSTL is decidable, we show that it contains decidable fragments into which various temporal extensions (both point-based and interval based) of the spatial logic RCC-8 can be embedded. We consider known decidability and complexity results that are relevant to computation with multi-dimensional formalisms and discuss possible directions for further research.", "Abstract Visual tracking algorithms based on structured output support vector machine (SOSVM) have demonstrated excellent performance. However, sampling methods and optimization strategies of SOSVM undesirably increase the computational overloads, which hinder real-time application of these algorithms. Moreover, due to the lack of high-dimensional features and dense training samples, SOSVM-based algorithms are unstable to deal with various challenging scenarios, such as occlusions and scale variations. Recently, visual tracking algorithms based on discriminative correlation filters (DCF), especially the combination of DCF and features from deep convolutional neural networks (CNN), have been successfully applied to visual tracking, and attains surprisingly good performance on recent benchmarks. The success is mainly attributed to two aspects: the circular correlation properties of DCF and the powerful representation capabilities of CNN features. Nevertheless, compared with SOSVM, DCF-based algorithms are restricted to simple ridge regression which has a weaker discriminative ability. In this paper, a novel circular and structural operator tracker (CSOT) is proposed for high performance visual tracking, it not only possesses the powerful discriminative capability of SOSVM but also efficiently inherits the superior computational efficiency of DCF. Based on the proposed circular and structural operators, a set of primal confidence score maps can be obtained by circular correlating feature maps with their corresponding structural correlation filters. Furthermore, an implicit interpolation is applied to convert the multi-resolution feature maps to the continuous domain and make all primal confidence score maps have the same spatial resolution. Then, we exploit an efficient ensemble post-processor based on relative entropy, which can coalesce primal confidence score maps and create an optimal confidence score map for more accurate localization. The target is localized on the peak of the optimal confidence score map. Besides, we introduce a collaborative optimization strategy to update circular and structural operators by iteratively training structural correlation filters, which significantly reduces computational complexity and improves robustness. Experimental results demonstrate that our approach achieves state-of-the-art performance in mean AUC scores of 71.5 and 69.4 on the OTB2013 and OTB2015 benchmarks respectively, and obtains a third-best expected average overlap (EAO) score of 29.8 on the VOT2017 benchmark." ] }
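To illustrate the template-based setting discussed in the related-work paragraph above: when satisfaction is monotone in a template parameter, the tightest value can be mined by binary search. The sketch below does this for a hypothetical PSTL template "eventually within the first tau samples, x > c" over a set of traces.

```python
import numpy as np

def satisfies(trace, tau, c):
    # "Eventually within the first tau samples, x > c"
    return np.max(trace[:tau + 1]) > c

def tightest_tau(traces, c, max_tau):
    # Satisfaction is monotone in tau (a larger window can only help),
    # so binary-search for the smallest tau valid on every trace.
    lo, hi = 0, max_tau
    while lo < hi:
        mid = (lo + hi) // 2
        if all(satisfies(t, mid, c) for t in traces):
            hi = mid        # mid works; try a smaller window
        else:
            lo = mid + 1    # mid is too small
    return lo

traces = [np.sin(np.linspace(0.0, 6.0, 61) + p) for p in (0.0, 0.5, 1.0)]
print(tightest_tau(traces, c=0.9, max_tau=60))  # smallest valid horizon
```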
1907.10218
2963829227
State-of-the-art federated learning brings a new direction for data privacy protection in mobile crowdsensing machine learning applications. However, besides being vulnerable to GAN-based user data reconstruction attacks, the existing gradient-descent-based federated learning schemes lack consideration of how to preserve model privacy. In this paper, we propose a secret-sharing-based federated extreme boosting learning framework (FedXGB) to achieve privacy-preserving model training for mobile crowdsensing. First, a series of protocols is designed to implement privacy-preserving extreme gradient boosting over classification and regression trees. The protocols preserve the user data privacy feature of federated learning: XGBoost is trained without revealing plaintext user data. Then, in consideration of the high commercial value of a well-trained model, a secure prediction protocol is developed to protect model privacy for the crowdsensing sponsor. Additionally, we conduct a comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness, and efficiency of FedXGB. The results show that FedXGB is secure in the honest-but-curious model, and attains accuracy and convergence rate comparable to the original model with low runtime.
Most existing privacy-preserving works for machine learning are data driven and based on traditional cryptographic algorithms. For example, Q. Wang @cite_23 proposed a privacy-preserving model learning scheme for canonical correlation analysis in cross-media retrieval systems based on garbled circuits. Z. Ma @cite_9 proposed a lightweight ensemble classification learning framework for universal face recognition systems by exploiting additive secret sharing. Considering the wide application of the gradient boosting decision tree (GBDT) in data mining, L. Zhao @cite_11 utilized differential privacy to implement two novel privacy-preserving schemes for classification and regression tasks, respectively. Towards protecting patients' medical data privacy in e-Health systems, X. Liu @cite_12 advocated a homomorphic-encryption-based scheme to implement privacy-preserving reinforcement learning for patient-centric dynamic treatment regimes. Because they are data security driven, these four types of privacy-preserving schemes still have to upload encrypted user data to a central server, causing massive extra communication overhead.
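As a minimal sketch of the additive secret sharing primitive mentioned above (and underlying schemes like FedXGB): each user splits a private value into random shares over a prime field; any strict subset of shares reveals nothing, while servers can add shares locally so that only the aggregate is ever reconstructed. The modulus and share counts below are illustrative.

```python
import secrets

P = 2**61 - 1  # prime modulus of the share field (illustrative choice)

def share(value, n):
    # n-1 uniformly random shares plus one correction share.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two users share their private statistics among three servers; each server
# adds the shares it holds, so only the aggregate (100) is ever revealed.
u1, u2 = share(42, 3), share(58, 3)
agg = [(a + b) % P for a, b in zip(u1, u2)]
print(reconstruct(agg))  # 100, without exposing 42 or 58
```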
{ "cite_N": [ "@cite_9", "@cite_12", "@cite_23", "@cite_11" ], "mid": [ "2762867797", "2783547004", "2950925835", "2119874464" ], "abstract": [ "A massive explosion of various types of data has been triggered in the “Big Data” era. In big data systems, machine learning plays an important role due to its effectiveness in discovering hidden information and valuable knowledge. Data privacy, however, becomes an unavoidable concern since big data usually involve multiple organizations, e.g., different healthcare systems and hospitals, who are not in the same trust domain and may be reluctant to share their data publicly. Applying traditional cryptographic tools is a straightforward approach to protect sensitive information, but it often renders learning algorithms useless inevitably. In this work, we, for the first time, propose a novel privacy-preserving scheme for canonical correlation analysis (CCA), which is a well-known learning technique and has been widely used in cross-media retrieval system. We first develop a library of building blocks to support various arithmetics over encrypted real numbers by leveraging additively homomorphic encryption and garbled circuits. Then we encrypt private data by randomly splitting the numerical data, formalize CCA problem and reduce it to a symmetric eigenvalue problem by designing new protocols for privacy-preserving QR decomposition. Finally, we solve all the eigenvalues and the corresponding eigenvectors by running Newton-Raphson method and inverse power method over the ciphertext domain. We carefully analyze the security and extensively evaluate the effectiveness of our design. The results show that our scheme is practically secure, incurs negligible errors compared with performing CCA in the clear and performs comparably in cross-media retrieval systems.", "Various paradigms, based on differential privacy, have been proposed to release a privacy-preserving dataset with statistical approximation. Nonetheless, most existing schemes are limited when facing highly correlated attributes, and cannot prevent privacy threats from untrusted servers. In this paper, we propose a novel Copula- based scheme to efficiently synthesize and release multi-dimensional crowdsourced data with local differential privacy. In our scheme, each participant's (or user's) data is locally transformed into bit strings based on a randomized response technique, which guarantees a participant's privacy on the participant (user) side. Then, Copula theory is leveraged to synthesize multi-dimensional crowdsourced data based on univariate marginal distribution and attribute dependence. Univariate marginal distribution is estimated by the Lasso-based regression algorithm from the aggregated privacy- preserving bit strings. Dependencies among attributes are modeled as multivariate Gaussian Copula, of which parameter is estimated by Pearson correlation coefficients. We conduct experiments to validate the effectiveness of our scheme. Our experimental results demonstrate that our scheme is effective for the release of multi-dimensional data with local differential privacy guaranteed to distributed participants.", "We propose a privacy-preserving framework for learning visual classifiers by leveraging distributed private image data. This framework is designed to aggregate multiple classifiers updated locally using private data and to ensure that no private information about the data is exposed during and after its learning procedure. 
We utilize a homomorphic cryptosystem that can aggregate the local classifiers while they are encrypted and thus kept secret. To overcome the high computational cost of homomorphic encryption of high-dimensional classifiers, we (1) impose sparsity constraints on local classifier updates and (2) propose a novel efficient encryption scheme named doubly-permuted homomorphic encryption (DPHE) which is tailored to sparse high-dimensional data. DPHE (i) decomposes sparse data into its constituent non-zero values and their corresponding support indices, (ii) applies homomorphic encryption only to the non-zero values, and (iii) employs double permutations on the support indices to make them secret. Our experimental evaluation on several public datasets shows that the proposed approach achieves comparable performance against state-of-the-art visual recognition methods while preserving privacy and significantly outperforms other privacy-preserving methods.", "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the e-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance." ] }
1907.10218
2963829227
The state-of-the-art federated learning brings a new direction for the data privacy protection of mobile crowdsensing machine learning applications. However, besides being vulnerable to GAN-based user data reconstruction attacks, the existing gradient-descent-based federated learning schemes lack consideration of how to preserve model privacy. In this paper, we propose a secret-sharing-based federated extreme boosting learning framework (FedXGB) to achieve privacy-preserving model training for mobile crowdsensing. First, a series of protocols is designed to implement privacy-preserving extreme gradient boosting of classification and regression trees. The protocols preserve the user data privacy feature of federated learning: XGBoost is trained without revealing plaintext user data. Then, in consideration of the high commercial value of a well-trained model, a secure prediction protocol is developed to protect the model privacy for the crowdsensing sponsor. Additionally, we conduct comprehensive theoretical analysis and extensive experiments to evaluate the security, effectiveness and efficiency of FedXGB. The results show that FedXGB is secure in the honest-but-curious model, and attains accuracy and convergence rate close to those of the original model with low runtime.
Therefore, the federated learning concept was proposed @cite_22 . However, up to now, only a few works have adapted the architecture into practical schemes for applications @cite_13 , and most existing federated learning schemes still concentrate on SGD-based models. For example, considering the limited bandwidth, scarce storage and pressing privacy problems in the modern Internet of Things (IoT) environment, S. Wang @cite_16 provided an SGD-based federated machine learning architecture built on edge nodes. For privacy-preserving machine learning model training in smart vehicles, S. Sumudu @cite_0 proposed a novel federated-learning-based joint transmit power and resource allocation approach. To prevent an adversary from inferring hidden information about private user data from the uploaded gradient values, cryptographic methods were then added to the original federated learning scheme to protect gradients. B. Keith @cite_15 designed a universal and practical model aggregation scheme for mobile devices based on secret sharing. In @cite_24 , N. Richard utilized homomorphic encryption to protect the uploaded gradients and designed an entity resolution and federated learning framework.
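For reference, the gradient-based federated learning pattern discussed above reduces to a simple loop: clients train on local data and only model parameters are sent for aggregation. Below is a minimal NumPy sketch of federated averaging under simplifying assumptions (linear least-squares clients, uniform client weighting, no secure aggregation); all names and hyperparameters are illustrative.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local gradient descent on squared loss; (X, y) never leave the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, dim, rounds=20):
    """Server broadcasts the model, then averages the locally trained weights."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_update(w_global, X, y) for X, y in clients]
        w_global = np.mean(local_ws, axis=0)
    return w_global

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three simulated clients with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
print(federated_averaging(clients, dim=2))  # approaches [2, -1]
```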
{ "cite_N": [ "@cite_22", "@cite_0", "@cite_24", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2807006176", "2920095265", "2810065831", "2900120080" ], "abstract": [ "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to 55 for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover's distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by 30 for the CIFAR-10 dataset with only 5 globally shared data.", "There is an increasing interest in a new machine learning technique called Federated Learning, in which the model training is distributed over mobile user equipments (UEs), and each UE contributes to the learning model by independently computing the gradient based on its local training data. Federated Learning has several benefits of data privacy and potentially a large amount of UE participants with modern powerful processors and low-delay mobile-edge networks. While most of the existing work focused on designing learning algorithms with provable convergence time, other issues such as uncertainty of wireless channels and UEs with heterogeneous power constraints and local data size, are under-explored. These issues especially affect to various trade-offs: (i) between computation and communication latencies determined by learning accuracy level, and thus (ii) between the Federated Learning time and UE energy consumption. We fill this gap by formulating a Federated Learning over wireless network as an optimization problem FEDL that captures both trade-offs. Even though FEDL is non-convex, we exploit the problem structure to decompose and transform it to three convex sub-problems. We also obtain the globally optimal solution by charactering the closed-form solutions to all sub-problems, which give qualitative insights to problem design via the obtained optimal FEDL learning time, accuracy level, and UE energy cost. Our theoretical analysis is also illustrated by extensive numerical results.", "Federated learning enables multiple participants to jointly construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a predictive keyboard model without revealing what individual users type into their phones. We demonstrate that any participant in federated learning can introduce hidden backdoor functionality into the joint global model, e.g., to ensure that an image classifier assigns an attacker-chosen label to images with certain features, or that a next-word predictor completes certain sentences with an attacker-chosen word. 
We design and evaluate a new \"constrain-and-scale\" model-poisoning methodology and show that it greatly outperforms data poisoning. An attacker selected just once, in a single round of federated learning, can cause the global model to reach 100 accuracy on the backdoor task. We evaluate the attack under different assumptions and attack scenarios for standard federated learning tasks. We also show how to evade anomaly detection-based defenses by incorporating the evasion into the loss function when training the attack model.", "We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such a representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
Classical style transfer methods stylize an image in a global fashion with spatially invariant transfer functions @cite_41 @cite_7 @cite_3 @cite_2 @cite_43 @cite_23 . These methods can handle global color shifts, but they are limited in matching sophisticated styles with drastic color changes @cite_36 @cite_17 , as shown in Fig. .
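As an illustration of such a global, spatially invariant transfer function, a Reinhard-style statistics-matching transfer shifts each channel of the content image to the style image's per-channel mean and standard deviation. The sketch below works in RGB for simplicity and is not any single cited method verbatim; the classical works typically operate in a decorrelated color space.

```python
import numpy as np

def global_color_transfer(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Match per-channel mean/std of `content` to `style`.

    Both inputs: float arrays of shape (H, W, 3) with values in [0, 1].
    """
    out = np.empty_like(content)
    for c in range(3):
        mu_c, sd_c = content[..., c].mean(), content[..., c].std() + 1e-8
        mu_s, sd_s = style[..., c].mean(), style[..., c].std()
        out[..., c] = (content[..., c] - mu_c) / sd_c * sd_s + mu_s
    return np.clip(out, 0.0, 1.0)
```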
{ "cite_N": [ "@cite_7", "@cite_41", "@cite_36", "@cite_3", "@cite_43", "@cite_23", "@cite_2", "@cite_17" ], "mid": [ "2962772087", "2788095258", "2951924128", "2611605760" ], "abstract": [ "Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring.", "Photorealistic image style transfer algorithms aim at stylizing a content photo using the style of a reference photo with the constraint that the stylized photo should remains photorealistic. While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. In this paper, we present a novel algorithm to address the limitations. The proposed algorithm consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step encourages spatially consistent stylizations. Unlike existing algorithms that require iterative optimization, both steps in our algorithm have closed-form solutions. Experimental results show that the stylized photos generated by our algorithm are twice more preferred by human subjects in average. Moreover, our method runs 60 times faster than the state-of-the-art approach. Code and additional results are available at this https URL", "We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" with features extracted from a Deep Convolutional Neutral Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse.", "We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. 
By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" [ 2001] with features extracted from a Deep Convolutional Neutral Network for matching; we call our technique deep image analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such a representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
The quality of image stylization can be improved by densely matching the low-level or high-level features between the content and style images @cite_1 @cite_29 @cite_33 @cite_34 . Gatys et al. @cite_34 demonstrated impressive art style transfer results with a pretrained CNN, matching the correlations of deep CNN features via the Gram matrix. Since then, numerous approaches have been developed to further improve the stylization performance as well as efficiency @cite_31 @cite_0 @cite_37 @cite_40 @cite_5 @cite_12 . For example, feed-forward approaches @cite_22 @cite_10 improved the stylization speed by training a decoder network with different loss functions. In order to transfer arbitrary styles to content images, Li et al. @cite_35 adopted the classical signal whitening and coloring transforms (WCTs) on features extracted from a CNN. These methods can generate promising images with different art styles. However, the spatial structures of the content image are not preserved well even when the given style image is a real photo, as shown in Fig. .
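The Gram-matrix style representation at the heart of this line of work is compact enough to state in code: for a (C, H, W) feature map taken from a pretrained CNN (feature extraction is assumed and not shown here), the Gram matrix collects channel-wise inner products, and a per-layer style loss compares Gram matrices of the generated and style images. A minimal NumPy sketch with an illustrative normalization:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (C, H, W) feature map: C x C channel correlations."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def style_loss(feat_generated: np.ndarray, feat_style: np.ndarray) -> float:
    """Squared Frobenius distance between the two Gram matrices (one layer)."""
    diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
    return float((diff ** 2).sum())
```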
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_33", "@cite_22", "@cite_29", "@cite_1", "@cite_0", "@cite_40", "@cite_5", "@cite_31", "@cite_34", "@cite_10", "@cite_12" ], "mid": [ "2526782364", "2572559801", "2572730214", "2952767162" ], "abstract": [ "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.", "The recent work of , who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. 
Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.", "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced on-line iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by conducting much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales." ] }
1907.10274
2964328619
Photorealistic style transfer aims to transfer the style of a reference photo onto a content photo naturally, such that the stylized image looks like a real photo taken by a camera. Existing state-of-the-art methods are prone to spatial structure distortion of the content image and global color inconsistency across different semantic objects, making the results less photorealistic. In this paper, we propose a one-shot mutual Dirichlet network, to address these challenging issues. The essential contribution of the work is the realization of a representation scheme that successfully decouples the spatial structure and color information of images, such that the spatial structure can be well preserved during stylization. This representation is discriminative and context-sensitive with respect to semantic objects. It is extracted with a shared sparse Dirichlet encoder. Moreover, such representation is encouraged to be matched between the content and style images for faithful color transfer. The affine-transfer model is embedded in the decoder of the network to facilitate the color transfer. The strong representative and discriminative power of the proposed network enables one-shot learning given only one content-style image pair. Experimental results demonstrate that the proposed method is able to generate photorealistic photos without spatial distortion or abrupt color changes.
Recently, there have been a few methods specifically designed for photorealistic image stylization @cite_18 @cite_25 . Luan et al. @cite_36 preserved the structure of the content image by adopting a color-affine-transfer constraint, with color transfer performed per semantic region. However, the generated results easily suffer abrupt color changes with noticeable artifacts, especially between adjacent segments. Mechrez et al. @cite_25 proposed to maintain the fidelity of the stylized image with a post-processing step based on the screened Poisson equation (SPE). Li et al. @cite_17 improved the spatial consistency of the output image by adopting the manifold ranking algorithm as a post-processing step. He et al. @cite_18 optimized the dense semantic correspondence in the deep feature domain, resulting in smooth local color transfer in the image domain. Although these methods preserve the spatial structure well, the light and color changes across different parts and materials are not smooth. See Fig. for a comparison. Aside from image quality, these methods need to train networks with a large number of parameters on large datasets.
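The color-affine-transfer constraint shared by several of these methods can be illustrated with a least-squares toy version: within one region (e.g., a semantic segment), content colors are mapped to style colors by an affine transform c' = A c + b, which by construction cannot create new spatial structure. This hypothetical sketch assumes color samples for a matching region pair are already available; the cited works enforce the constraint differently (e.g., through a Matting-Laplacian energy term).

```python
import numpy as np

def fit_affine_color_map(content_px: np.ndarray, style_px: np.ndarray):
    """Least-squares affine map from content colors to style colors.

    content_px, style_px: (N, 3) color samples from corresponding regions.
    """
    X = np.hstack([content_px, np.ones((len(content_px), 1))])  # (N, 4)
    M, *_ = np.linalg.lstsq(X, style_px, rcond=None)            # (4, 3)
    return M[:3].T, M[3]  # A (3x3), b (3,)

def apply_affine_color_map(image: np.ndarray, A: np.ndarray, b: np.ndarray):
    """Apply c' = A c + b to every pixel of an (H, W, 3) image in [0, 1]."""
    return np.clip(image @ A.T + b, 0.0, 1.0)
```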
{ "cite_N": [ "@cite_36", "@cite_18", "@cite_25", "@cite_17" ], "mid": [ "2604721644", "2951413914", "2788095258", "2611605760" ], "abstract": [ "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "Photorealistic image style transfer algorithms aim at stylizing a content photo using the style of a reference photo with the constraint that the stylized photo should remains photorealistic. While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. In this paper, we present a novel algorithm to address the limitations. The proposed algorithm consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step encourages spatially consistent stylizations. Unlike existing algorithms that require iterative optimization, both steps in our algorithm have closed-form solutions. Experimental results show that the stylized photos generated by our algorithm are twice more preferred by human subjects in average. Moreover, our method runs 60 times faster than the state-of-the-art approach. Code and additional results are available at this https URL", "We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. 
For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" [ 2001] with features extracted from a Deep Convolutional Neutral Network for matching; we call our technique deep image analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style texture transfer, color style swap, sketch painting to photo, and time lapse." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users and items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
In the literature of CDR, early works @cite_31 @cite_3 @cite_14 @cite_6 @cite_5 mainly adopt matrix factorization models. In particular, @cite_3 constructs a cluster-level rating matrix (codebook) from user-item rating patterns, through which it establishes links to transfer knowledge across domains. A similar approach with an extension to soft membership was proposed in @cite_14 . Collective matrix factorization (CMF) @cite_31 was proposed for the case where entities participate in more than one relation. However, as many studies have pointed out, MF models may not capture the non-linearity and complex relationships present in the system @cite_42 @cite_40 @cite_35 .
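For concreteness, the matrix-factorization backbone these CDCF models build on predicts a rating as an inner product of latent factors, r_ui ≈ p_u^T q_i, fit by SGD on the observed entries only. A minimal single-domain sketch with illustrative hyperparameters:

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=30):
    """Vanilla MF: minimize sum (r - p_u . q_i)^2 + reg * (|p_u|^2 + |q_i|^2).

    `ratings` is a list of (user, item, rating) triples for observed entries.
    """
    rng = np.random.default_rng(0)
    P = 0.1 * rng.normal(size=(n_users, k))
    Q = 0.1 * rng.normal(size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

P, Q = train_mf([(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)], n_users=2, n_items=2)
print(P[1] @ Q[1])  # predicted rating for the unobserved (user 1, item 1) pair
```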
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_42", "@cite_6", "@cite_3", "@cite_40", "@cite_5", "@cite_31" ], "mid": [ "1994576156", "2129679514", "1956916606", "2950921101" ], "abstract": [ "A major challenge for collaborative filtering (CF) techniques in recommender systems is the data sparsity that is caused by missing and noisy ratings. This problem is even more serious for CF domains where the ratings are expressed numerically, e.g. as 5-star grades. We assume the 5-star ratings are unordered bins instead of ordinal relative preferences. We observe that, while we may lack the information in numerical ratings, we sometimes have additional auxiliary data in the form of binary ratings. This is especially true given that users can easily express themselves with their preferences expressed as likes or dislikes for items. In this paper, we explore how to use these binary auxiliary preference data to help reduce the impact of data sparsity for CF domains expressed in numerical ratings. We solve this problem by transferring the rating knowledge from some auxiliary data source in binary form (that is, likes or dislikes), to a target numerical rating matrix. In particular, our solution is to model both the numerical ratings and ratings expressed as like or dislike in a principled way. We present a novel framework of Transfer by Collective Factorization (TCF), in which we construct a shared latent space collectively and learn the data-dependent effect separately. A major advantage of the TCF approach over the previous bilinear method of collective matrix factorization is that we are able to capture the data-dependent effect when sharing the data-independent knowledge. This allows us to increase the overall quality of knowledge transfer. We present extensive experimental results to demonstrate the effectiveness of TCF at various sparsity levels, and show improvements of our approach as compared to several state-of-the-art methods.", "Recommender systems always aim to provide recommendations for a user based on historical ratings collected from a single domain (e.g., movies or books) only, which may suffer from the data sparsity problem. Recently, several recommendation models have been proposed to transfer knowledge by pooling together the rating data from multiple domains to alleviate the sparsity problem, which typically assume that multiple domains share a latent common rating pattern based on the user-item co-clustering. In practice, however, the related domains do not necessarily share such a common rating pattern, and diversity among the related domains might outweigh the advantages of such common pattern, which may result in performance degradations. In this paper, we propose a novel cluster-level based latent factor model to enhance the cross-domain recommendation, which can not only learn the common rating pattern shared across domains with the flexibility in controlling the optimal level of sharing, but also learn the domain-specific rating patterns of users in each domain that involve the discriminative information propitious to performance improvement. To this end, the proposed model is formulated as an optimization problem based on joint nonnegative matrix tri-factorization and an efficient alternating minimization algorithm is developed with convergence guarantee. 
Extensive experiments on several real world datasets suggest that our proposed model outperforms the state-of-the-art methods for the cross-domain recommendation task.", "The Cross Domain Collaborative Filtering (CDCF) exploits the rating matrices from multiple domains to make better recommendations. Existing CDCF methods adopt the substructure sharing technique that can only transfer linearly correlated knowledge between domains. In this paper, we propose the notion of Hyper-Structure Transfer (HST) that requires the rating matrices to be explained by the projections of some more complex structure, called the hyper-structure, shared by all domains, and thus allows the nonlinearly correlated knowledge between domains to be identified and transferred. Extensive experiments are conducted and the results demonstrate the effectiveness of our HST models empirically.", "CMF is a technique for simultaneously learning low-rank representations based on a collection of matrices with shared entities. A typical example is the joint modeling of user-item, item-property, and user-feature matrices in a recommender system. The key idea in CMF is that the embeddings are shared across the matrices, which enables transferring information between them. The existing solutions, however, break down when the individual matrices have low-rank structure not shared with others. In this work we present a novel CMF solution that allows each of the matrices to have a separate low-rank structure that is independent of the other matrices, as well as structures that are shared only by a subset of them. We compare MAP and variational Bayesian solutions based on alternating optimization algorithms and show that the model automatically infers the nature of each factor using group-wise sparsity. Our approach supports in a principled way continuous, binary and count observations and is efficient for sparse matrices involving missing data. We illustrate the solution on a number of examples, focusing in particular on an interesting use-case of augmented multi-view learning." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users and items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
On the other hand, there has recently been a surge of methods exploring deep learning for recommender systems @cite_42 . Most models in this category focus on utilizing neural networks to extract embeddings from side information such as reviews @cite_36 , descriptions @cite_10 , content information @cite_27 , images @cite_24 and knowledge graphs @cite_32 . Nevertheless, many of these models trace back to matrix factorization: in the absence of side information, they distill to either MF @cite_25 or PMF @cite_29 .
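The "distill to MF or PMF" observation can be made precise with PMF's MAP objective: Gaussian priors on the latent factors and Gaussian observation noise yield the regularized squared-error objective below, written in standard notation (I_ui indicates observed ratings; U_u and V_i are user and item factors).

```latex
\min_{U,V}\;\; \frac{1}{2}\sum_{u,i} I_{ui}\,\bigl(r_{ui} - U_u^{\top} V_i\bigr)^{2}
\;+\; \frac{\lambda_U}{2}\sum_{u}\lVert U_u\rVert^{2}
\;+\; \frac{\lambda_V}{2}\sum_{i}\lVert V_i\rVert^{2}
```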
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_42", "@cite_32", "@cite_24", "@cite_27", "@cite_10", "@cite_25" ], "mid": [ "2883308936", "2526782364", "2906869444", "2610935556" ], "abstract": [ "Recommender systems have been studied extensively due to their practical use in real-world scenarios. Despite this, generating effective recommendations with sparse user ratings remains a challenge. Side information has been widely utilized to address rating sparsity Existing recommendation models that use side information are linear and, hence, have restricted expressiveness. Deep learning has been used to capture non-linearities by learning deep item representations from side information but as side information is high-dimensional, existing deep models tend to have large input dimensionality, which dominates their overall size. This makes them difficult to train, especially with insufficient inputs. Rather than learning item representations, in this paper, we propose to learn feature representations through deep learning from side information. Learning feature representations ensures a sufficient number of inputs to train a deep network. To achieve this, we propose to simultaneously recover user ratings and side information, by using a Variational Autoencoder (VAE). Specifically, user ratings and side information are encoded and decoded collectively through the same inference network and generation network. This is possible as both user ratings and side information are associated with items. To account for the heterogeneity of user ratings and side information, the final layer of the generation network follows different distributions. The proposed model is easy to implement and efficient to optimize and is shown to outperform state-of-the-art top-N recommendation methods that use side information.", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Deep learning is gaining importance in many applications. However, Neural Networks face several security and privacy threats. This is particularly significant in the scenario where Cloud infrastructures deploy a service with Neural Network model at the back end. Here, an adversary can extract the Neural Network parameters, infer the regularization hyperparameter, identify if a data point was part of the training data, and generate effective transferable adversarial examples to evade classifiers. 
This paper shows how a Neural Network model is susceptible to timing side channel attack. In this paper, a black box Neural Network extraction attack is proposed by exploiting the timing side channels to infer the depth of the network. Although, constructing an equivalent architecture is a complex search problem, it is shown how Reinforcement Learning with knowledge distillation can effectively reduce the search space to infer a target model. The proposed approach has been tested with VGG architectures on CIFAR10 data set. It is observed that it is possible to reconstruct substitute models with test accuracy close to the target models and the proposed approach is scalable and independent of type of Neural Network architectures.", "Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision and NLP tasks, such improvements have not yet been observed in ranking for information retrieval. The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available. Hence, in this paper, we propose to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources (e.g., click data). To this aim, we use the output of an unsupervised ranking model, such as BM25, as a weak supervision signal. We further train a set of simple yet effective ranking models based on feed-forward neural networks. We study their effectiveness under various learning scenarios (point-wise and pair-wise models) and using different input representations (i.e., from encoding query-document pairs into dense sparse vectors to using word embedding representation). We train our networks using tens of millions of training instances and evaluate it on two standard collections: a homogeneous news collection (Robust) and a heterogeneous large-scale web collection (ClueWeb). Our experiments indicate that employing proper objective functions and letting the networks to learn the input representation based on weakly supervised data leads to impressive performance, with over 13 and 35 MAP improvements over the BM25 model on the Robust and the ClueWeb collections. Our findings also suggest that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users and items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
More recently, to combine the advantages of both matrix factorization models and deep networks such as the multi-layer perceptron (MLP), some models have been proposed @cite_20 @cite_35 @cite_33 for learning representations from ratings alone. These models combine wide and deep networks to provide better representations. Autoencoders, stacked denoising autoencoders @cite_19 @cite_21 @cite_38 @cite_7 , restricted Boltzmann machines @cite_8 and recurrent neural networks have also been exploited for recommendation systems. However, the above neural network models use only the interactions between users and items from a single domain; hence, they suffer from the aforementioned sparsity and cold-start issues.
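A NeuMF-style fusion of the two branches can be sketched directly: a "wide" GMF branch (element-wise product of embeddings) is concatenated with a "deep" MLP branch before a final prediction layer. The PyTorch sketch below is a hypothetical illustration of this wide-and-deep pattern, not the architecture of any single cited model; layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class WideDeepCF(nn.Module):
    """Wide (GMF) branch + deep (MLP) branch, fused by a linear output layer."""
    def __init__(self, n_users, n_items, k=16):
        super().__init__()
        self.user_gmf = nn.Embedding(n_users, k)   # separate embeddings
        self.item_gmf = nn.Embedding(n_items, k)   # per branch, as in NeuMF
        self.user_mlp = nn.Embedding(n_users, k)
        self.item_mlp = nn.Embedding(n_items, k)
        self.mlp = nn.Sequential(
            nn.Linear(2 * k, k), nn.ReLU(),
            nn.Linear(k, k // 2), nn.ReLU(),
        )
        self.out = nn.Linear(k + k // 2, 1)

    def forward(self, users, items):
        wide = self.user_gmf(users) * self.item_gmf(items)          # GMF part
        deep = self.mlp(torch.cat([self.user_mlp(users),
                                   self.item_mlp(items)], dim=-1))  # MLP part
        return self.out(torch.cat([wide, deep], dim=-1)).squeeze(-1)

model = WideDeepCF(n_users=100, n_items=200)
scores = model(torch.tensor([0, 1]), torch.tensor([5, 7]))  # two predictions
```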
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_33", "@cite_7", "@cite_8", "@cite_21", "@cite_19", "@cite_20" ], "mid": [ "2145094598", "1498932870", "1645800954", "2575006718" ], "abstract": [ "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer backpropagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.", "Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). 
However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition.", "A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are either based on matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users and items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF, to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
Though the use of multiple related domains and of neural networks for recommendation has been studied and justified in many works @cite_42 , very few attempts have been made to use neural networks in the cross-domain recommendation setting @cite_23 @cite_0 @cite_13 @cite_43 . In particular, MV-DNN @cite_23 uses an MLP to learn shared representations of the entities participating in multiple domains. A factorization-based multi-view neural network was proposed in CCCFNet @cite_0 , where the representations learned from multiple domains are coupled with the representations learned from content information. A two-stage approach was followed in @cite_13 @cite_43 : in the first stage, embeddings are learned for users, and in the second stage, a function is learned to map user embeddings from the source domain to the target domain.
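The two-stage mapping idea can be illustrated with a linear toy version: stage one trains embeddings independently per domain; stage two fits a mapping on users observed in both domains and applies it to users known only in the source domain. The least-squares sketch below is an assumption-laden simplification (the cited works also consider non-linear MLP mappings).

```python
import numpy as np

def fit_cross_domain_map(src_emb: np.ndarray, tgt_emb: np.ndarray) -> np.ndarray:
    """Stage two: fit W minimizing ||src_emb @ W - tgt_emb||_F^2.

    Rows are stage-one embeddings of users appearing in *both* domains.
    """
    W, *_ = np.linalg.lstsq(src_emb, tgt_emb, rcond=None)
    return W

rng = np.random.default_rng(0)
overlap_src = rng.normal(size=(500, 16))                # stage-one source embeddings
overlap_tgt = overlap_src @ rng.normal(size=(16, 16))   # toy target embeddings
W = fit_cross_domain_map(overlap_src, overlap_tgt)
cold_user = rng.normal(size=16)             # user seen only in the source domain
predicted_tgt_embedding = cold_user @ W     # usable for target-domain scoring
```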
{ "cite_N": [ "@cite_42", "@cite_0", "@cite_43", "@cite_23", "@cite_13" ], "mid": [ "2792156255", "2258484932", "2526782364", "2114079787" ], "abstract": [ "Abstract Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age, gender); whereas, unlabelled data are available for other attributes (such as attraction, hairline). To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. 
By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interesting finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also highly complementary to GoogLeNet and/or VGG-11 (trained on Place205).", "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network, which leads to more robust performance. Second, we introduce the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie/TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users).
In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with traditional generative topic models for modeling cross-domain recommender systems. Scalability analysis shows that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain." ] }
1907.08440
2963361436
Cross-Domain Collaborative Filtering (CDCF) provides a way to alleviate data sparsity and cold-start problems present in recommendation systems by exploiting the knowledge from related domains. Existing CDCF models are based on either matrix factorization or deep neural networks. Either of the techniques in isolation may result in suboptimal performance for the prediction task. Also, most of the existing models face challenges particularly in handling diversity between domains and learning complex non-linear relationships that exist amongst entities (users/items) within and across domains. In this work, we propose an end-to-end neural network model -- NeuCDCF -- to address these challenges in a cross-domain setting. More importantly, NeuCDCF follows a wide and deep framework and it learns the representations jointly from both matrix factorization and deep neural networks. We perform experiments on four real-world datasets and demonstrate that our model performs better than state-of-the-art CDCF models.
While the models @cite_23 @cite_0 @cite_13 @cite_43 consider learning embeddings together, they completely ignore the domain-specific representations of the shared users or items. The performance of these models @cite_23 @cite_0 is heavily dependent on the relatedness of the domains. In contrast, our proposed model learns domain-specific representations that significantly improve the prediction performance. Further, CCCFNet @cite_0 relies on content information to bridge the source and target domains. Moreover, all of these models @cite_23 @cite_0 @cite_13 are based on either wide or deep networks, but not both. We are also aware of other models proposed for cross-domain settings @cite_30 @cite_28 @cite_2 @cite_0 . However, their research scope differs from ours, as they bridge the source and target domains using available side information.
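The following sketch illustrates what fusing a "wide" matrix-factorization path with a "deep" MLP path can look like; it is a NeuMF-style illustration under our own assumptions (layer sizes, fusion layer), not the authors' exact NeuCDCF architecture.

```python
import torch
import torch.nn as nn

# Illustrative wide-and-deep collaborative filtering model: the wide path is
# an element-wise product of MF embeddings, the deep path is an MLP over
# concatenated embeddings, and a final linear layer fuses both.
class WideAndDeepCF(nn.Module):
    def __init__(self, n_users, n_items, k=16):
        super().__init__()
        self.user_mf = nn.Embedding(n_users, k)    # wide (MF) path
        self.item_mf = nn.Embedding(n_items, k)
        self.user_mlp = nn.Embedding(n_users, k)   # deep (MLP) path
        self.item_mlp = nn.Embedding(n_items, k)
        self.mlp = nn.Sequential(nn.Linear(2 * k, k), nn.ReLU(),
                                 nn.Linear(k, k), nn.ReLU())
        self.out = nn.Linear(2 * k, 1)             # fuses both paths

    def forward(self, users, items):
        wide = self.user_mf(users) * self.item_mf(items)   # element-wise MF
        deep = self.mlp(torch.cat([self.user_mlp(users),
                                   self.item_mlp(items)], dim=-1))
        return self.out(torch.cat([wide, deep], dim=-1)).squeeze(-1)
```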
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_0", "@cite_43", "@cite_23", "@cite_2", "@cite_13" ], "mid": [ "2114079787", "2951077644", "2079659743", "1533230146" ], "abstract": [ "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49 enhancement on existing users and 115 enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with transitional generative topic models, for modeling cross-domain recommender systems. Scalability analysis show that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain.", "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. 
More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "Given an entity in a source domain, finding its matched entities from another (target) domain is an important task in many applications. Traditionally, the problem was usually addressed by first extracting major keywords corresponding to the source entity and then querying relevant entities from the target domain using those keywords. However, the method would inevitably fail if the two domains have little or no overlap in the content. An extreme case is that the source domain is in English and the target domain is in Chinese. In this paper, we formalize the problem as entity matching across heterogeneous sources and propose a probabilistic topic model to solve the problem. The model integrates the topic extraction and entity matching, two core subtasks for dealing with the problem, into a unified model. Specifically, for handling the text disjointing problem, we use a cross-sampling process in our model to extract topics with terms coming from all the sources, and leverage existing matching relations through latent topic layers instead of at text layers. Benefiting from the proposed model, we can not only find the matched documents for a query entity, but also explain why these documents are related by showing the common topics they share. Our experiments in two real-world applications show that the proposed model can extensively improve the matching performance (+19.8% and +7.1% in two applications, respectively) compared with several alternative methods.", "Abstract: We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning." ] }
1907.08661
2963611731
Searching sounds by text labels is often difficult, as text descriptions cannot describe the audio content in detail. Query by vocal imitation bridges this gap and provides a novel way to search for sounds. Several algorithms for sound search by vocal imitation have been proposed and evaluated in a simulation environment; however, they have not been deployed into a real search engine nor evaluated by real users. This pilot work conducts a subjective study to compare these two approaches to sound search, and tries to answer the question of which approach works better for what kinds of sounds. To do so, we developed two web-based search engines for sound, one by vocal imitation (Vroom!) and the other by text description (TextSearch). We also developed an experimental framework to host these engines to collect statistics of user behaviors and ratings. Results showed that Vroom! received significantly higher search satisfaction ratings than TextSearch did for sound categories that were difficult for subjects to describe by text. Results also showed a better overall ease-of-use rating for Vroom! than TextSearch on the limited sound library in our experiments. These findings suggest advantages of vocal-imitation-based search for sound in practice.
In our previous work @cite_13 , we first proposed a supervised system using a Stacked Auto-Encoder (SAE) for automatic feature learning, followed by an SVM for imitation classification. We then proposed an unsupervised system called IMISOUND @cite_27 to overcome the closed-set limitation in @cite_13 . The SAE was adopted to extract features from both imitation queries and sound candidates, and various similarity measures were calculated @cite_7 @cite_11 @cite_6 . Due to the separation of feature representation and metric learning, we further proposed end-to-end Siamese-style convolutional neural networks @cite_17 to integrate these two modules, of which the transfer-learning-based TL-IMINET is our most recent model @cite_15 . Meanwhile, the benefits of applying positive and negative imitations to update the cosine similarity between the query and sound-candidate embeddings were investigated in @cite_26 . To understand what such neural networks actually learn, we also visualized and sonified the input patterns in TL-IMINET @cite_2 using activation maximization @cite_24 .
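The retrieval step shared by these systems can be sketched as follows: encode the vocal-imitation query and every sound candidate into fixed-length vectors and rank candidates by cosine similarity. The two encoder callables below stand in for the SAE or the two CNN towers of the Siamese networks; they are assumptions for illustration, not the published models.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_candidates(query_audio, library, encode_imitation, encode_sound):
    """library: iterable of (name, audio); returns (name, score) pairs,
    best match first."""
    q = encode_imitation(query_audio)
    scored = [(name, cosine(q, encode_sound(audio))) for name, audio in library]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```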
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_17", "@cite_6", "@cite_24", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2889726676", "2890913619", "2406791552", "2785325870" ], "abstract": [ "Designing systems that allow users to search sounds through vocal imitation augments the current text-based search engines and advances human-computer interaction. Previously we proposed a Siamese style convolutional network called IMINET for sound search by vocal imitation, which jointly addresses feature extraction by Convolutional Neural Network (CNN) and similarity calculation by Fully Connected Network (FCN), and is currently the state of the art. However, how such architecture works is still a mystery. In this paper, we try to answer this question. First, we visualize the input patterns that maximize the activation of different neurons in each CNN tower; this helps us understand what features are extracted from vocal imitations and sound candidates. Second, we visualize the imitation-sound input pairs that maximize the activation of different neurons in the FCN layers; this helps us understand what kind of input pattern pairs are recognized during the similarity calculation. Interesting patterns are found to reveal the local-to-global and simple-to-conceptual learning mechanism of TL-IMINET. Experiments also show how transfer learning helps to improve TL-IMINET performance from the visualization aspect.", "Conventional methods for finding audio in databases typically search text labels, rather than the audio itself. This can be problematic as labels may be missing, irrelevant to the audio content, or not known by users. Query by vocal imitation lets users query using vocal imitations instead. To do so, appropriate audio feature representations and effective similarity measures of imitations and original sounds must be developed. In this paper, we build upon our preliminary work to propose Siamese style convolutional neural networks to learn feature representations and similarity measures in a unified end-to-end training framework. Our Siamese architecture uses two convolutional neural networks to extract features, one from vocal imitations and the other from original sounds. The encoded features are then concatenated and fed into a fully connected network to estimate their similarity. We propose two versions of the system: IMINET is symmetric where the two encoders have an identical structure and are trained from scratch, while TL-IMINET is asymmetric and adopts the transfer learning idea by pretraining the two encoders from other relevant tasks: spoken language recognition for the imitation encoder and environmental sound classification for the original sound encoder. Experimental results show that both versions of the proposed system outperform a state-of-the-art system for sound search by vocal imitation, and the performance can be further improved when they are fused with the state of the art system. Results also show that transfer learning significantly improves the retrieval performance. This paper also provides insights to the proposed networks by visualizing and sonifying input patterns that maximize the activation of certain neurons in different layers.", "Vocal imitation is widely used in human interactions. In this paper, we propose a novel human-computer interaction system called IMISOUND that listens to a vocal imitation and retrieves similar sounds from a sound library. 
This system allows users to search sounds even if they do not remember their semantic labels or the sounds do not have these labels (e.g., synthesized sound effects). IMISOUND employs a Stacked Auto-Encoder (SAE) to extract features from both the vocal imitation (query) and sounds in the library (candidates). The SAE is pre-trained using training vocal imitations of sounds not in the library to automatically learn more suitable feature representations than human-engineered features such as MFCC's. It then measures the similarity between the query and each sound candidate, using the K-L divergence and Dynamic Time Warping distance between their feature representations, and finally retrieves the closest sounds. IMISOUND is an unsupervised system in the sense that no training is performed for the target sound; nonetheless, experiments show that it achieves comparable performance to a previously proposed supervised system which requires pre-training on sounds to be retrieved. Experiments also show that IMISOUND significantly outperforms an unsupervised MFCC-based baseline system, validating the advantage of the SAE feature representation.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in the PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is only 2.4 points lower than the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
In the scientific domain, several approaches automate or simplify light-source placement and orientation---either with procedural methods as suggested by Schwarz and Wonka @cite_11 , or by "painting" the parts of a scene to be illuminated @cite_3 @cite_15 @cite_8 @cite_23 @cite_28 @cite_41 . While these methods deliver solutions to certain aspects, they ignore the iterative, interactive workflow of lighting designers, in which a large variety of considerations (that may not all be quantifiable) play an important role. Other approaches focus on interactivity and try to shorten the feedback cycles between modeling and simulation. Both @cite_37 and Krösl et al. @cite_21 rely on fast, GPU-based simulations. Despite being efficient, they do not offer guided modeling proposals or methods to explore and compare parallel modeling tracks.
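The optimization loop behind such automated light placement can be sketched as a simple greedy search: perturb candidate light positions and keep changes that reduce the deviation from a target illumination. Here, `simulate(scene, lights)`, returning per-measurement-point illuminance, is an assumed black box (e.g., a fast GPU-based renderer); the hill climbing below is a toy stand-in for the cited stochastic and painting-based schemes, not their actual algorithms.

```python
import random

def place_lights(scene, lights, target, simulate, iters=500, step=0.1):
    # Squared deviation of the simulated illumination from the target values.
    def loss(candidate):
        illum = simulate(scene, candidate)
        return sum((i - t) ** 2 for i, t in zip(illum, target))

    best, best_loss = lights, loss(lights)
    for _ in range(iters):
        cand = [tuple(c + random.uniform(-step, step) for c in pos)
                for pos in best]              # jitter every light position
        cand_loss = loss(cand)
        if cand_loss < best_loss:             # greedy hill climbing
            best, best_loss = cand, cand_loss
    return best, best_loss
```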
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_28", "@cite_41", "@cite_21", "@cite_3", "@cite_23", "@cite_15", "@cite_11" ], "mid": [ "2076669957", "2963970120", "1983440625", "2060174494" ], "abstract": [ "This paper focuses on efficient rendering based on pre-computed light transport, with realistic materials and shadows under all-frequency direct lighting such an environment maps. The basic difficulty is representation and computation in the 6D space of light direction, view direction, and surface position. While image-based and synthetic methods for real-time rendering have been proposed, they do not scale to high sampling rates with variation of both lighting and viewpoint. Current approaches are therefore limited to lower dimensionality (only lighting or viewpoint variation, not both) or lower sampling rates (low frequency lighting and materials). We propose a new mathematical and computational analysis of pre-computed light transport. We use factored forms, separately pre-computing and representing visibility and material properties. Rendering then requires computing triple product integrals at each vertex, involving the lighting, visibility and BRDF. Our main contribution is a general analysis of these triple product integrals, which are likely to have broad applicability in computer graphics and numerical analysis. We first determine the computational complexity in a number of bases like point samples, spherical harmonics and wavelets. We then give efficient linear and sublinear-time algorithms for Haar wavelets, incorporating non-linear wavelet approximation of lighting and BRDFs. Practically, we demonstrate rendering of images under new lighting and viewing conditions in a few seconds, significantly faster than previous techniques.", "Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e. diffuse and specular) and illumination (i.e. environment maps). On the one hand, current methods that produce very high fidelity results, typically require controlled settings, expensive devices, or significant manual effort. To the other hand, methods that are automatic and work on 'in the wild' Internet images, often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e. environment), and different materials under the same illumination provide valuable constraints that can be exploited to yield a high-quality solution (i.e. specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. Technically, we enable this by a novel scalable formulation using parametric mixture models that allows for simultaneous estimation of all materials and illumination directly from a set of (uncontrolled) Internet images. The core of this approach is an optimization procedure that uses two neural networks that are trained on synthetic images to predict good gradients in parametric space given observation of reflected light. 
We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.", "An interactive and intuitive way of designing lighting around a model is desirable in many applications. In this paper, we present a tool for interactive inverse lighting in which a model is rendered based on sketched lighting effects. To specify target lighting, the user freely sketches bright and dark regions on the model as if coloring it with crayons. Using these hints and the geometry of the model, the system efficiently derives light positions, directions, intensities and spot angles, assuming a local point-light based illumination model. As the system also minimizes changes from the previous specifications, lighting can be designed incrementally. We formulate the inverse lighting problem as that of an optimization and solve it using a judicious mix of greedy and minimization methods. We also map expensive calculations of the optimization to graphics hardware to make the process fast and interactive. Our tool can be used to augment larger systems that use point-light based illumination models but lack intuitive interfaces for lighting design, and also in conjunction with applications like ray tracing where interactive lighting design is difficult to achieve.", "We present a system for the lighting design of procedurally modeled buildings. The design is procedurally specified as part of the ordinary modeling workflow by defining goals for the illumination that should be attained and locations where luminaires may be installed to realize these goals. Additionally, constraints can be modeled that make the arrangement of the installed luminaires respect certain aesthetic and structural considerations. From this specification, the system automatically generates a lighting solution for any concrete model instance. The underlying, intricate joint optimization and constraint satisfaction problem is approached with a stochastic scheme that operates directly in the complex subspace where all constraints are observed. To navigate this subspace efficaciously, the actual lighting situation is taken into account. We demonstrate our system on multiple examples spanning a variety of architectural structures and lighting designs." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_12 tackle the problem of comparing different light configurations by linking the simulation results and a spatial view with non-spatial ranking and comparison visualizations. Their idea of setting the importance of certain criteria to compute an overall score during the decision process (i.e., giving more weight to certain illumination requirements, to certain scene objects, or to global factors like maintenance costs) has influenced our work. Nevertheless, their approach does not take the modeling process itself into account and presumes the availability of a large number of valid, pre-simulated lighting configurations. This assumption rarely holds in real-world scenarios (since the trial-and-error-based methodology converges to a single valid solution), raising the need for novel methods that produce multiple solutions in parallel. Other solutions, such as the one proposed by @cite_34 , record light rays and offer visual-analytics tools to explore, evaluate, and compare light interactions, potentially involving several scenes. Nevertheless, they do not offer suggestions for scene manipulations that would fulfill given constraints.
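The weight-based ranking idea can be illustrated with a minimal sketch: user-chosen weights encode the importance of each quality criterion, and pre-simulated configurations are ordered by their weighted score. The criterion names and numbers below are made up for illustration.

```python
def overall_score(criterion_scores, weights):
    """Both arguments are dicts keyed by criterion name; scores in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[c] * criterion_scores[c] for c in weights) / total

# Two hypothetical lighting configurations scored against three criteria.
configs = {
    "A": {"uniformity": 0.8, "glare": 0.6, "maintenance_cost": 0.4},
    "B": {"uniformity": 0.6, "glare": 0.9, "maintenance_cost": 0.7},
}
weights = {"uniformity": 2.0, "glare": 1.0, "maintenance_cost": 0.5}
ranking = sorted(configs, key=lambda c: overall_score(configs[c], weights),
                 reverse=True)  # best configuration first
```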
{ "cite_N": [ "@cite_34", "@cite_12" ], "mid": [ "2963970120", "1968381457", "2766584316", "2055686029" ], "abstract": [ "Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e. diffuse and specular) and illumination (i.e. environment maps). On the one hand, current methods that produce very high fidelity results, typically require controlled settings, expensive devices, or significant manual effort. To the other hand, methods that are automatic and work on 'in the wild' Internet images, often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e. environment), and different materials under the same illumination provide valuable constraints that can be exploited to yield a high-quality solution (i.e. specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. Technically, we enable this by a novel scalable formulation using parametric mixture models that allows for simultaneous estimation of all materials and illumination directly from a set of (uncontrolled) Internet images. The core of this approach is an optimization procedure that uses two neural networks that are trained on synthetic images to predict good gradients in parametric space given observation of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.", "Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, i.e., roads, walkways, corridors etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration, rather the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. 
Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real-world videos are reported, which validate the proposed approach.", "Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environment), and different materials under the same illumination provide valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks that are trained on synthetic images to predict good gradients in parametric space given observation of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.", "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts.
This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
In accordance with @cite_32 , we classify LightGuider as follows: Utilizing an interactive lighting simulation, lighting designers start out with a single sample and generate new samples on-the-fly, supported by guidance mechanisms (see sec:relworkguidance) suggesting alternatives in the parameter space. Immediate feedback of the simulation results provides them with navigation. As lighting designers need to evaluate qualitative as well as quantitative aspects of the simulation output, the domain goals of LightGuider present a mixture of and domain goals, which are reached through the of both. As a secondary analysis objective, we identify in the elaboration of alternative designs to illustrate different trade-offs.
{ "cite_N": [ "@cite_32" ], "mid": [ "1934088123", "2060174494", "2120860586", "1983440625" ], "abstract": [ "State-of-the-art lighting design is based on physically accurate lighting simulations of scenes such as offices. The simulation results support lighting designers in the creation of lighting configurations, which must meet contradicting customer objectives regarding quality and price while conforming to industry standards. However, current tools for lighting design impede rapid feedback cycles. On the one side, they decouple analysis and simulation specification. On the other side, they lack capabilities for a detailed comparison of multiple configurations. The primary contribution of this paper is a design study of LiteVis, a system for efficient decision support in lighting design. LiteVis tightly integrates global illumination-based lighting simulation, a spatial representation of the scene, and non-spatial visualizations of parameters and result indicators. This enables an efficient iterative cycle of simulation parametrization and analysis. Specifically, a novel visualization supports decision making by ranking simulated lighting configurations with regard to a weight-based prioritization of objectives that considers both spatial and non-spatial characteristics. In the spatial domain, novel concepts support a detailed comparison of illumination scenarios. We demonstrate LiteVis using a real-world use case and report qualitative feedback of lighting designers. This feedback indicates that LiteVis successfully supports lighting designers to achieve key tasks more efficiently and with greater certainty.", "We present a system for the lighting design of procedurally modeled buildings. The design is procedurally specified as part of the ordinary modeling workflow by defining goals for the illumination that should be attained and locations where luminaires may be installed to realize these goals. Additionally, constraints can be modeled that make the arrangement of the installed luminaires respect certain aesthetic and structural considerations. From this specification, the system automatically generates a lighting solution for any concrete model instance. The underlying, intricate joint optimization and constraint satisfaction problem is approached with a stochastic scheme that operates directly in the complex subspace where all constraints are observed. To navigate this subspace efficaciously, the actual lighting situation is taken into account. We demonstrate our system on multiple examples spanning a variety of architectural structures and lighting designs.", "LightGuide is a system that explores a new approach to gesture guidance where we project guidance hints directly on a user's body. These projected hints guide the user in completing the desired motion with their body part which is particularly useful for performing movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector and we present four novel interaction techniques that are focused on guiding a user's hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand animation videos on a computer display in both time and accuracy. 
Exceeding our expectations, participants performed movements with an average error of 21.6mm, nearly 85 more accurately than when guided by video.", "An interactive and intuitive way of designing lighting around a model is desirable in many applications. In this paper, we present a tool for interactive inverse lighting in which a model is rendered based on sketched lighting effects. To specify target lighting, the user freely sketches bright and dark regions on the model as if coloring it with crayons. Using these hints and the geometry of the model, the system efficiently derives light positions, directions, intensities and spot angles, assuming a local point-light based illumination model. As the system also minimizes changes from the previous specifications, lighting can be designed incrementally. We formulate the inverse lighting problem as that of an optimization and solve it using a judicious mix of greedy and minimization methods. We also map expensive calculations of the optimization to graphics hardware to make the process fast and interactive. Our tool can be used to augment larger systems that use point-light based illumination models but lack intuitive interfaces for lighting design, and also in conjunction with applications like ray tracing where interactive lighting design is difficult to achieve." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
When it comes to provenance information in visualization, @cite_27 give a comprehensive overview of the different types of provenance information (e.g., the history of data edits, the history of graphical views and visualization types, or the history of interactions) and of the different purposes they serve in the context of visualization (e.g., recalling different states of the analysis, recovering actions, or supporting collaboration). There are, however, varying approaches to visualizing this information. The most common choice is to present the provenance tree as a node-link diagram that shows the sequence of states and alternative branches of a workflow, as described by @cite_29 .
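The data structure underlying such a node-link provenance view can be sketched in a few lines: every action creates a child node holding the resulting state, and acting again from an earlier node creates a branch, i.e., an alternative workflow. The field names below are illustrative, not taken from any of the cited systems.

```python
class ProvenanceNode:
    def __init__(self, state, action=None, parent=None):
        self.state, self.action, self.parent = state, action, parent
        self.children = []

    def apply(self, action, new_state):
        # Record an action by appending a child; branching arises naturally
        # when apply() is called twice on the same node.
        child = ProvenanceNode(new_state, action, parent=self)
        self.children.append(child)
        return child

root = ProvenanceNode(state={"lights": 1})
a = root.apply("add light", {"lights": 2})
b = root.apply("dim light", {"lights": 1})   # alternative branch from root
```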
{ "cite_N": [ "@cite_27", "@cite_29" ], "mid": [ "1959365993", "2950879328", "2104791905", "2784401422" ], "abstract": [ "While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.", "A major challenge in data-driven biomedical research lies in the collection and representation of data provenance information to ensure that findings are reproducibile. In order to communicate and reproduce multi-step analysis workflows executed on datasets that contain data for dozens or hundreds of samples, it is crucial to be able to visualize the provenance graph at different levels of aggregation. Most existing approaches are based on node-link diagrams, which do not scale to the complexity of typical data provenance graphs. In our proposed approach, we reduce the complexity of the graph using hierarchical and motif-based aggregation. Based on user action and graph attributes, a modular degree-of-interest (DoI) function is applied to expand parts of the graph that are relevant to the user. This interest-driven adaptive approach to provenance visualization allows users to review and communicate complex multi-step analyses, which can be based on hundreds of files that are processed by numerous workflows. We have integrated our approach into an analysis platform that captures extensive data provenance information, and demonstrate its effectiveness by means of a biomedical usage scenario.", "Provenance has been studied extensively in both database and workflow management systems, so far with little convergence of definitions or models. Provenance in databases has generally been defined for relational or complex object data, by propagating fine-grained annotations or algebraic expressions from the input to the output. This kind of provenance has been found useful in other areas of computer science: annotation databases, probabilistic databases, schema and data integration, etc. In contrast, workflow provenance aims to capture a complete description of evaluation - or enactment - of a workflow, and this is crucial to verification in scientific computation. Workflows and their provenance are often presented using graphical notation, making them easy to visualize but complicating the formal semantics that relates their run-time behavior with their provenance records. 
We bridge this gap by extending a previously-developed dataflow language which supports both database-style querying and workflow-style batch processing steps to produce a workflow-style provenance graph that can be explicitly queried. We define and describe the model through examples, present queries that extract other forms of provenance, and give an executable definition of the graph semantics of dataflow expressions.", "Prior art has shown it is possible to estimate, through image processing and computer vision techniques, the types and parameters of transformations that have been applied to the content of individual images to obtain new images. Given a large corpus of images and a query image, an interesting further step is to retrieve the set of original images whose content is present in the query image, as well as the detailed sequences of transformations that yield the query image given the original images. This is a problem that recently has received the name of image provenance analysis. In these times of public media manipulation ( e.g., fake news and meme sharing), obtaining the history of image transformations is relevant for fact checking and authorship verification, among many other applications. This article presents an end-to-end processing pipeline for image provenance analysis, which works at real-world scale. It employs a cutting-edge image filtering solution that is custom-tailored for the problem at hand, as well as novel techniques for obtaining the provenance graph that expresses how the images, as nodes, are ancestrally connected. A comprehensive set of experiments for each stage of the pipeline is provided, comparing the proposed solution with state-of-the-art results, employing previously published datasets. In addition, this work introduces a new dataset of real-world provenance cases from the social media site Reddit, along with baseline results." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_46 focus on the scalability of node-link diagrams for encoding a history of analysis workflows. They use filtering, node aggregation, as well as a user-interest-driven expansion of nodes (i.e., a degree-of-interest function) to make the tree more comprehensible. In a different work, @cite_7 use a provenance tree for visualizing automatically recorded user interactions and visualizations. Again, they focus on the efficient retrieval of analysis states by offering different possibilities for querying the data (e.g., querying by user-generated examples). These works offer sophisticated solutions to the scalability problems of provenance trees in the form of node-link diagrams, as well as solutions for efficient interaction with large trees. However, they do not focus on integrating visual representations of additional information for each tree node. Our application scenario requires a quick visual comparison of multiple numerical variables (i.e., illumination constraints) for each state, to enable assessing quality changes for each lighting-design action as well as trends of the lighting-design process and of alternative workflows.
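One common degree-of-interest formulation can be sketched as follows: a node's interest is its a-priori importance minus its tree distance from the current focus node, and only nodes above a threshold stay expanded. The sketch assumes nodes with parent pointers (e.g., the provenance-node sketch above); the concrete scoring is our own assumption, not the cited systems' exact function.

```python
def ancestors(node):
    # Path from a node up to the root, including the node itself.
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path

def tree_distance(a, b):
    pa, pb = ancestors(a), ancestors(b)
    lca = next(n for n in pa if n in pb)      # lowest common ancestor
    return pa.index(lca) + pb.index(lca)

def degree_of_interest(node, focus, api=lambda n: 0.0, dist_weight=1.0):
    # api: a-priori importance of a node; interest decays with distance.
    return api(node) - dist_weight * tree_distance(node, focus)

def visible_nodes(nodes, focus, threshold=-3.0):
    # Keep only nodes interesting enough relative to the current focus.
    return [n for n in nodes if degree_of_interest(n, focus) >= threshold]
```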
{ "cite_N": [ "@cite_46", "@cite_7" ], "mid": [ "137863291", "2101474491", "174865904", "2888018391" ], "abstract": [ "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.", "An alternative form to multidimensional projections for the visual analysis of data represented in multidimensional spaces is the deployment of similarity trees, such as Neighbor Joining trees. They organize data objects on the visual plane emphasizing their levels of similarity with high capability of detecting and separating groups and subgroups of objects. Besides this similarity-based hierarchical data organization, some of their advantages include the ability to decrease point clutter; high precision; and a consistent view of the data set during focusing, offering a very intuitive way to view the general structure of the data set as well as to drill down to groups and subgroups of interest. Disadvantages of similarity trees based on neighbor joining strategies include their computational cost and the presence of virtual nodes that utilize too much of the visual space. This paper presents a highly improved version of the similarity tree technique. 
The improvements in the technique are given by two procedures. The first is a strategy that replaces virtual nodes by promoting real leaf nodes to their place, saving large portions of space in the display and maintaining the expressiveness and precision of the technique. The second improvement is an implementation that significantly accelerates the algorithm, impacting its use for larger data sets. We also illustrate the applicability of the technique in visual data mining, showing its advantages to support visual classification of data sets, with special attention to the case of image classification. We demonstrate the capabilities of the tree for analysis and iterative manipulation and employ those capabilities to support evolving to a satisfactory data organization and classification.", "This paper introduces connectivity preserving constraints into spatio-temporal multi-view reconstruction. We efficiently model connectivity constraints by precomputing a geodesic shortest path tree on the occupancy likelihood. Connectivity of the final occupancy labeling is ensured with a set of linear constraints on the labeling function. In order to generalize the connectivity constraints from objects with genus 0 to an arbitrary genus, we detect loops by analyzing the visual hull of the scene. A modification of the constraints ensures connectivity in the presence of loops. The proposed efficient implementation adds little runtime and memory overhead to the reconstruction method. Several experiments show significant improvement over state-of-the-art methods and validate the practical use of this approach in scenes with fine structured details.", "Storing analytical provenance generates a knowledge base with a large potential for recalling previous results and guiding users in future analyses. However, without extensive manual creation of meta information and annotations by the users, search and retrieval of analysis states can become tedious. We present KnowledgePearls, a solution for efficient retrieval of analysis states that are structured as provenance graphs containing automatically recorded user interactions and visualizations. As a core component, we describe a visual interface for querying and exploring analysis states based on their similarity to a partial definition of a requested analysis state. Depending on the use case, this definition may be provided explicitly by the user by formulating a search query or inferred from given reference states. We explain our approach using the example of efficient retrieval of demographic analyses by Hans Rosling and discuss our implementation for a fast look-up of previous states. Our approach is independent of the underlying visualization framework. We discuss the applicability for visualizations which are based on the declarative grammar Vega and we use a Vega-based implementation of Gapminder as guiding example. We additionally present a biomedical case study to illustrate how KnowledgePearls facilitates the exploration process by recalling states from earlier analyses." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
Besides node-link diagrams, there are examples of other visualization types used to show provenance information. Viégas et al. @cite_16 , for instance, visualize the editing history of a Wikipedia (wikipedia.org) page in a flow-like visualization. This visualization is specifically designed to represent one page with text running from top to bottom, but the only interactions it supports (indirectly) are "adding text" and "removing text". Thus, it does not lend itself to our problem scenario. Another approach by @cite_40 shows the editing history of illustrations. They provide a superimposed visualization of two illustration states with "before" states rendered semi-transparently. Moreover, the illustration is augmented with arrows, icons, and color. Arrows and icons indicate spatial transformations of (parts of) the illustration, while color indicates user changes. This is a specialized design for the problem at hand and cannot be transferred to our application scenario.
{ "cite_N": [ "@cite_40", "@cite_16" ], "mid": [ "137863291", "2106268337", "1970022459", "2102675982" ], "abstract": [ "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.", "We present a novel dynamic graph visualization technique based on node-link diagrams. The graphs are drawn side-byside from left to right as a sequence of narrow stripes that are placed perpendicular to the horizontal time line. The hierarchically organized vertices of the graphs are arranged on vertical, parallel lines that bound the stripes; directed edges connect these vertices from left to right. To address massive overplotting of edges in huge graphs, we employ a splatting approach that transforms the edges to a pixel-based scalar field. This field represents the edge densities in a scalable way and is depicted by non-linear color mapping. The visualization method is complemented by interaction techniques that support data exploration by aggregation, filtering, brushing, and selective data zooming. Furthermore, we formalize graph patterns so that they can be interactively highlighted on demand. 
A case study on software releases explores the evolution of call graphs extracted from the JUnit open source software project. In a second application, we demonstrate the scalability of our approach by applying it to a bibliography dataset containing more than 1.5 million paper titles from 60 years of research history producing a vast amount of relations between title words.", "It has been known for some time that larger graphs can be interpreted if laid out in 3D and displayed with stereo and or motion depth cues to support spatial perception. However, prior studies were carried out using displays that provided a level of detail far short of what the human visual system is capable of resolving. Therefore, we undertook a graph comprehension study using a very high resolution stereoscopic display. In our first experiment, we examined the effect of stereoscopic display, kinetic depth, and using 3D tubes versus lines to display the links. The results showed a much greater benefit for 3D viewing than previous studies. For example, with both motion and stereoscopic depth cues, unskilled observers could see paths between nodes in 333 node graphs with less than a 10p error rate. Skilled observers could see up to a 1000-node graph with less than a 10p error rate. This represented an order of magnitude increase over 2D display. In our second experiment, we varied both nodes and links to understand the constraints on the number of links and the size of graph that can be reliably traced. We found the difference between number of links and number of nodes to best account for error rates and suggest that this is evidence for a “perceptual phase transition.” These findings are discussed in terms of their implications for information display.", "Presentation and graphics software enables users to experiment with variations of illustrations. They can revisit recent editing operations using the ubiquitous undo command, but they are limited to sequential exploration. We propose a new interaction metaphor and visualization for operation history. While editing, a user can access a history mode in which actions are denoted by graphical depictions appearing on top of the document. Our work is inspired by the visual language of film storyboards and assembly instructions. Our storyboard provides an interactive visual history, summarizing the editing of a document or a selected object. Each view is composed of action depictions representing the user’s editing actions and enables the user to consider the operation history in context rather than in a disconnected list view. This metaphor provides instant access to any past action and we demonstrate that this is an intuitive interface to a selective undo mechanism." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
Guidance in visualization, as defined by Ceneda et al. @cite_18 , can be found in various forms and application scenarios. However, only a few approaches relate to our problem at hand. @cite_33 present a guidance approach to automatically generate a set of information-visualization designs appropriate for the given data and tasks. A selection of the most useful visualization mappings is input to the guidance mechanism and influences future suggestions. O'Donovan et al. @cite_0 present a similar approach that helps in creating graphic design layouts. The system interactively suggests changes to the position, scale, and alignment of elements that are placed on a page. Both systems present guidance approaches to optimize a design.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_33" ], "mid": [ "2488113179", "2142493242", "2012118336", "137863291" ], "abstract": [ "Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.", "We present a nested model for the visualization design and validation with four layers: characterize the task and data in the vocabulary of the problem domain, abstract into operations and data types, design visual encoding and interaction techniques, and create algorithms to execute techniques efficiently. The output from a level above is input to the level below, bringing attention to the design challenge that an upstream error inevitably cascades to all downstream levels. This model provides prescriptive guidance for determining appropriate evaluation approaches by identifying threats to validity unique to each level. We also provide three recommendations motivated by this model: authors should distinguish between these levels when claiming contributions at more than one of them, authors should explicitly state upstream assumptions at levels above the focus of a paper, and visualization venues should accept more papers on domain characterization.", "Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. 
In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.", "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable." ] }
1907.08553
2964344820
LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.
@cite_45 present a guidance approach that helps to discover interesting data and patterns based on the system user's interests. They provide a system to extract, combine, refine, and visualize such findings of interest, distinguishing between user-driven and data-driven findings. In our work, we combine user-driven steering (i.e., the user chooses which areas and which illumination constraints are more important than others) with data-driven steering (i.e., optimizing the current design with respect to the specified illumination constraints) of the guidance suggestions.
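As a concrete illustration of combining the two steering modes, the sketch below (hypothetical names, data, and weights, not LightGuider's implementation) ranks simulated candidate modeling steps by their constraint satisfaction weighted with user-assigned importances.

```python
# Illustrative sketch of combining user-driven weights with data-driven
# constraint scores to rank candidate next modeling steps.
import numpy as np

def rank_suggestions(candidate_scores: np.ndarray,
                     user_weights: np.ndarray) -> np.ndarray:
    """candidate_scores: (n_candidates, n_constraints) degrees of constraint
    satisfaction in [0, 1], obtained by simulating each candidate step.
    user_weights: (n_constraints,) importance the designer assigns to each
    illumination constraint. Returns candidate indices, best first."""
    w = user_weights / user_weights.sum()        # normalize user preferences
    overall = candidate_scores @ w               # weighted constraint satisfaction
    return np.argsort(-overall)

# Three simulated candidate steps, two illumination constraints.
scores = np.array([[0.9, 0.2],
                   [0.5, 0.6],
                   [0.3, 0.9]])
weights = np.array([1.0, 3.0])                   # designer cares most about constraint 2
print(rank_suggestions(scores, weights))         # -> [2 1 0]
```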
{ "cite_N": [ "@cite_45" ], "mid": [ "2033074184", "2024021823", "2519586580", "2093353037" ], "abstract": [ "The extraction of relevant and meaningful information from multivariate or high-dimensional data is a challenging problem. One reason for this is that the number of possible representations, which might contain relevant information, grows exponentially with the amount of data dimensions. Also, not all views from a possibly large view space, are potentially relevant to a given analysis task or user. Focus+Context or Semantic Zoom Interfaces can help to some extent to efficiently search for interesting views or data segments, yet they show scalability problems for very large data sets. Accordingly, users are confronted with the problem of identifying interesting views, yet the manual exploration of the entire view space becomes ineffective or even infeasible. While certain quality metrics have been proposed recently to identify potentially interesting views, these often are defined in a heuristic way and do not take into account the application or user context. We introduce a framework for a feedback-driven view exploration, inspired by relevance feedback approaches used in Information Retrieval. Our basic idea is that users iteratively express their notion of interestingness when presented with candidate views. From that expression, a model representing the user's preferences, is trained and used to recommend further interesting view candidates. A decision support system monitors the exploration process and assesses the relevance-driven search process for convergence and stability. We present an instantiation of our framework for exploration of Scatter Plot Spaces based on visual features. We demonstrate the effectiveness of this implementation by a case study on two real-world datasets. We also discuss our framework in light of design alternatives and point out its usefulness for development of user- and context-dependent visual exploration systems.", "We seek to elicit individual design preferences through human-computer interaction. During an iteration of the interactive session, the computer queries the subject by presenting a set of designs from which the subject must make a choice. The computer uses this choice feedback and creates the next set of designs using knowledge accumulated from previous choices. Under the hypothesis that human responses are deterministic, we discuss how query schemes in the elicitation task can be viewed mathematically as learning or optimization algorithms. Two query schemes are defined. Query type 1 considers the subject’s binary choices as definite preferences, i.e., only preferred designs are chosen, while others are skipped; query type 2 treats choices as comparisons among a set, i.e., preferred designs are chosen relative to those in the current set but may be dropped in future iterations. We show that query type 1 can be considered as an active learning problem, while type 2 as a “black-box” optimization problem. This paper concentrates on query type 2. Two algorithms based on support vector machine and efficient global optimization search are presented and discussed. Early user tests for vehicle exterior styling preference elicitation are also presented. [DOI: 10.1115 1.4005104]", "Humans navigate crowded spaces such as a university campus by following common sense rules based on social etiquette. 
In this paper, we argue that in order to enable the design of new target tracking or trajectory forecasting methods that can take full advantage of these rules, we need to have access to better data in the first place. To that end, we contribute a new large-scale dataset that collects videos of various types of targets (not just pedestrians, but also bikers, skateboarders, cars, buses, golf carts) that navigate in a real world outdoor environment such as a university campus. Moreover, we introduce a new characterization that describes the “social sensitivity” at which two targets interact. We use this characterization to define “navigation styles” and improve both forecasting models and state-of-the-art multi-target tracking–whereby the learnt forecasting models help the data association step.", "An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to performvisual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources isguided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of pedestrian sequence provide important clues to generate the contents of occluded parts. With the Spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
Person re-ID for still images has been extensively studied @cite_34 @cite_13 @cite_25 @cite_19 @cite_37 @cite_40 @cite_10 . Recently, researchers have started to pay attention to video re-ID @cite_29 @cite_21 @cite_33 @cite_27 @cite_41 @cite_20 @cite_23 @cite_9 @cite_12 . McLaughlin et al. @cite_21 and Wu et al. @cite_33 proposed a basic pipeline for deep video re-ID. First, frame features are extracted by a convolutional neural network. Then a recurrent layer is applied to incorporate temporal context information into each frame. Finally, temporal average pooling is adopted to obtain the video representation. Wu et al. @cite_27 further proposed a temporal convolutional subnet to extract local motion information. These methods verify that the temporal information of a video can help to identify the person. However, because they treat each frame of the video equally, frames with partial occlusion will distort the video representation.
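The basic pipeline can be summarized in a few lines of PyTorch; the sketch below is a toy version with assumed layer sizes and a stand-in frame encoder, not the cited architectures. Note that the final mean over frames is exactly the equal-weight treatment criticized above.

```python
# Toy sketch of the basic deep video re-ID pipeline: per-frame CNN
# features -> recurrent layer -> temporal average pooling.
import torch
import torch.nn as nn

class BasicVideoReID(nn.Module):
    def __init__(self, feat_dim=128, n_identities=625):
        super().__init__()
        self.cnn = nn.Sequential(              # stand-in frame encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(64, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, n_identities)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)           # (B*T, 3, H, W)
        f = self.cnn(frames).view(b, t, -1)    # per-frame features
        f, _ = self.rnn(f)                     # temporal context per frame
        video_feat = f.mean(dim=1)             # temporal average pooling
        return video_feat, self.classifier(video_feat)

x = torch.randn(2, 8, 3, 64, 32)               # 2 clips of 8 frames each
feat, logits = BasicVideoReID()(x)
print(feat.shape, logits.shape)                # (2, 128), (2, 625)
```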
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_41", "@cite_10", "@cite_29", "@cite_21", "@cite_9", "@cite_19", "@cite_40", "@cite_27", "@cite_23", "@cite_12", "@cite_34", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2788988323", "2463071499", "2887057599", "2473702307" ], "abstract": [ "In this paper, we propose a novel feature learning framework for video person re-identification (re-ID). The proposed framework largely aims to exploit the adequate temporal information of video sequences and tackle the poor spatial alignment of moving pedestrians. More specifically, for exploiting the temporal information, we design a temporal residual learning (TRL) module to simultaneously extract the generic and specific features of consecutive frames. The TRL module is equipped with two bi-directional LSTM (BiLSTM), which are, respectively, responsible to describe a moving person in different aspects, providing complementary information for better feature representations. To deal with the poor spatial alignment in video re-ID data sets, we propose a spatial-temporal transformer network (ST2N) module. Transformation parameters in the ST2N module are learned by leveraging the high-level semantic information of the current frame as well as the temporal context knowledge from other frames. The proposed ST2N module with less learnable parameters allows effective person alignments under significant appearance changes. Extensive experimental results on the large-scale MARS, PRID2011, ILIDS-VID, and SDU-VID data sets demonstrate that the proposed method achieves consistently superior performance and outperforms most of the very recent state-of-the-art methods.", "In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all timesteps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer, and temporal pooling layer, are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information which is useful for video re-identification. Experiments are conduced on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.", "Video-based person re-identification (re-id) is a central application in surveillance systems with significant concern in security. Matching persons across disjoint camera views in their video fragments is inherently challenging due to the large visual variations and uncontrolled frame rates. There are two steps crucial to person re-id, namely discriminative feature learning and metric learning. However, existing approaches consider the two steps independently, and they do not make full use of the temporal and spatial information in videos. In this paper, we propose a Siamese attention architecture that jointly learns spatio-temporal video representations and their similarity metrics. The network extracts local convolutional features from regions of each frame, and enhance their discriminative capability by focusing on distinct regions when measuring the similarity with another pedestrian video. 
The attention mechanism is embedded into spatial gated recurrent units to selectively propagate relevant features and memorize their spatial dependencies through the network. The model essentially learns which parts (where) from which frames (when) are relevant and distinctive for matching persons and attaches higher importance therein. The proposed Siamese model is end-to-end trainable to jointly learn comparable hidden representations for paired pedestrian videos and their similarity value. Extensive experiments on three benchmark datasets show the effectiveness of each component of the proposed deep network while outperforming state-of-the-art methods.", "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These low-level visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of pedestrian sequence provide important clues to generate the contents of occluded parts. With the Spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
To handle partial occlusion, attention-based approaches are gaining popularity. Zhou et al. @cite_41 proposed an RNN temporal attention mechanism to select the most discriminative frames from a video. Liu et al. @cite_9 used a convolutional subnet to predict a quality score for each frame of a video. Xu et al. @cite_23 presented a spatial and temporal attention pooling network, where the spatial attention pooling layer selected discriminative regions from each frame and the temporal attention pooling layer selected informative frames in the sequence. Similarly, Li et al. @cite_20 used multiple spatial attention modules to localize distinctive body parts of a person, and pooled these extracted local features across time with temporal attention.
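The common core of these methods, temporal attention pooling, can be sketched as follows (illustrative layer sizes; none of the cited networks is reproduced exactly): a learned per-frame score replaces the uniform average, so occluded frames can be down-weighted.

```python
# Sketch of temporal attention pooling for video re-ID.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)       # per-frame quality score

    def forward(self, frame_feats):                # (B, T, D)
        scores = self.scorer(frame_feats)          # (B, T, 1)
        weights = torch.softmax(scores, dim=1)     # normalize over time
        return (weights * frame_feats).sum(dim=1)  # weighted sum -> (B, D)

pool = TemporalAttentionPool()
video_feat = pool(torch.randn(2, 8, 128))
print(video_feat.shape)                            # torch.Size([2, 128])
```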
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_20", "@cite_23" ], "mid": [ "2963481014", "2808203533", "2964105113", "2963736028" ], "abstract": [ "In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3 and 46.0 in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively.", "As characterizing videos simultaneously from spatial and temporal cues has been shown crucial for the video analysis, the combination of convolutional neural networks and recurrent neural networks, i.e., recurrent convolution networks (RCNs), should be a native framework for learning the spatio-temporal video features. In this paper, we develop a novel sequential vector of locally aggregated descriptor (VLAD) layer, named SeqVLAD, to combine a trainable VLAD encoding process and the RCNs architecture into a whole framework. In particular, sequential convolutional feature maps extracted from successive video frames are fed into the RCNs to learn soft spatio-temporal assignment parameters, so as to aggregate not only detailed spatial information in separate video frames but also fine motion information in successive video frames. Moreover, we improve the gated recurrent unit (GRU) of RCNs by sharing the input-to-hidden parameters and propose an improved GRU-RCN architecture named shared GRU-RCN (SGRU-RCN). Thus, our SGRU-RCN has a fewer parameters and a less possibility of overfitting. In experiments, we evaluate SeqVLAD with the tasks of video captioning and video action recognition. Experimental results on Microsoft Research Video Description Corpus, Montreal Video Annotation Dataset, UCF101, and HMDB51 demonstrate the effectiveness and good performance of our method.", "In this paper, we propose to incorporate convolutional neural networks with a multi-context attention mechanism into an end-to-end framework for human pose estimation. We adopt stacked hourglass networks to generate attention maps from features at multiple resolutions with various semantics. The Conditional Random Field (CRF) is utilized to model the correlations among neighboring regions in the attention map. We further combine the holistic attention model, which focuses on the global consistency of the full human body, and the body part attention model, which focuses on detailed descriptions for different body parts. 
Hence our model has the ability to focus on different granularity from local salient regions to global semantic consistent spaces. Additionally, we design novel Hourglass Residual Units (HRUs) to increase the receptive field of the network. These units are extensions of residual units with a side branch incorporating filters with larger receptive field, hence features with various scales are learned and combined within the HRUs. The effectiveness of the proposed multi-context attention mechanism and the hourglass residual units is evaluated on two widely used human pose estimation benchmarks. Our approach outperforms all existing methods on both benchmarks over all the body parts. Code has been made publicly available.", "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics." ] }
1907.08427
2964016487
Video person re-identification (re-ID) plays an important role in surveillance video analysis. However, the performance of video re-ID degenerates severely under partial occlusion. In this paper, we propose a novel network, called Spatio-Temporal Completion network (STCnet), to explicitly handle partial occlusion problem. Different from most previous works that discard the occluded frames, STCnet can recover the appearance of the occluded parts. For one thing, the spatial structure of a pedestrian frame can be used to predict the occluded body parts from the unoccluded body parts of this frame. For another, the temporal patterns of pedestrian sequence provide important clues to generate the contents of occluded parts. With the Spatio-temporal information, STCnet can recover the appearance for the occluded parts, which could be leveraged with those unoccluded parts for more accurate video re-ID. By combining a re-ID network with STCnet, a video re-ID framework robust to partial occlusion (VRSTC) is proposed. Experiments on three challenging video re-ID databases demonstrate that the proposed approach outperforms the state-of-the-art.
Image completion aims to fill missing or masked regions in images with plausibly synthesized contents. It has many applications in photo editing, texture synthesis, and computational photography. Early works @cite_32 @cite_14 attempted to solve the problem by matching and copying background patches into the missing regions. Recently, deep learning approaches based on the Generative Adversarial Network (GAN) @cite_16 have emerged as a promising paradigm for image completion. Pathak et al. @cite_22 proposed the Context Encoder, which generates the contents of an arbitrary image region conditioned on its surroundings. It was trained with a pixel-wise reconstruction loss and an adversarial loss, which produces sharper results than training with the reconstruction loss alone. Iizuka et al. @cite_28 improved on @cite_22 by using dilated convolutions @cite_36 to handle arbitrary resolutions. In @cite_28 , global and local discriminators were introduced as adversarial losses: the global discriminator pursues global consistency of the input image, while the local discriminator encourages the generated parts to be valid. Our proposed STCnet builds on @cite_28 and extends it to exploit the temporal information of video through the proposed temporal attention module. In addition, STCnet employs a guider sub-network endowed with a re-ID cross-entropy loss to preserve the identities of the generated images.
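The overall training objective described here, pixel-wise reconstruction plus global/local adversarial terms plus an identity-preserving cross-entropy, might be sketched as below; the loss weights and module outputs are placeholder assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a completion objective with reconstruction, global and
# local adversarial terms, and an identity loss from a guider network.
import torch
import torch.nn.functional as F

def completion_loss(completed, target, mask,
                    d_global_logits, d_local_logits,
                    id_logits, id_labels,
                    w_adv=0.001, w_id=0.1):
    # Reconstruction: only the masked (occluded) region must match.
    rec = F.l1_loss(completed * mask, target * mask)
    # Adversarial: the generator tries to make both discriminators say "real".
    real = torch.ones_like(d_global_logits)
    adv = (F.binary_cross_entropy_with_logits(d_global_logits, real) +
           F.binary_cross_entropy_with_logits(d_local_logits, real))
    # Identity preservation: the guider classifies the completed frame.
    ident = F.cross_entropy(id_logits, id_labels)
    return rec + w_adv * adv + w_id * ident
```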
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_36", "@cite_32", "@cite_16" ], "mid": [ "2768098503", "2738588019", "2784649957", "2791184993" ], "abstract": [ "Image completion has achieved significant progress due to advances in generative adversarial networks (GANs). Albeit natural-looking, the synthesized contents still lack details, especially for scenes with complex structures or images with large holes. This is because there exists a gap between low-level reconstruction loss and high-level adversarial loss. To address this issue, we introduce a perceptual network to provide mid-level guidance, which measures the semantical similarity between the synthesized and original contents in a similarity-enhanced space. We conduct a detailed analysis on the effects of different losses and different levels of perceptual features in image completion, showing that there exist complementarity between adversarial training and perceptual features. By combining them together, our model can achieve nearly seamless fusion results in an end-to-end manner. Moreover, we design an effective lightweight generator architecture, which can achieve effective image inpainting with far less parameters. Evaluated on CelebA Face and Paris StreetView dataset, our proposed method significantly outperforms existing methods.", "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "We present a deep learning approach for high resolution face completion with multiple controllable attributes (e.g., male and smiling) under arbitrary masks. Face completion entails understanding both structural meaningfulness and appearance consistency locally and globally to fill in \"holes\" whose content do not appear elsewhere in an input image. It is a challenging task with the difficulty level increasing significantly with respect to high resolution, the complexity of \"holes\" and the controllable attributes of filled-in fragments. Our system addresses the challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design novel network architectures to exploit information across multiple scales effectively and efficiently. We introduce new loss functions encouraging sharp completion. 
We show that our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation with mean inference time of 0.007 seconds for images at 1024 x 1024 resolution. We also perform a pilot human study that shows our approach outperforms state-of-the-art face completion methods in terms of rank analysis. The code will be released upon publication.", "Area of image inpainting over relatively large missing regions recently advanced substantially through adaptation of dedicated deep neural networks. However, current network solutions still introduce undesired artifacts and noise to the repaired regions. We present an image inpainting method that is based on the celebrated generative adversarial network (GAN) framework. The proposed PGGAN method includes a discriminator network that combines a global GAN (G-GAN) architecture with a patchGAN approach. PGGAN first shares network layers between G-GAN and patchGAN, then splits paths to produce two adversarial losses that feed the generator network in order to capture both local continuity of image texture and pervasive global features in images. The proposed framework is evaluated extensively, and the results including comparison to recent state-of-the-art demonstrate that it achieves considerable improvements on both visual and quantitative evaluations." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
Ben-David et al. @cite_23 proposed generalization bounds for closed set domain adaptation. The bound shows that the performance of the target classifier depends on the performance of the source classifier and on the discrepancy between the source and target domains. Following this theoretical bound, many UCSDA methods @cite_26 @cite_11 @cite_3 have been proposed that attempt to minimize the discrepancy between domains. We roughly separate these methods into two categories: feature matching and instance reweighting.
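For reference, a commonly quoted form of this bound reads as follows, where $\varepsilon_S$ and $\varepsilon_T$ denote the source and target risks, $d_{\mathcal{H}\Delta\mathcal{H}}$ the $\mathcal{H}\Delta\mathcal{H}$-divergence between the two domain distributions, and $\lambda$ the combined risk of the ideal joint hypothesis (a sketch of the standard statement, not this paper's exact notation):

```latex
% For every hypothesis h in H (Ben-David et al. style bound):
\varepsilon_T(h) \;\le\; \varepsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \bigl[ \varepsilon_S(h') + \varepsilon_T(h') \bigr]
```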
{ "cite_N": [ "@cite_26", "@cite_3", "@cite_23", "@cite_11" ], "mid": [ "2611292810", "2964285681", "2798593490", "2901021011" ], "abstract": [ "In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.", "In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.", "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. 
This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.", "Unsupervised domain adaptation aims to mitigate the domain shift when transferring knowledge from a supervised source domain to an unsupervised target domain. Adversarial Feature Alignment has been successfully explored to minimize the domain discrepancy. However, existing methods are usually struggling to optimize mixed learning objectives and vulnerable to negative transfer when two domains do not share the identical label space. In this paper, we empirically reveal that the erratic discrimination of target domain mainly reflects in its much lower feature norm value with respect to that of the source domain. We present a non-parametric Adaptive Feature Norm AFN approach, which is independent of the association between label spaces of the two domains. We demonstrate that adapting feature norms of source and target domains to achieve equilibrium over a large range of values can result in significant domain transfer gains. Without bells and whistles but a few lines of code, our method largely lifts the discrimination of target domain (23.7 from the Source Only in VisDA2017) and achieves the new state of the art under the vanilla setting. Furthermore, as our approach does not require to deliberately align the feature distributions, it is robust to negative transfer and can outperform the existing approaches under the partial setting by an extremely large margin (9.8 on Office-Home and 14.1 on VisDA2017). Code is available at this https URL. We are responsible for the reproducibility of our method." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
Feature matching aims to reduce the distribution discrepancy by learning a new feature representation. Transfer Component Analysis (TCA) @cite_35 learns a new feature space in which distributions are matched by employing the maximum mean discrepancy (MMD) @cite_29. Joint Distribution Adaptation (JDA) @cite_14 improves TCA by jointly matching the marginal and conditional distributions. Adaptation Regularization based Transfer Learning (ARTL) @cite_20 adds a manifold regularization term @cite_18 to capture the geometric relations between the domains while matching their distributions. Joint Geometrical and Statistical Alignment (JGSA) @cite_6 considers not only the distribution discrepancy but also the geometric shift. Recent advances show that deep networks can be successfully applied to closed set domain adaptation tasks. The Deep Adaptation Network (DAN) @cite_4 introduces three adaptation layers for matching distributions and applies the multiple-kernel MMD (MK-MMD) @cite_21 to adapt deep representations. Wasserstein Distance Guided Representation Learning (WDGRL) @cite_17 minimizes the distribution discrepancy by employing the Wasserstein distance in neural networks.
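To make the matching criterion concrete, the following is a minimal self-contained sketch of the empirical MMD with a Gaussian kernel. It is our own illustration rather than code from any cited paper; the bandwidth and the toy data are arbitrary assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise squared distances between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(Xs, Xt, bandwidth=1.0):
    """Biased empirical estimate of the squared MMD between two samples."""
    Kss = gaussian_kernel(Xs, Xs, bandwidth)
    Ktt = gaussian_kernel(Xt, Xt, bandwidth)
    Kst = gaussian_kernel(Xs, Xt, bandwidth)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

# Toy example: a mean-shifted target domain yields a clearly positive MMD.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 5))
Xt = rng.normal(0.8, 1.0, size=(200, 5))
print(mmd2(Xs, Xt))        # noticeably positive under the domain shift
print(mmd2(Xs, Xs[::-1]))  # essentially zero for identically distributed samples
```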
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_4", "@cite_29", "@cite_21", "@cite_6", "@cite_20", "@cite_17" ], "mid": [ "2963217615", "2592141621", "919364087", "2172156198" ], "abstract": [ "The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w. r. t. parameter changes in a certain interval. The source code of the experiments is publicly available.", "The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. 
In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval. The source code of the experiments is publicly available.", "Cascaded subspace learning scheme is used for matching visible against thermal faces.Whitening transform, factor analysis and common discriminant analysis are employed.Cross-database evaluation is adopted to convey the effectiveness of the approach. Matching thermal (THM) face images against visible (VIS) face images poses a significant challenge to automated face recognition systems. In this work, we introduce a Heterogeneous Face Recognition (HFR) matching framework, which uses multiple sets of subspaces generated by sampling patches from VIS and THM face images and subjecting them to a sequence of transformations. In the training phase of the proposed scheme, face images from VIS and THM are subjected to three different filters separately and then tessellated into patches. Each patch is represented by either a Pyramid Scale Invariant Feature Transform (PSIFT) or Histograms of Principal Oriented Gradients (HPOG). Then, a cascaded subspace learning process consisting of whitening transformation, factor analysis, and common discriminant analysis is used to construct multiple common subspaces between VIS and THM facial images. During the testing phase, the projected feature vectors from individual subspaces are concatenated to form a final feature vector. Nearest Neighbor (NN) classifier is used to compare feature vectors and the resulting scores corresponding to three filtered images are combined via the sum-rule. The proposed face matching algorithm is evaluated on two multispectral face datasets and is shown to achieve very promising results.", "This paper proposes a new approach for the discovery of common patterns in a small set of images by region matching. The issues in feature robustness, matching robustness and noise artifact are addressed to delve into the potential of using regions as the basic matching unit. We novelly employ the many-to-many (M2M) matching strategy, specifically with the Earth Mover's Distance (EMD), to increase resilience towards the structural inconsistency from improper region segmentation. However, the matching pattern of M2M is dispersed and unregulated in nature, leading to the challenges of mining a common pattern while identifying the underlying transformation. To avoid analysis on unregulated matching, we propose localized matching for the collaborative mining of common patterns from multiple images. The patterns are refined iteratively using the expectation-maximization algorithm by taking advantage of the ''crowding'' phenomenon in the EMD flows. Experimental results show that our approach can handle images with significant image noise and background clutter. To pinpoint the potential of Common Pattern Discovery (CPD), we further use image retrieval as an example to show the application of CPD for pattern learning in relevance feedback." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
Instance reweighting methods reduce the distribution discrepancy by weighting the source samples. Kernel Mean Matching (KMM) @cite_25 defines the weights as the density ratio between the target domain and the source domain. @cite_22 provided a theoretical analysis of such importance reweighting methods. However, when the domain discrepancy is substantially large, a large number of informative source samples are down-weighted, resulting in a loss of effective information.
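As a rough illustration of the reweighting idea, the KMM objective can be minimized with a simple projected-gradient loop, as sketched below. This is our own simplification, not the cited implementation: the bandwidth and box bound B are placeholders, and the usual constraint on the sum of the weights is omitted.

```python
import numpy as np

def gaussian_kernel(X, Y, bw):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def kmm_weights(Xs, Xt, bw=1.0, B=10.0, steps=2000, lr=1e-3):
    """KMM sketch: minimize ||mean_w phi(Xs) - mean phi(Xt)||^2 over box-
    constrained weights 0 <= w <= B via projected gradient descent."""
    ns, nt = len(Xs), len(Xt)
    K = gaussian_kernel(Xs, Xs, bw)                              # (ns, ns)
    kappa = (ns / nt) * gaussian_kernel(Xs, Xt, bw).sum(axis=1)  # (ns,)
    w = np.ones(ns)
    for _ in range(steps):
        grad = K @ w - kappa          # gradient of 0.5 w'Kw - kappa'w
        w = np.clip(w - lr * grad, 0.0, B)   # projection onto the box
    return w

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(150, 2))
Xt = rng.normal(1.0, 1.0, size=(150, 2))
w = kmm_weights(Xs, Xt)
# Source points closer to the target cloud should receive larger weights,
# so the correlation below should come out positive.
print(np.corrcoef(w, -((Xs - 1.0) ** 2).sum(1))[0, 1])
```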
{ "cite_N": [ "@cite_22", "@cite_25" ], "mid": [ "2611292810", "2964285681", "2108018401", "1848260265" ], "abstract": [ "In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.", "In domain adaptation, maximum mean discrepancy (MMD) has been widely adopted as a discrepancy metric between the distributions of source and target domains. However, existing MMD-based domain adaptation methods generally ignore the changes of class prior distributions, i.e., class weight bias across domains. This remains an open problem but ubiquitous for domain adaptation, which can be caused by changes in sample selection criteria and application scenarios. We show that MMD cannot account for class weight bias and results in degraded domain adaptation performance. To address this issue, a weighted MMD model is proposed in this paper. Specifically, we introduce class-specific auxiliary weights into the original MMD for exploiting the class prior probability on source and target domains, whose challenge lies in the fact that the class label in target domain is unavailable. To account for it, our proposed weighted MMD model is defined by introducing an auxiliary weight for each class in the source domain, and a classification EM algorithm is suggested by alternating between assigning the pseudo-labels, estimating auxiliary weights and updating model parameters. Extensive experiments demonstrate the superiority of our weighted MMD over conventional MMD for domain adaptation.", "In many important machine learning applications, the source distribution used to estimate a probabilistic classifier differs from the target distribution on which the classifier will be used to make predictions. Due to its asymptotic properties, sample reweighted empirical loss minimization is a commonly employed technique to deal with this difference. However, given finite amounts of labeled source data, this technique suffers from significant estimation errors in settings with large sample selection bias. We develop a framework for learning a robust bias-aware (RBA) probabilistic classifier that adapts to different sample selection biases using a minimax estimation formulation. 
Our approach requires only accurate estimates of statistics under the source distribution and is otherwise as robust as possible to unknown properties of the conditional label distribution, except when explicit generalization assumptions are incorporated. We demonstrate the behavior and effectiveness of our approach on binary classification tasks.", "We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure. We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
When the source and target domains share the same distribution on the known classes, open set domain adaptation reduces to open set recognition. A common approach to open set recognition relies on threshold-based classification strategies @cite_30: establishing a threshold on the similarity score rejects samples that lie far from the training samples. The Open-Set Nearest Neighbor (OSNN) classifier @cite_36 recognizes whether a sample is from an unknown class by comparing the ratio of its similarity scores to the two most similar classes against a threshold. Another trend relies on modifying support vector machines (SVMs) @cite_5 @cite_39 @cite_16. The open set SVM (OSVM) @cite_16 uses a multi-class SVM as a basis to learn an unnormalized posterior probability, which is then used to reject unknown samples.
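The ratio test used by OSNN can be sketched in a few lines; the threshold value and the toy data below are illustrative assumptions, not the settings of the cited paper.

```python
import numpy as np

def osnn_predict(x, X_train, y_train, threshold=0.7):
    """Open-set nearest-neighbor ratio test (sketch). Returns a class label,
    or -1 ('unknown') when the two nearest classes are about equally close."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argmin(d)]
    d_best = d[y_train == nearest].min()
    d_other = d[y_train != nearest].min()  # closest sample of any other class
    ratio = d_best / d_other               # small ratio => confident known class
    return nearest if ratio <= threshold else -1

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
print(osnn_predict(np.array([0.1, 0.0]), X_train, y_train))   # 0  (known class)
print(osnn_predict(np.array([-3.0, 6.0]), X_train, y_train))  # -1 (unknown)
```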
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_39", "@cite_5", "@cite_16" ], "mid": [ "2798593490", "2951103356", "2248269543", "1981658663" ], "abstract": [ "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.", "A key topic in classification is the accuracy loss produced when the data distribution in the training (source) domain differs from that in the testing (target) domain. This is being recognized as a very relevant problem for many computer vision tasks such as image classification, object detection, and object category recognition. In this paper, we present a novel domain adaptation method that leverages multiple target domains (or sub-domains) in a hierarchical adaptation tree. The core idea is to exploit the commonalities and differences of the jointly considered target domains. Given the relevance of structural SVM (SSVM) classifiers, we apply our idea to the adaptive SSVM (A-SSVM), which only requires the target domain samples together with the existing source-domain classifier for performing the desired adaptation. Altogether, we term our proposal as hierarchical A-SSVM (HA-SSVM). As proof of concept we use HA-SSVM for pedestrian detection and object category recognition. In the former we apply HA-SSVM to the deformable part-based model (DPM) while in the latter HA-SSVM is applied to multi-category classifiers. In both cases, we show how HA-SSVM is effective in increasing the detection recognition accuracy with respect to adaptation strategies that ignore the structure of the target data. Since, the sub-domains of the target data are not always known a priori, we shown how HA-SSVM can incorporate sub-domain structure discovery for object category recognition.", "In this paper, we propose a novel multiclass classifier for the open-set recognition scenario. This scenario is the one in which there are no a priori training samples for some classes that might appear during testing. Usually, many applications are inherently open set. Consequently, successful closed-set solutions in the literature are not always suitable for real-world recognition problems. The proposed open-set classifier extends upon the Nearest-Neighbor (NN) classifier. Nearest neighbors are simple, parameter independent, multiclass, and widely used for closed-set problems. 
The proposed Open-Set NN (OSNN) method incorporates the ability of recognizing samples belonging to classes that are unknown at training time, being suitable for open-set recognition. In addition, we explore evaluation measures for open-set problems, properly measuring the resilience of methods to unknown classes during testing. For validation, we consider large freely-available benchmarks with different open-set recognition regimes and demonstrate that the proposed OSNN significantly outperforms their counterparts in the literature.", "In this paper, we propose a new framework called domain adaptation machine (DAM) for the multiple source domain adaption problem. Under this framework, we learn a robust decision function (referred to as target classifier) for label prediction of instances from the target domain by leveraging a set of base classifiers which are prelearned by using labeled instances either from the source domains or from the source domains and the target domain. With the base classifiers, we propose a new domain-dependent regularizer based on smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant base classifiers on the unlabeled instances from the target domain. This newly proposed regularizer can be readily incorporated into many kernel methods (e.g., support vector machines (SVM), support vector regression, and least-squares SVM (LS-SVM)). For domain adaptation, we also develop two new domain adaptation methods referred to as FastDAM and UniverDAM. In FastDAM, we introduce our proposed domain-dependent regularizer into LS-SVM as well as employ a sparsity regularizer to learn a sparse target classifier with the support vectors only from the target domain, which thus makes the label prediction on any test instance very fast. In UniverDAM, we additionally make use of the instances from the source domains as Universum to further enhance the generalization ability of the target classifier. We evaluate our two methods on the challenging TRECIVD 2005 dataset for the large-scale video concept detection task as well as on the 20 newsgroups and email spam datasets for document retrieval. Comprehensive experiments demonstrate that FastDAM and UniverDAM outperform the existing multiple source domain adaptation methods for the two applications." ] }
1907.08375
2962785319
Unsupervised domain adaptation for classification tasks has achieved great progress in leveraging the knowledge in a labeled (source) domain to improve the task performance in an unlabeled (target) domain by mitigating the effect of distribution discrepancy. However, most existing methods can only handle unsupervised closed set domain adaptation (UCSDA), where the source and target domains share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that the source domain does not have. This study is the first to give the generalization bound of open set domain adaptation through theoretically investigating the risk of the target classifier on the unknown classes. The proposed generalization bound for open set domain adaptation has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. According to this generalization bound, we propose a novel and theoretically guided unsupervised open set domain adaptation method: Distribution Alignment with Open Difference (DAOD), which is based on the structural risk minimization principle and open set difference regularization. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.
The open set domain adaptation problem was introduced together with the Assign-and-Transform-Iteratively (ATI- @math ) method @cite_41. Using the @math distance between each target sample and the center of each source class, ATI- @math constructs a constrained integer program to recognize the unknown target samples @math, and then learns a linear transformation to match the source domain and the target domain excluding @math. However, ATI- @math requires access to unknown source samples, which are unavailable in our setting. Recently, a deep learning method, Open Set Back Propagation (OSBP) @cite_19, has been proposed. OSBP relies on an adversarial neural network with a binary cross-entropy loss to estimate the probability that a target sample belongs to an unknown class, and then uses this estimated probability to separate out unknown target samples. However, we have not found any work that establishes a generalization bound for open set domain adaptation; in this paper, we fill this gap in open set domain adaptation theory.
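The adversarial objective of OSBP can be sketched schematically as follows. This is not the authors' code: the network sizes, optimizer, and hyperparameters are placeholder assumptions, and only the core boundary loss (the classifier pulls the unknown-class probability toward t while the gradient-reversed generator pushes it away) is reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 3      # number of known source classes; output index K plays "unknown"
t = 0.5    # boundary for the unknown-class probability, as in OSBP

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad   # the generator maximizes what the classifier minimizes

G = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
C = nn.Linear(32, K + 1)
opt = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=1e-3)

def step(xs, ys, xt):
    # Standard classification loss on labeled source samples.
    loss_src = F.cross_entropy(C(G(xs)), ys)
    # Adversarial loss on unlabeled target samples: C fits p(unknown) to t,
    # while G, seen through the reversed gradient, drives it away from t.
    p_unk = F.softmax(C(GradReverse.apply(G(xt))), dim=1)[:, K]
    p_unk = p_unk.clamp(1e-6, 1 - 1e-6)
    loss_adv = -(t * torch.log(p_unk) + (1 - t) * torch.log(1 - p_unk)).mean()
    opt.zero_grad()
    (loss_src + loss_adv).backward()
    opt.step()

xs, ys, xt = torch.randn(64, 10), torch.randint(0, K, (64,)), torch.randn(64, 10)
for _ in range(100):
    step(xs, ys, xt)
```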
{ "cite_N": [ "@cite_41", "@cite_19" ], "mid": [ "2798593490", "2962997028", "1822439997", "2207593006" ], "abstract": [ "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.", "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function @math in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the joint feature label space distributions of the two domain @math and @math . We propose a solution of this problem with optimal transport, that allows to recover an estimated target @math by optimizing simultaneously the optimal coupling and @math . We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.", "Recent domain adaptation methods successfully learn cross-domain transforms to map points between source and target domains. Yet, these methods are either restricted to a single training domain, or assume that the separation into source domains is known a priori. However, most available training data contains multiple unknown domains. In this paper, we present both a novel domain transform mixture model which outperforms a single transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains. Our discovery method is based on a novel hierarchical clustering technique that uses available object category information to constrain the set of feasible domain separations. 
To illustrate the effectiveness of our approach we present experiments on two commonly available image datasets with and without known domain labels: in both cases our method outperforms baseline techniques which use no domain adaptation or domain adaptation methods that presume a single underlying domain shift.", "In many real-world applications, the domain of model learning (referred as source domain) is usually inconsistent with or even different from the domain of testing (referred as target domain), which makes the learnt model degenerate in target domain, i.e., the test domain. To alleviate the discrepancy between source and target domains, we propose a domain adaptation method, named as Bi-shifting Auto-Encoder network (BAE). The proposed BAE attempts to shift source domain samples to target domain, and also shift the target domain samples to source domain. The non-linear transformation of BAE ensures the feasibility of shifting between domains, and the distribution consistency between the shifted domain and the desirable domain is constrained by sparse reconstruction between them. As a result, the shifted source domain is supervised and follows similar distribution as target domain. Therefore, any supervised method can be applied on the shifted source domain to train a classifier for classification in target domain. The proposed method is evaluated on three domain adaptation scenarios of face recognition, i.e., domain adaptation across view angle, ethnicity, and imaging sensor, and the promising results demonstrate that our proposed BAE can shift samples between domains and thus effectively deal with the domain discrepancy." ] }
1907.08307
2963457350
The term Neural Architecture Search (NAS) refers to the automatic optimization of network architectures for a new, previously unknown task. Since testing an architecture is computationally very expensive, many optimizers need days or even weeks to find suitable architectures. However, this search time can be significantly reduced if knowledge from previous searches on different tasks is reused. In this work, we propose a generally applicable framework that introduces only minor changes to existing optimizers to leverage this feature. As an example, we select an existing optimizer and demonstrate the complexity of the integration of the framework as well as its impact. In experiments on CIFAR-10 and CIFAR-100, we observe a reduction in the search time from 200 to only 6 GPU days, a speed up by a factor of 33. In addition, we observe new records of 1.99 and 14.06 for NAS optimizers on the CIFAR benchmarks, respectively. In a separate study, we analyze the impact of the amount of source and target data. Empirically, we demonstrate that the proposed framework generally gives better results and, in the worst case, is just as good as the unmodified optimizer.
Neural Architecture Search (NAS), the structural optimization of neural networks, has been approached with a variety of optimization techniques. These include reinforcement learning @cite_3 @cite_0 @cite_4 @cite_13 @cite_2 @cite_27 @cite_11, evolutionary algorithms @cite_17 @cite_33 @cite_34 @cite_15, and surrogate model-based optimization @cite_21 @cite_16. These techniques have advanced considerably through the idea of sharing weights across the different architectures that are sampled during the search process @cite_18 @cite_22 @cite_32 @cite_23 @cite_30, instead of training each of them from scratch. For a detailed overview, we refer to a recent survey @cite_12.
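To illustrate the weight-sharing idea in isolation, the toy sketch below keeps one shared weight matrix per (layer, operation) pair, so every sampled architecture trains and reuses the same parameters. It is a deliberately minimal linear-network caricature of our own making, not any cited optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
OPS, L, D = ["op_a", "op_b", "skip"], 3, 4
# One shared weight matrix per (layer, operation); every sampled architecture
# that picks the same operation at the same layer reuses -- and updates -- it.
shared = {(l, op): 0.1 * rng.normal(size=(D, D))
          for l in range(L) for op in OPS if op != "skip"}

def sample_architecture():
    return [rng.choice(OPS) for _ in range(L)]

def loss_and_grads(arch, x, y):
    acts = [x]
    for l, op in enumerate(arch):            # linear forward pass
        acts.append(acts[-1] if op == "skip" else shared[(l, op)] @ acts[-1])
    err = acts[-1] - y
    loss, grads, delta = 0.5 * err @ err, {}, err
    for l in reversed(range(L)):             # backprop through the chosen ops
        if arch[l] != "skip":
            grads[(l, arch[l])] = np.outer(delta, acts[l])
            delta = shared[(l, arch[l])].T @ delta
    return loss, grads

x, y = rng.normal(size=D), rng.normal(size=D)
for _ in range(500):                         # alternate: sample, then update
    arch = sample_architecture()
    _, grads = loss_and_grads(arch, x, y)
    for k, g in grads.items():
        shared[k] -= 0.05 * g
# Rank candidate architectures with the shared, already-trained weights.
cands = [sample_architecture() for _ in range(10)]
print(min(cands, key=lambda a: loss_and_grads(a, x, y)[0]))
```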
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_3", "@cite_2", "@cite_15", "@cite_18", "@cite_4", "@cite_21", "@cite_23", "@cite_17", "@cite_32", "@cite_27", "@cite_16", "@cite_34", "@cite_12", "@cite_33", "@cite_0", "@cite_13", "@cite_11" ], "mid": [ "2957020430", "2902251695", "2910554758", "2888429796" ], "abstract": [ "We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS which assumes a fixed hardware design and explores the neural architecture search space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using the reinforcement learning to maximize the expected accuracy together with the hardware efficiency. Experiments on ImageNet show that our co-exploration NAS can find the neural architectures and associated hardware design with the same accuracy, 35.24 higher throughput, 54.05 higher energy efficiency and 136x reduced search time, compared with the state-of-the-art hardware-aware NAS.", "Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. @math GPU hours) makes it difficult to search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present that can learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08 test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6 @math fewer parameters. On ImageNet, our model achieves 3.1 better top-1 accuracy than MobileNetV2, while being 1.2 @math faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. 
latency) and provide insights for efficient CNN architecture design.", "Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.", "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods. Furthermore, the computational resource is 10 times fewer than typical methods based on RL and EA." ] }
1907.08116
2956997324
Designing fast and reliable distributed consensus protocols is a key to enabling mission-critical and real-time controls of industrial Internet of Things (IIoT) nodes communicating over wireless links. However, chasing both low-latency and reliability of a consensus protocol at once is a challenging task. The problem is even aggravated under wireless connectivity that is slower and less reliable, compared to wired connections presumed in traditional consensus protocols. To tackle this issue, we investigate fundamental relationships between consensus latency and reliability under wireless connectivity, and thereby co-design communication and consensus protocols for low-latency and reliable IIoT systems. Specifically, we propose a novel communication-efficient distributed consensus protocol, termed Random Representative Consensus (R2C), and show its effectiveness under gossip and broadcast communication protocols. To this end, we derive closed-form end-to-end (E2E) latency expression of R2C that guarantees a target reliability, and compare this with a baseline consensus protocol, referred to as Referendum Consensus (RC).
Nonetheless, most of the aforementioned algorithms postulate that nodes communicate over fast and reliable wired links. To support large-scale systems, wireless connectivity is essential for consensus operations, and its impact on consensus reliability and latency should be carefully examined. On this account, wireless distributed consensus protocols have recently been studied in several works @cite_31 @cite_29 @cite_15 @cite_23 @cite_0 @cite_18 @cite_25 @cite_20. For instance, a Hashgraph-motivated wireless distributed consensus protocol has been introduced in @cite_31 in the context of distributed wireless spectrum access applications. For power grid applications, an Ethereum-based smart contract structure and its operation protocol have been studied in @cite_9.
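As a toy illustration of why the number of consensus participants drives the end-to-end latency under unreliable links, the Monte Carlo sketch below counts broadcast rounds until all participants have received a message. It is our own caricature with i.i.d. link erasures, not the R2C or RC protocol itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def rounds_until_all_receive(n_nodes, p_err, max_rounds=1000):
    """Broadcast rounds until every participant has received the message,
    with i.i.d. per-link erasure probability p_err (Monte Carlo sketch)."""
    received = np.zeros(n_nodes, dtype=bool)
    for r in range(1, max_rounds + 1):
        received |= rng.random(n_nodes) > p_err
        if received.all():
            return r
    return max_rounds

def mean_latency(n_participants, p_err, trials=2000):
    return np.mean([rounds_until_all_receive(n_participants, p_err)
                    for _ in range(trials)])

# Collecting votes from fewer, randomly chosen representatives (an R2C-like
# scheme) needs fewer broadcast rounds than a full referendum over all nodes.
for k in (100, 20, 5):
    print(k, mean_latency(k, p_err=0.3))
```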
{ "cite_N": [ "@cite_18", "@cite_15", "@cite_29", "@cite_9", "@cite_0", "@cite_23", "@cite_31", "@cite_25", "@cite_20" ], "mid": [ "2151135252", "2152121970", "2161586373", "2144381048" ], "abstract": [ "Reaching consensus in a network is an important problem in control, estimation, and resource allocation. While many algorithms focus on computing the exact average of the initial values in the network, in some cases it is more important for nodes to reach a consensus quickly. In a distributed system establishing two-way communication may also be difficult or unreliable. In this paper, the effect of the wireless medium on simple consensus protocol is explored. In a wireless environment, a node's transmission is a broadcast to all nodes which can hear it, and due to signal propagation effects, the neighborhood size may change with time. A class of non-sum preserving algorithms involving unidirectional broadcasting is extended to a time-varying connection model. This algorithm converges almost surely and its expected consensus value is the true average. A simple bound is given on the convergence time.", "We develop and analyze low-complexity cooperative diversity protocols that combat fading induced by multipath propagation in wireless networks. The underlying techniques exploit space diversity available through cooperating terminals' relaying signals for one another. We outline several strategies employed by the cooperating radios, including fixed relaying schemes such as amplify-and-forward and decode-and-forward, selection relaying schemes that adapt based upon channel measurements between the cooperating terminals, and incremental relaying schemes that adapt based upon limited feedback from the destination terminal. We develop performance characterizations in terms of outage events and associated outage probabilities, which measure robustness of the transmissions to fading, focusing on the high signal-to-noise ratio (SNR) regime. Except for fixed decode-and-forward, all of our cooperative diversity protocols are efficient in the sense that they achieve full diversity (i.e., second-order diversity in the case of two terminals), and, moreover, are close to optimum (within 1.5 dB) in certain regimes. Thus, using distributed antennas, we can provide the powerful benefits of space diversity without need for physical arrays, though at a loss of spectral efficiency due to half-duplex operation and possibly at the cost of additional receive hardware. Applicable to any wireless setting, including cellular or ad hoc networks-wherever space constraints preclude the use of physical arrays-the performance characterizations reveal that large power or energy savings result from the use of these protocols.", "A fundamental problem in large scale wireless networks is the energy efficient broadcast of source messages to the whole network. The energy consumption increases as the network size grows, and the optimization of broadcast efficiency becomes more important. In this paper, we study the optimal power allocation problem for cooperative broadcast in dense large-scale networks. In the considered cooperation protocol, a single source initiates the transmission and the rest of the nodes retransmit the source message if they have decoded it reliably. Each node is allocated an-orthogonal channel and the nodes improve their receive signal-to-noise ratio (SNR), hence the energy efficiency, by maximal-ratio combining the receptions of the same packet from different transmitters. 
We assume that the decoding of the source message is correct as long as the receive SNR exceeds a predetermined threshold. Under the optimal cooperative broadcasting, the transmission order (i.e., the schedule) and the transmission powers of the source and the relays are designed so that every node receives the source message reliably and the total power consumption is minimized. In general, finding the best scheduling in cooperative broadcast is known to be an NP-complete problem. In this paper, we show that the optimal scheduling problem can be solved for dense networks, which we approximate as a continuum of nodes. Under the continuum model, we derive the optimal scheduling and the optimal power density. Furthermore, we propose low-complexity, distributed and power efficient broadcasting schemes and compare their power consumptions with those-of-a traditional noncooperative multihop transmission.", "Recently, several research contributions have justified that wireless communication is not only a security burden. Its unpredictable and erratic nature can also be turned against an adversary and used to augment conventional security protocols, especially key agreement. In this paper, we are inspired by promising studies on such key agreement schemes, yet aim for releasing some of their limiting assumptions. We demonstrate the feasibility of our scheme within performance-limited wireless sensor networks. The central idea is to use the reciprocity of the wireless channel response between two transceivers as a correlated random variable. Doing so over several frequencies results in a random vector from which a shared secret is extracted. By employing error correction techniques, we are able to control the trade-off between the amount of secrecy and the robustness of our key agreement protocol. To evaluate its applicability, the protocol is implemented on MicaZ sensor nodes and analyzed in indoor environments. Further, these experiments provide insights into realistic channel behavior, available information entropy, and show a high rate of successful key agreements, up to 95 ." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
The sample complexity of OP transforms @math has been largely pinned down by the compressed sensing literature. For example, suppose that @math is any orthogonal and sufficiently flat matrix, in the sense that none of the entries of @math are too large. Then a result of Rudelson and Vershynin (and a sharpening of their result by Bourgain) shows that @math samples suffice to establish that the matrix @math (which is made up of @math sampled rows from @math ) has the Restricted Isometry Property (RIP) @cite_0 @cite_4. Finding @math from samples of @math corresponds to the problem of finding an (approximately) @math -sparse vector @math from the linear measurements @math, which is precisely the compressed sensing problem. It is known that if @math satisfies the RIP, then this problem can be solved (for example with @math minimization) in time @math. We note that very recently a result due to B shows that this is essentially tight, in that @math queries (for a certain range of @math ) to @math are not enough to compute a @math -sparse approximation of @math @cite_16. Bounds specific to the DFT over finite fields can be found in @cite_30.
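The recovery pipeline can be sketched numerically: subsample rows of an orthonormal transform (here the DCT, chosen purely for convenience) and recover the sparse vector by l1 minimization, implemented with iterative soft thresholding. The dimensions and the regularization weight below are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, k, m = 256, 5, 80                     # dimension, sparsity, number of samples

# k-sparse coefficient vector x, observed through a subsampled orthonormal DCT.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
rows = rng.choice(n, m, replace=False)   # random sample locations
y = idct(x, norm="ortho")[rows]          # y = (F x) restricted to sampled rows

A = idct(np.eye(n), norm="ortho", axis=0)[rows]  # explicit m x n sensing matrix

def ista(A, y, lam=1e-3, steps=3000):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x_hat = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x_hat - (A.T @ (A @ x_hat - y)) / L
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x_hat

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```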
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_4", "@cite_30" ], "mid": [ "2568623927", "2598205349", "2973707709", "2949898670" ], "abstract": [ "Consider the problem of recovering an unknown signal from undersampled measurements, given the knowledge that the signal has a sparse representation in a specified dictionary @math . This problem is now understood to be well-posed and efficiently solvable under suitable assumptions on the measurements and dictionary if the number of measurements scales roughly with the sparsity level. One sufficient condition for such is the @math -restricted isometry property ( @math -RIP), which asks that the sampling matrix approximately preserve the norm of all signals which are sufficiently sparse in @math . While many classes of random matrices are known to satisfy such conditions, such matrices are not representative of the structural constraints imposed by practical sensing systems. We close this gap in the theory by demonstrating that one can subsample a fixed orthogonal matrix in such a way that the @math -RIP will hold, provided this basis is sufficiently incoherent with the sparsifying dictionary $ ...", "As a paradigm for reconstructing sparse signals using a set of under sampled measurements, compressed sensing has received much attention in recent years. In identifying the sufficient condition under which the perfect recovery of sparse signals is ensured, a property of the sensing matrix referred to as the restricted isometry property (RIP) is popularly employed. In this article, we propose the RIP based bound of the orthogonal matching pursuit (OMP) algorithm guaranteeing the exact reconstruction of sparse signals. Our proof is built on an observation that the general step of the OMP process is in essence the same as the initial step in the sense that the residual is considered as a new measurement preserving the sparsity level of an input vector. Our main conclusion is that if the restricted isometry constant δ K of the sensing matrix satisfies δ K < K - 1 K - 1 + K then the OMP algorithm can perfectly recover K(> 1)-sparse signals from measurements. We show that our bound is sharp and indeed close to the limit conjectured by Dai and Milenkovic.", "In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝn from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k logn)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k logn) ×n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying Mathematics and because of its potential applications In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, logn, 1 e, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. 
In addition, our result improves the error guarantee from O(1) to 1 + e and improves the reconstruction time from poly(n) to poly(k logn) Our second result is a randomized construction of O(kpolylog (n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement", "Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples including scalar denoisers not derived from convex penalization -- the firm shrinkage nonlinearity and the minimax nonlinearity -- and also nonscalar denoisers -- block thresholding, monotone regression, and total variation minimization. Let the variables eps = k N and delta = n N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x_0 according to y=Ax_0. Here A is an n N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve delta = delta(eps) separating successful from unsuccessful reconstruction of x_0 by AMP is given by: delta = M(eps| Denoiser), where M(eps| Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally-tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
Rauhut and Ward @cite_13 show that, for Jacobi polynomial transforms, if the evaluation points are picked according to the Chebyshev measure, then with @math random measurements the corresponding matrix has the RIP (note that Foucart and Rauhut sample the evaluation points according to the measure of orthogonality of the Jacobi polynomials, which in general is not the Chebyshev measure). This result again does not give a sublinear-time algorithm, but it was used in the result of @cite_26, which we describe below.
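A numerical sketch of this setup for the Legendre case (the Jacobi polynomials with both parameters zero): sampling under the Chebyshev measure and applying a weight proportional to (1 - x^2)^(1/4) yields a flat, nearly orthonormal sampling matrix. This is our own illustration, and the sizes are arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
n, m = 32, 4000   # number of basis polynomials, number of random samples

# Sample points from the Chebyshev measure dx / (pi * sqrt(1 - x^2)).
x = np.cos(np.pi * rng.random(m))

# Legendre polynomials normalized w.r.t. the uniform measure dx/2 on [-1, 1]:
# L_j = sqrt(2j + 1) * P_j.
P = np.stack([legendre.legval(x, np.eye(n)[j]) for j in range(n)], axis=1)
P *= np.sqrt(2 * np.arange(n) + 1)

# Preconditioning in the style of Rauhut and Ward: this weight makes the
# matrix entries uniformly bounded ("flat") under Chebyshev sampling.
w = np.sqrt(np.pi / 2) * (1 - x ** 2) ** 0.25
A = (w[:, None] * P) / np.sqrt(m)

G = A.T @ A
print(np.abs(G - np.eye(n)).max())       # near-orthonormal: shrinks as m grows
print(np.abs(w[:, None] * P).max())      # uniformly bounded entries
```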
{ "cite_N": [ "@cite_26", "@cite_13" ], "mid": [ "2141454789", "1993855803", "2949485285", "2124659530" ], "abstract": [ "We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from [email protected]?slog^4(N) random samples that are chosen independently according to the Chebyshev probability measure [email protected](x)[email protected]^-^1(1-x^2)^-^1^ ^2dx. As an efficient recovery method, @?\"1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.", "In this paper, we attempt to approximate and index a d- dimensional (d ≥ 1) spatio-temporal trajectory with a low order continuous polynomial. There are many possible ways to choose the polynomial, including (continuous)Fourier transforms, splines, non-linear regressino, etc. Some of these possiblities have indeed been studied beofre. We hypothesize that one of the best possibilities is the polynomial that minimizes the maximum deviation from the true value, which is called the minimax polynomial. Minimax approximation is particularly meaningful for indexing because in a branch-and-bound search (i.e., for finding nearest neighbours), the smaller the maximum deviation, the more pruning opportunities there exist. However, in general, among all the polynomials of the same degree, the optimal minimax polynomial is very hard to compute. However, it has been shown thta the Chebyshev approximation is almost identical to the optimal minimax polynomial, and is easy to compute [16]. Thus, in this paper, we explore how to use the Chebyshev polynomials as a basis for approximating and indexing d-dimenstional trajectories.The key analytic result of this paper is the Lower Bounding Lemma. that is, we show that the Euclidean distance between two d-dimensional trajectories is lower bounded by the weighted Euclidean distance between the two vectors of Chebyshev coefficients. this lemma is not trivial to show, and it ensures that indexing with Chebyshev cofficients aedmits no false negatives. To complement that analystic result, we conducted comprehensive experimental evaluation with real and generated 1-dimensional to 4-dimensional data sets. We compared the proposed schem with the Adaptive Piecewise Constant Approximation (APCA) scheme. Our preliminary results indicate that in all situations we tested, Chebyshev indexing dominates APCA in pruning power, I O and CPU costs.", "This paper proves that an \"old dog\", namely -- the classical Johnson-Lindenstrauss transform, \"performs new tricks\" -- it gives a novel way of preserving differential privacy. We show that if we take two databases, @math and @math , such that (i) @math is a rank-1 matrix of bounded norm and (ii) all singular values of @math and @math are sufficiently large, then multiplying either @math or @math with a vector of iid normal Gaussians yields two statistically close distributions in the sense of differential privacy. 
Furthermore, a small, deterministic and alteration of the input is enough to assert that all singular values of @math are large. We apply the Johnson-Lindenstrauss transform to the task of approximating cut-queries: the number of edges crossing a @math -cut in a graph. We show that the JL transform allows us to that preserves edge differential privacy (where two graphs are neighbors if they differ on a single edge) while adding only @math random noise to any given query (w.h.p). Comparing the additive noise of our algorithm to existing algorithms for answering cut-queries in a differentially private manner, we outperform all others on small cuts ( @math ). We also apply our technique to the task of estimating the variance of a given matrix in any given direction. The JL transform allows us to that preserves differential privacy w.r.t bounded changes (each row in the matrix can change by at most a norm-1 vector) while adding random noise of magnitude independent of the size of the matrix (w.h.p). In contrast, existing algorithms introduce an error which depends on the matrix dimensions.", "The Fast Johnson-Lindenstrauss Transform (FJLT) was recently discovered by Ailon and Chazelle as a novel technique for performing fast dimension reduction with small distortion from ed2 to ed2 in time O(max d log d,k3 ). For k in [Ω(log d), O(d1 2)] this beats time O(dk) achieved by naive multiplication by random dense matrices, an approach followed by several authors as a variant of the seminal result by Johnson and Lindenstrauss (JL) from the mid 80's. In this work we show how to significantly improve the running time to O(d log k) for k = O(d1 2−Δ), for any arbitrary small fixed Δ. This beats the better of FJLT and JL. Our analysis uses a powerful measure concentration bound due to Talagrand applied to Rademacher series in Banach spaces (sums of vectors in Banach spaces with random signs). The set of vectors used is a real embedding of dual BCH code vectors over GF(2). We also discuss the number of random bits used and reduction to e1 space. The connection between geometry and discrete coding theory discussed here is interesting in its own right and may be useful in other algorithmic applications as well." ] }
1907.08362
2963011296
In this paper we consider the following sparse recovery problem. We have query access to a vector @math such that @math is @math -sparse (or nearly @math -sparse) for some orthogonal transform @math . The goal is to output an approximation (in an @math sense) to @math in sublinear time. This problem has been well-studied in the special case that @math is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time @math . However, for transforms @math other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms---running in time @math ---for solving the sparse recovery problem for orthogonal transforms @math that arise from orthogonal polynomials. More precisely, our algorithm works for any @math that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the @math -sparse sparse recovery problem to the @math -sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials.
While these approaches can give near-optimal sample complexity, they do not give sublinear-time algorithms. In fact, if we care only about the running time and not about sample complexity, it is faster to compute @math exactly by computing @math @cite_8 . Thus, we turn our attention to sublinear-time algorithms.
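For concreteness, here is a minimal sketch (with assumed, illustrative names) of that quadratic-time baseline: if all samples are available, an orthogonal transform can be inverted exactly by one dense matrix-vector product, which is fast in absolute terms but neither sample-efficient nor sublinear-time.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 256, 5

# A generic orthogonal transform F (a random one via QR, as a stand-in).
F, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = np.zeros(n)  # s-sparse coefficient vector
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

y = F @ x       # the fully sampled signal (all n entries)
x_rec = F.T @ y # exact recovery in O(n^2) time via the transpose

print(np.allclose(x_rec, x))  # True: exact, but uses all n samples and n^2 time
```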
{ "cite_N": [ "@cite_8" ], "mid": [ "2553316781", "2473549844", "2410099853", "2089135543" ], "abstract": [ "A recent work (Wang et. al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = ∑ki=1 λiui⊗p with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below || T ||F then sublinear time is again possible. One of the main strengths of our work is empirical - in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy.", "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "In this paper we provide faster algorithms for solving the geometric median problem: given n points in d compute a point that minimizes the sum of Euclidean distances to the points. This is one of the oldest non-trivial problems in computational geometry yet despite a long history of research the previous fastest running times for computing a (1+є)-approximate geometric median were O(d· n4 3є−8 3) by Chin et. al, O(dexpє−4logє−1) by Badoiu et. al, O(nd+poly(d,є−1)) by Feldman and Langberg, and the polynomial running time of O((nd)O(1)log1 є) by Parrilo and Sturmfels and Xue and Ye. In this paper we show how to compute such an approximate geometric median in time O(ndlog3n є) and O(dє−2). While our O(dє−2) is a fairly straightforward application of stochastic subgradient descent, our O(ndlog3n є) time algorithm is a novel long step interior point method. We start with a simple O((nd)O(1)log1 є) time interior point method and show how to improve it, ultimately building an algorithm that is quite non-standard from the perspective of interior point literature. 
Our result is one of few cases of outperforming standard interior point theory. Furthermore, it is the only case we know of where interior point methods yield a nearly linear time algorithm for a canonical optimization problem that traditionally requires superlinear time.", "We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. In this paper, we show an almost matching lower bound of @math , even for @math ." ] }
There have been several works generalizing and building on the sFFT results mentioned above. One direction extends them to the multi-dimensional DFT (for example, @cite_27 @cite_9 ). Another direction applies the sFFT framework to orthogonal polynomials with similar structure. One example is Chebyshev polynomials and the Discrete Cosine Transform (DCT). It was observed in @cite_26 (also see Appendix ) that this case can be reduced to sFFT in a black-box manner, solving the sparse recovery problem for Chebyshev polynomials and the DCT. A second example of OP transforms that can essentially be reduced to the sFFT is Legendre polynomials. @cite_26 seek to recover an unknown @math -term Legendre polynomial (with maximum degree bounded by @math ), defined on @math , from samples. They give a sublinear two-phase algorithm: the first phase reduces @math -sparse-Legendre to sFFT to identify a set of candidate Legendre polynomials; the second phase uses the RIP result for BOS to produce a matrix that is used to estimate the coefficients of the candidate Legendre polynomials. We note that the setting of that work is naturally continuous, while ours is discrete.
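As an illustration of the black-box flavor of the Chebyshev/DCT reduction, the following sketch (our own, with assumed names) embeds a length- @math DCT-II into a DFT of twice the length via mirror extension; a sparse FFT could then be substituted for the dense FFT call.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
v = rng.standard_normal(n)

# Direct O(n^2) DCT-II: X_k = sum_j v_j * cos(pi * k * (j + 1/2) / n).
j, k = np.arange(n), np.arange(n)[:, None]
dct_direct = (v * np.cos(np.pi * k * (j + 0.5) / n)).sum(axis=1)

# Reduction to a DFT of length 2n: mirror-extend v, take an FFT, fix up phases.
v_ext = np.concatenate([v, v[::-1]])  # y_j = v_j and y_{2n-1-j} = v_j
Y = np.fft.fft(v_ext)[:n]
dct_via_fft = 0.5 * np.real(np.exp(-1j * np.pi * np.arange(n) / (2 * n)) * Y)

print(np.allclose(dct_direct, dct_via_fft))  # True
```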
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_26" ], "mid": [ "2141454789", "1993855803", "2147276092", "2172028873" ], "abstract": [ "We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from [email protected]?slog^4(N) random samples that are chosen independently according to the Chebyshev probability measure [email protected](x)[email protected]^-^1(1-x^2)^-^1^ ^2dx. As an efficient recovery method, @?\"1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.", "In this paper, we attempt to approximate and index a d- dimensional (d ≥ 1) spatio-temporal trajectory with a low order continuous polynomial. There are many possible ways to choose the polynomial, including (continuous)Fourier transforms, splines, non-linear regressino, etc. Some of these possiblities have indeed been studied beofre. We hypothesize that one of the best possibilities is the polynomial that minimizes the maximum deviation from the true value, which is called the minimax polynomial. Minimax approximation is particularly meaningful for indexing because in a branch-and-bound search (i.e., for finding nearest neighbours), the smaller the maximum deviation, the more pruning opportunities there exist. However, in general, among all the polynomials of the same degree, the optimal minimax polynomial is very hard to compute. However, it has been shown thta the Chebyshev approximation is almost identical to the optimal minimax polynomial, and is easy to compute [16]. Thus, in this paper, we explore how to use the Chebyshev polynomials as a basis for approximating and indexing d-dimenstional trajectories.The key analytic result of this paper is the Lower Bounding Lemma. that is, we show that the Euclidean distance between two d-dimensional trajectories is lower bounded by the weighted Euclidean distance between the two vectors of Chebyshev coefficients. this lemma is not trivial to show, and it ensures that indexing with Chebyshev cofficients aedmits no false negatives. To complement that analystic result, we conducted comprehensive experimental evaluation with real and generated 1-dimensional to 4-dimensional data sets. We compared the proposed schem with the Adaptive Piecewise Constant Approximation (APCA) scheme. Our preliminary results indicate that in all situations we tested, Chebyshev indexing dominates APCA in pruning power, I O and CPU costs.", "Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. 
More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed lscr2 lscr1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.", "We analyze a sublinear RA@?SFA (randomized algorithm for Sparse Fourier analysis) that finds a near-optimal B-term Sparse representation R for a given discrete signal S of length N, in time and space poly(B,log(N)), following the approach given in [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002]. Its time cost poly(log(N)) should be compared with the superlinear @W(NlogN) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RA@?SFA, as presented in the theoretical paper [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002], turns out to be very slow in practice. Our main result is a greatly improved and practical RA@?SFA. We introduce several new ideas and techniques that speed up the algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RA@?SFA constructs, with probability at least 1-@d, a near-optimal B-term representation R in time poly(B)log(N)log(1 @d) @e^2log(M) such that @?S-R@?\"2^2=<(1+@e)@?S-R\"o\"p\"t@?\"2^2. Furthermore, this RA@?SFA implementation already beats the FFTW for not unreasonably large N. We extend the algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N 70,000 in one dimension, and at N 900 for data on a NxN grid in two dimensions for small B signals where there is noise." ] }
@cite_18 study the multi-dimensional setting and obtain sublinear-time algorithms for more general harmonic expansions in multiple dimensions; these results complement ours. More precisely, that work shows how to use any algorithm for a univariate polynomial transform to design an algorithm for a multivariate polynomial transform in which the multivariate polynomials are products of univariate polynomials in the individual variables. Thus, our improvements for univariate polynomial transforms can be combined with @cite_18 .
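The product structure exploited by such multi-dimensional reductions can be seen in a small sketch (illustrative, with a random orthogonal matrix standing in for the univariate transform): a bivariate transform whose basis functions are products of univariate ones factors into one-dimensional transforms along each axis, i.e., a Kronecker product.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16

# Any univariate orthogonal transform F (a random orthogonal matrix as a stand-in).
F, _ = np.linalg.qr(rng.standard_normal((n, n)))

X = rng.standard_normal((n, n))  # 2-D coefficient array

# Applying the 2-D product transform = a 1-D transform along each axis ...
two_d = F @ X @ F.T
# ... which equals the Kronecker-product matrix acting on the flattened array.
via_kron = (np.kron(F, F) @ X.ravel()).reshape(n, n)

print(np.allclose(two_d, via_kron))  # True
```

This is why a one-dimensional sparse recovery routine can be bootstrapped into a multi-dimensional one for product bases.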
{ "cite_N": [ "@cite_18" ], "mid": [ "2295840581", "2016576580", "2168222442", "2034972122" ], "abstract": [ "We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following: 1. There exists an explicit n-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires 2^ Omega(n) size. 2. Any multilinear depth three circuit computing IMM_ n,d (the iterated matrix multiplication polynomial formed by multiplying d, n * n symbolic matrices) has n^ Omega(d) size. IMM_ n,d can be easily computed by a poly(n,d) sized ROABP. 3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n,d) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size n^ Omega(d) . This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models. The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field.", "In their paper on the ''chasm at depth four'', Agrawal and Vinay have shown that polynomials in m variables of degree O(m) which admit arithmetic circuits of size 2^o^(^m^) also admit arithmetic circuits of depth four and size 2^o^(^m^). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth four circuits is in a certain sense the general case. In this paper we show that smaller depth four circuits can be obtained if we start from polynomial size arithmetic circuits. For instance, we show that if the permanent of nxn matrices has circuits of size polynomial in n, then it also has depth 4 circuits of size n^O^(^n^l^o^g^n^). If the original circuit uses only integer constants of polynomial size, then the same is true for the resulting depth four circuit. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also use our techniques to reprove two results on: -the existence of nontrivial boolean circuits of constant depth for languages in LOGCFL; -reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree.", "We present algorithmic, complexity and implementation results concerning real root isolation of integer univariate polynomials using the continued fraction expansion of real algebraic numbers. One motivation is to explain the method's good performance in practice. 
We derive an expected complexity bound of [email protected]?\"B(d^6+d^[email protected]^2), where d is the polynomial degree and @t bounds the coefficient bit size, using a standard bound on the expected bit size of the integers in the continued fraction expansion, thus matching the current worst-case complexity bound for real root isolation by exact methods (Sturm, Descartes and Bernstein subdivision). Moreover, using a homothetic transformation we improve the expected complexity bound to [email protected]?\"B(d^[email protected]). We compute the multiplicities within the same complexity and extend the algorithm to non-square-free polynomials. Finally, we present an open-source C++ implementation in the algebraic library synaps, and illustrate its completeness and efficiency as compared to some other available software. For this we use polynomials with coefficient bit size up to 8000 bits and degree up to 1000.", "Classical dictionary learning algorithms (DLA) allow unicomponent signals to be processed. Due to our interest in two-dimensional (2D) motion signals, we wanted to mix the two components to provide rotation invariance. So, multicomponent frameworks are examined here. In contrast to the well-known multichannel framework, a multivariate framework is first introduced as a tool to easily solve our problem and to preserve the data structure. Within this multivariate framework, we then present sparse coding methods: multivariate orthogonal matching pursuit (M-OMP), which provides sparse approximation for multivariate signals, and multivariate DLA (M-DLA), which empirically learns the characteristic patterns (or features) that are associated to a multivariate signals set, and combines shift-invariance and online learning. Once the multivariate dictionary is learned, any signal of this considered set can be approximated sparsely. This multivariate framework is introduced to simply present the 2D rotation invariant (2DRI) case. By studying 2D motions that are acquired in bivariate real signals, we want the decompositions to be independent of the orientation of the movement execution in the 2D space. The methods are thus specified for the 2DRI case to be robust to any rotation: 2DRI-OMP and 2DRI-DLA. Shift and rotation invariant cases induce a compact learned dictionary and provide robust decomposition. As validation, our methods are applied to 2D handwritten data to extract the elementary features of this signals set, and to provide rotation invariant decomposition." ] }
Finally, there are sparse OP transform algorithms based on Prony's method. The work @cite_20 extends Prony's method to a very general setting, including Jacobi polynomials, and gives an algorithm that requires only @math queries to recover exactly @math -sparse polynomials. However, these general results work only under exact sparsity and are in general not robust to noise. There has been work extending and modifying these techniques to noisy settings (for example, @cite_11 @cite_31 ), but to the best of our knowledge the only provable noise-robust results are for either the sFFT or closely related polynomial families. We note that @cite_32 presents a Prony-like algorithm for Legendre and Gegenbauer polynomials and demonstrates empirically that it is robust to noise, although the question is not addressed theoretically.
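For background, here is a minimal sketch of the classical noiseless Prony method for sums of exponentials, the template the cited OP generalizations build on (this is the textbook method, not the algorithm of @cite_20 ): @math exact samples determine an annihilating polynomial whose roots reveal the support, after which the coefficients follow from a linear solve.

```python
import numpy as np

def prony(f, s):
    """Recover (z_k, c_k) from exact samples f[j] = sum_k c_k * z_k**j, j = 0..2s-1."""
    f = np.asarray(f, dtype=complex)
    # Annihilation: f[j+s] + sum_{l<s} p_l * f[j+l] = 0 for j = 0..s-1.
    H = np.array([f[j:j + s] for j in range(s)])  # s x s Hankel system
    p = np.linalg.solve(H, -f[s:2 * s])
    # Roots of z^s + p_{s-1} z^{s-1} + ... + p_0 are the nodes z_k.
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    # Coefficients from the (transposed) Vandermonde system V c = f.
    V = np.vander(z, N=2 * s, increasing=True).T
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return z, c

# Demo: a 3-sparse sum of complex exponentials, noiseless samples.
rng = np.random.default_rng(4)
z_true = np.exp(2j * np.pi * rng.random(3))
c_true = rng.standard_normal(3)
f = np.array([(c_true * z_true**j).sum() for j in range(6)])
z_rec, _ = prony(f, 3)
print(np.allclose(np.sort_complex(z_rec), np.sort_complex(z_true)))  # True
```

The fragility under noise mentioned above stems from the Hankel solve and root-finding steps, which can be ill-conditioned when samples are perturbed.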
{ "cite_N": [ "@cite_31", "@cite_32", "@cite_20", "@cite_11" ], "mid": [ "2211482116", "1559478034", "2141454789", "1977375620" ], "abstract": [ "We present a new deterministic approximate algorithm for the reconstruction of sparse Legendre expansions from a small number of given samples. Using asymptotic properties of Legendre polynomials, this reconstruction is based on Prony-like methods. The method proposed is robust with respect to noisy sampled data. Furthermore we show that the suggested method can be extended to the reconstruction of sparse Gegenbauer expansions of low positive order.", "In this survey, we describe the classical Prony method and whose relatives. We sketch a frequently used Prony–like method for equispaced sampled data, namely the ESPRIT method. The case of nonequispaced sampled data is discussed too. For the reconstruction of a sparse eigenfunction expansion, a generalized Prony method is presented. The Prony methods are applied to the recovery of structured functions (such as exponential sums and extended exponential sums) and of sparse vectors. The recovery of spline functions with arbitrary knots from Fourier data is also based on Prony methods. Finally, some numerical examples are given. (© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)", "We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from [email protected]?slog^4(N) random samples that are chosen independently according to the Chebyshev probability measure [email protected](x)[email protected]^-^1(1-x^2)^-^1^ ^2dx. As an efficient recovery method, @?\"1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.", "We consider the problem of sparse interpolation of an approximate multivariate black-box polynomial in floating point arithmetic. That is, both the inputs and outputs of the black-box polynomial have some error, and all numbers are represented in standard, fixed-precision, floating point arithmetic. By interpolating the black box evaluated at random primitive roots of unity, we give efficient and numerically robust solutions. We note the similarity between the exact Ben-Or Tiwari sparse interpolation algorithm and the classical Prony's method for interpolating a sum of exponential functions, and exploit the generalized eigenvalue reformulation of Prony's method. We analyse the numerical stability of our algorithms and the sensitivity of the solutions, as well as the expected conditioning achieved through randomization. Finally, we demonstrate the effectiveness of our techniques in practice through numerical experiments and applications." ] }
1907.08302
2964151964
With the demand to process ever-growing data volumes, a variety of new data stream processing frameworks have been developed. Moving an implementation from one such system to another, e.g., for performance reasons, requires adapting existing applications to new interfaces. Apache Beam addresses these high substitution costs by providing an abstraction layer that enables executing programs on any of the supported streaming frameworks. In this paper, we present a novel benchmark architecture for comparing the performance impact of using Apache Beam on three streaming frameworks: Apache Spark Streaming, Apache Flink, and Apache Apex. We find significant performance penalties when using Apache Beam for application development in the surveyed systems. Overall, usage of Apache Beam for the examined streaming applications caused a high variance of query execution times with a slowdown of up to a factor of 58 compared to queries developed without the abstraction layer. All developed benchmark artifacts are publicly available to ensure reproducible results.
With respect to benchmarking DSPSs in general, the Linear Road benchmark by @cite_42 is a well-known work. It is an application benchmark that provides a benchmarking toolkit consisting of a data generator, a data sender, and a result validator. The underlying idea of the benchmark is a variable tolling system for a metropolitan area that covers multiple expressways with moving vehicles. The tolls charged depend on various aspects of the traffic situation, such as traffic congestion and accident proximity.
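To convey the variable-tolling idea, here is a toy sketch (illustrative only; the thresholds and constants are our assumptions, and the Linear Road specification defines its own exact formula): the toll for a road segment rises with congestion and is waived when drivers are alerted to a nearby accident.

```python
# Toy variable-toll function in the spirit of Linear Road (not the benchmark's
# actual formula): congested, accident-free segments are charged a toll that
# grows with the vehicle count.
def segment_toll(avg_speed_mph: float, num_vehicles: int, accident_nearby: bool) -> float:
    if accident_nearby:        # alerted drivers are not charged (assumption)
        return 0.0
    if avg_speed_mph >= 40 or num_vehicles <= 50:
        return 0.0             # segment is not considered congested (assumption)
    return 0.02 * (num_vehicles - 50) ** 2  # toll grows with congestion

print(segment_toll(avg_speed_mph=25.0, num_vehicles=120, accident_nearby=False))
```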
{ "cite_N": [ "@cite_42" ], "mid": [ "2112215401", "1970238092", "2157076480", "1975912085" ], "abstract": [ "This paper specifies the Linear Road Benchmark for Stream Data Management Systems (SDMS). Stream Data Management Systems process streaming data by executing continuous and historical queries while producing query results in real-time. This benchmark makes it possible to compare the performance characteristics of SDMS' relative to each other and to alternative (e.g., Relational Database) systems. Linear Road has been endorsed as an SDMS benchmark by the developers of both the Aurora [1] (out of Brandeis University, Brown University and MIT) and STREAM [8] (out of Stanford University) stream systems. Linear Road simulates a toll system for the motor vehicle expressways of a large metropolitan area. The tolling system uses \"variable tolling\" [6, 11, 9]: an increasingly prevalent tolling technique that uses such dynamic factors as traffic congestion and accident proximity to calculate toll charges. Linear Road specifies a variable tolling system for a fictional urban area including such features as accident detection and alerts, traffic congestion measurements, toll calculations and historical queries. After specifying the benchmark, we describe experimental results involving two implementations: one using a commercially available Relational Database and the other using Aurora. Our results show that a dedicated Stream Data Management System can outperform a Relational Database by at least a factor of 5 on streaming data applications.", "In this paper, we combine the most complete record of daily mobility, based on large-scale mobile phone data, with detailed Geographic Information System (GIS) data, uncovering previously hidden patterns in urban road usage. We find that the major usage of each road segment can be traced to its own - surprisingly few - driver sources. Based on this finding we propose a network of road usage by defining a bipartite network framework, demonstrating that in contrast to traditional approaches, which define road importance solely by topological measures, the role of a road segment depends on both: its betweeness and its degree in the road usage network. Moreover, our ability to pinpoint the few driver sources contributing to the major traffic flow allows us to create a strategy that achieves a significant reduction of the travel time across the entire road system, compared to a benchmark approach.", "This paper describes a system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in real-time in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior. Our method is notable in several respects: it detects and estimates multiple travel lanes; it fuses asynchronous, heterogeneous sensor streams; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the road. We analyze the system's performance in the context of the 2007 DARPA Urban Challenge. 
With five cameras and thirteen lidars, our method was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km h amidst moving traffic.", "Recently, big data has been evolved into a buzzword from academia to industry all over the world. Benchmarks are important tools for evaluating an IT system. However, benchmarking big data systems is much more challenging than ever before. First, big data systems are still in their infant stage and consequently they are not well understood. Second, big data systems are more complicated compared to previous systems such as a single node computing platform. While some researchers started to design benchmarks for big data systems, they do not consider the redundancy between their benchmarks. Moreover, they use artificial input data sets rather than real world data for their benchmarks. It is therefore unclear whether these benchmarks can be used to precisely evaluate the performance of big data systems. In this paper, we first analyze the redundancy among benchmarks from ICTBench, HiBench and typical workloads from real world applications: spatio-temporal data analysis for Shenzhen transportation system. Subsequently, we present an initial idea of a big data benchmark suite for spatio-temporal data. There are three findings in this work: (1) redundancy exists in these pioneering benchmark suites and some of them can be removed safely. (2) The workload behavior of trajectory data analysis applications is dramatically affected by their input data sets. (3) The benchmarks created for academic research cannot represent the cases of real world applications." ] }