Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1708.04728
2747590145
Deploying deep neural networks on mobile devices is a challenging task. Current model compression methods such as matrix decomposition effectively reduce the deployed model size, but still cannot satisfy real-time processing requirements. This paper first discovers that the major obstacle is the excessive execution time of non-tensor layers such as pooling and normalization without tensor-like trainable parameters. This motivates us to design a novel acceleration framework: DeepRebirth through "slimming" existing consecutive and parallel non-tensor and tensor layers. The layer slimming is executed at different substructures: (a) streamline slimming by merging the consecutive non-tensor and tensor layers vertically; (b) branch slimming by merging non-tensor and tensor branches horizontally. The proposed optimization operations significantly accelerate the model execution and also greatly reduce the run-time memory cost since the slimmed model architecture contains fewer hidden layers. To maximally avoid accuracy loss, the parameters in the newly generated layers are learned with layer-wise fine-tuning based on both theoretical analysis and empirical verification. As observed in the experiment, DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on GoogLeNet with only a 0.4% drop of top-5 accuracy on ImageNet. Furthermore, by combining with other model compression techniques, DeepRebirth offers an average of 65ms inference time on the CPU of Samsung Galaxy S6 with 86.5% top-5 accuracy, 14% faster than SqueezeNet which only has a top-5 accuracy of 80.5%.
Recently, SqueezeNet @cite_15 has become widely used for its much smaller memory cost and increased speed. However, its near-AlexNet accuracy is far below the state-of-the-art performance. Compared with these two newly proposed networks, our approach achieves much better accuracy with more significant acceleration. @cite_30 showed that the conv-relu-pool substructure may not be necessary for a neural network architecture: the authors find that max-pooling can simply be replaced by another convolution layer with increased stride without loss in accuracy. Different from this work, DeepRebirth replaces a complete substructure (e.g., conv-relu-pool, conv-relu-LRN-pool) with a single convolution layer, and aims to speed up model execution on the mobile device. In addition, our work slims a well-trained network by relearning the merged layers and does not require training from scratch. Essentially, DeepRebirth can be considered a special form of distillation @cite_12 that transfers the knowledge from the cumbersome substructure of multiple layers to the new accelerated substructure.
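The substructure-for-strided-convolution replacement described above is easy to illustrate. Below is a minimal PyTorch sketch (layer sizes are illustrative assumptions, not taken from either paper): the conv-relu-pool block and its single strided-convolution replacement produce outputs of identical shape, so the slimmed block can be dropped in and its weights relearned by layer-wise fine-tuning rather than trained from scratch.

```python
import torch
import torch.nn as nn

# Original substructure: stride-1 convolution followed by ReLU and 2x2 max-pooling.
original = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

# Slimmed substructure: a single convolution with increased stride yields the
# same output resolution, removing the non-tensor pooling layer entirely.
slimmed = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 64, 56, 56)
assert original(x).shape == slimmed(x).shape  # both yield (1, 128, 28, 28)
```

Because the two blocks agree on input and output shapes, the rest of the network is untouched; only the merged layer's parameters need fine-tuning.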
{ "cite_N": [ "@cite_30", "@cite_15", "@cite_12" ], "mid": [ "2279098554", "2788715907", "2179596243", "1570197553" ], "abstract": [ "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1 , 79.7 and 60.9 , respectively. Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.", "We propose a new method for creating computationally efficient convolutional neural networks (CNNs) by using low-rank representations of convolutional filters. Rather than approximating filters in previously-trained networks with more efficient versions, we learn a set of small basis filters from scratch; during training, the network learns to combine these basis filters into more complex filters that are discriminative for image classification. To train such networks, a novel weight initialization scheme is used. This allows effective initialization of connection weights in convolutional layers composed of groups of differently-shaped filters. We validate our approach by applying it to several existing CNN architectures and training these networks from scratch using the CIFAR, ILSVRC and MIT Places datasets. Our results show similar or higher accuracy than conventional CNNs with much less compute. 
Applying our method to an improved version of VGG-11 network using global max-pooling, we achieve comparable validation accuracy using 41 less compute and only 24 of the original VGG-11 model parameters; another variant of our method gives a 1 percentage point increase in accuracy over our improved VGG-11 model, giving a top-5 center-crop validation accuracy of 89.7 while reducing computation by 16 relative to the original VGG-11 model. Applying our method to the GoogLeNet architecture for ILSVRC, we achieved comparable accuracy with 26 less compute and 41 fewer model parameters. Applying our method to a near state-of-the-art network for CIFAR, we achieved comparable accuracy with 46 less compute and 55 fewer parameters.", "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy." ] }
1708.04863
2748458152
Agreement plays a central role in distributed systems working on a common task. The increasing size of modern distributed systems makes them more susceptible to single component failures. Fault-tolerant distributed agreement protocols rely for the most part on leader-based atomic broadcast algorithms, such as Paxos. Such protocols are mostly used for data replication, which requires only a small number of servers to reach agreement. Yet, their centralized nature makes them ill-suited for distributed agreement at large scales. The recently introduced atomic broadcast algorithm AllConcur enables high throughput for distributed agreement while being completely decentralized. In this paper, we extend the work on AllConcur in two ways. First, we provide a formal specification of AllConcur that enables a better understanding of the algorithm. Second, we formally prove AllConcur's safety property on the basis of this specification. Therefore, our work not only ensures operators safe usage of AllConcur, but also facilitates the further improvement of distributed agreement protocols based on AllConcur.
Atomic broadcast plays a central role in fault-tolerant distributed systems; for instance, it enables the implementation of both state machine replication @cite_19 @cite_21 and distributed agreement @cite_0 @cite_22 . As a result, the atomic broadcast problem sparked numerous proposals for algorithms @cite_13 . Many of the proposed algorithms rely on a distinguished server (i.e., a leader) to provide total order; yet, the leader may become a bottleneck, especially at large scale. As an alternative, total order can be achieved by destinations agreement @cite_13 @cite_11 @cite_0 . On the one hand, destinations agreement enables decentralized atomic broadcast algorithms; on the other hand, it entails agreement on the set of delivered messages and, thus, it requires consensus.
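To make the destinations-agreement idea concrete, the sketch below shows how a total order falls out of agreeing on a set of messages per round and then delivering that set in a deterministic order. This is a simplified Python sketch under stated assumptions (the consensus step is stubbed out, and the message layout is invented); it is not AllConcur's actual algorithm.

```python
# Destinations agreement, minimal sketch: in each round, all destinations
# agree on the set of messages to deliver, then every destination delivers
# that set in the same deterministic order, yielding a total order without
# a leader. The consensus step is abstracted away; in a real system it
# would be a fault-tolerant agreement round among the destinations.
from typing import List, Set, Tuple

Message = Tuple[int, int, str]  # (sender_id, sequence_number, payload)

def consensus(proposed: Set[Message]) -> Set[Message]:
    # Placeholder: assume all correct destinations agree on this set.
    return proposed

def deliver_round(received: Set[Message]) -> List[Message]:
    agreed = consensus(received)
    # A deterministic tie-break gives every destination the same order.
    return sorted(agreed, key=lambda m: (m[0], m[1]))

round_msgs = {(2, 1, "b"), (1, 1, "a"), (1, 2, "c")}
print(deliver_round(round_msgs))  # [(1, 1, 'a'), (1, 2, 'c'), (2, 1, 'b')]
```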
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_0", "@cite_19", "@cite_13", "@cite_11" ], "mid": [ "2130264930", "2167100431", "2680467112", "2136564093" ], "abstract": [ "Total order broadcast and multicast (also called atomic broadcast multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.", "Atomic broadcast is an important communication primitive often used to implement state-machine replication. Despite the large number of atomic broadcast algorithms proposed in the literature, few papers have discussed how to turn these algorithms into efficient executable protocols. Our main contribution, Ring Paxos, is a protocol derived from Paxos. Ring Paxos inherits the reliability of Paxos and can be implemented very efficiently. We report a detailed performance analysis of Ring Paxos and compare it to other atomic broadcast protocols.", "Many distributed systems require coordination between the components involved. With the steady growth of such systems, the probability of failures increases, which necessitates scalable fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose AllConcur, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In AllConcur, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of AllConcur supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. AllConcur can handle up to 135 million requests per second and achieves 17x higher throughput than today's standard leader-based protocols, such as Libpaxos. Thus, AllConcur is highly competitive with regard to existing solutions and, due to its decentralized approach, enables hitherto unattainable system designs in a variety of fields.", "We address the minimum-energy broadcast problem under the assumption that nodes beyond the nominal range of a transmitter can collect the energy of unreliably received overheard signals. As a message is forwarded through the network, a node will have multiple opportunities to reliably receive the message by collecting energy during each retransmission. We refer to this cooperative strategy as accumulative broadcast. We seek to employ accumulative broadcast in a large scale loosely synchronized, low-power network. Therefore, we focus on distributed network layer approaches for accumulative broadcast in which loosely synchronized nodes use only local information. To further simplify the system architecture, we assume that nodes forward only reliably decoded messages. Under these assumptions, we formulate the minimum-energy accumulative broadcast problem. We present a solution employing two subproblems. First, we identify the ordering in which nodes should transmit. Second, we determine the optimum power levels for that ordering. While the second subproblem can be solved by means of linear programming, the ordering subproblem is found to be NP-complete. We devise a heuristic algorithm to find a good ordering. 
Simulation results show the performance of the algorithm to be close to optimum and a significant improvement over the well known BIP algorithm for constructing energy-efficient broadcast trees. We then formulate a distributed version of the accumulative broadcast algorithm that uses only local information at the nodes and has performance close to its centralized counterpart." ] }
1708.04955
2774797382
A decentralized online quantum cash system, called qBitcoin, is given. We design the system so that it has great benefits of quantization, in the following sense. Firstly, quantum teleportation technology is used for coin transactions, which prevents the owner of a coin from keeping the original coin data even after sending the coin to another party. This was a main problem in the classical setting, and the blockchain was introduced to solve this issue. In qBitcoin, the double-spending problem never happens, and its security is guaranteed theoretically by virtue of quantum information theory. Making a block is time-consuming, and the system of qBitcoin is based on a quantum chain instead of blocks; therefore a payment can be completed much faster than in Bitcoin. Moreover, we employ a quantum digital signature so that the system naturally inherits the properties of a peer-to-peer (P2P) cash system as originally proposed in Bitcoin.
The attempt to make a money system based on quantum mechanics has a long history. It is believed that Wiesner made a prototype around 1970 (published in 1983) @cite_5 , in which quantum money that can be verified by a bank is given. In his scheme, quantum money was secure in the sense that it cannot be copied due to the no-cloning theorem; however, there were several problems. For example, a bank needs to maintain a giant database to store the classical information of quantum money. Aaronson proposed a quantum money scheme where a public key is used to verify a banknote @cite_17 , and his scheme was later developed in @cite_9 . There is a survey on attempts to quantize Bitcoin @cite_21 based on a classical blockchain system and the classical digital signature protocol proposed in @cite_9 . However, all of those works rely on classical digital signature protocols and classical coin transmission systems; hence computational hardness assumptions are vital to their security. In other words, if a computer equipped with ultimate computational ability appears someday, the money systems above are in danger of collapsing, just as today's bank systems are.
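The no-cloning theorem invoked above follows from a short linearity/unitarity argument. Here is the standard textbook form of the derivation (not specific to any of the cited schemes):

```latex
% Suppose a single unitary U could copy two distinct states against a
% blank register:
%   U(|psi>|0>) = |psi>|psi>   and   U(|phi>|0>) = |phi>|phi>.
% Unitaries preserve inner products, so comparing the two equations gives
\langle\psi|\phi\rangle \;=\; \langle\psi|\phi\rangle^{2}
\quad\Longrightarrow\quad \langle\psi|\phi\rangle \in \{0,\,1\}.
% Hence one device can clone only states from a fixed orthonormal set,
% never arbitrary unknown states -- which is what makes Wiesner-style
% quantum money impossible to counterfeit.
```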
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_21", "@cite_17" ], "mid": [ "2162106291", "2949369157", "1782172301", "1508636262" ], "abstract": [ "Forty years ago, Wiesner proposed using quantum states to create money that is physically impossible to counterfeit, something that cannot be done in the classical world. However, Wiesner's scheme required a central bank to verify the money, and the question of whether there can be unclonable quantum money that anyone can verify has remained open since. One can also ask a related question, which seems to be new: can quantum states be used as copy-protected programs, which let the user evaluate some function f, but not create more programs for f? This paper tackles both questions using the arsenal of modern computational complexity. Our main result is that there exist quantum oracles relative to which publicly-verifiable quantum money is possible, and any family of functions that cannot be efficiently learned from its input-output behavior can be quantumly copy-protected. This provides the first formal evidence that these tasks are achievable. The technical core of our result is a \"Complexity-Theoretic No-Cloning Theorem,\" which generalizes both the standard No-Cloning Theorem and the optimality of Grover search, and might be of independent interest. Our security argument also requires explicit constructions of quantum t-designs. Moving beyond the oracle world, we also present an explicit candidate scheme for publicly-verifiable quantum money, based on random stabilizer states; as well as two explicit schemes for copy-protecting the family of point functions. We do not know how to base the security of these schemes on any existing cryptographic assumption. (Note that without an oracle, we can only hope for security under some computational assumption.)", "Forty years ago, Wiesner pointed out that quantum mechanics raises the striking possibility of money that cannot be counterfeited according to the laws of physics. We propose the first quantum money scheme that is (1) public-key, meaning that anyone can verify a banknote as genuine, not only the bank that printed it, and (2) cryptographically secure, under a \"classical\" hardness assumption that has nothing to do with quantum money. Our scheme is based on hidden subspaces, encoded as the zero-sets of random multivariate polynomials. A main technical advance is to show that the \"black-box\" version of our scheme, where the polynomials are replaced by classical oracles, is unconditionally secure. Previously, such a result had only been known relative to a quantum oracle (and even there, the proof was never published). Even in Wiesner's original setting -- quantum money that can only be verified by the bank -- we are able to use our techniques to patch a major security hole in Wiesner's scheme. We give the first private-key quantum money scheme that allows unlimited verifications and that remains unconditionally secure, even if the counterfeiter can interact adaptively with the bank. Our money scheme is simpler than previous public-key quantum money schemes, including a knot-based scheme of The verifier needs to perform only two tests, one in the standard basis and one in the Hadamard basis -- matching the original intuition for quantum money, based on the existence of complementary observables. Our security proofs use a new variant of Ambainis's quantum adversary method, and several other tools that might be of independent interest.", "Work on quantum cryptography was started by S. J. 
Wiesner in a paper written in about 1970, but remained unpublished until 1983 [1]. Recently, there have been lots of renewed activities in the subject. The most wellknown application of quantum cryptography is the socalled quantum key distribution (QKD) [2–4], which is useful for making communications between two users totally unintelligible to an eavesdropper. QKD takes advantage of the uncertainty principle of quantum mechanics: Measuring a quantum system in general disturbs it. Therefore, eavesdropping on a quantum communication channel will generally leave unavoidable disturbance in the transmitted signal which can be detected by the legitimate users. Besides QKD, other quantum cryptographic protocols [5] have also been proposed. In particular, it is generally believed [4] that quantum mechanics can protect private information while it is being used for public decision. Suppose Alice has a secret x and Bob a secret y. In a “two-party secure computation” (TPSC), Alice and Bob compute a prescribed function f(x,y) in such a way that nothing about each party’s input is disclosed to the other, except for what follows logically from one’s private input and the function’s output. An example of the TPSC is the millionaires’ problem: Two persons would like to know who is richer, but neither wishes the other to know the exact amount of money he she has. In classical cryptography, TPSC can be achieved either through trusted intermediaries or by invoking some unproven computational assumptions such as the hardness of factoring large integers. The great expectation is that quantum cryptography can get rid of those requirements and achieve the same goal using the laws of physics alone. At the heart of such optimism has been the widespread belief that unconditionally secure quantum bit commitment (QBC) schemes exist [6]. Here we put such optimism into very serious doubt by showing", "It had been widely claimed that quantum mechanics can protect private information during public decision in, for example, the so-called two-party secure computation. If this were the case, quantum smart-cards, storing confidential information accessible only to a proper reader, could prevent fake teller machines from learning the PIN (personal identification number) from the customers' input. Although such optimism has been challenged by the recent surprising discovery of the insecurity of the so-called quantum bit commitment, the security of quantum two-party computation itself remains unaddressed. Here I answer this question directly by showing that all one-sided two-party computations (which allow only one of the two parties to learn the result) are necessarily insecure. As corollaries to my results, quantum one-way oblivious password identification and the so-called quantum one-out-of-two oblivious transfer are impossible. I also construct a class of functions that cannot be computed securely in any two-sided two-party computation. Nevertheless, quantum cryptography remains useful in key distribution and can still provide partial security in quantum money'' proposed by Wiesner." ] }
1708.04871
2748038854
We present SMAUG (Secure Mobile Authentication Using Gestures), a novel biometric-assisted authentication algorithm for mobile devices that is solely based on data collected from multiple sensors that are usually installed on modern devices -- touch screen, gyroscope and accelerometer. As opposed to existing approaches, our system supports fully flexible user input such as free-form gestures, multi-touch, and an arbitrary number of strokes. Our experiments confirm that this approach provides a high level of robustness and security. More precisely, in 77% of all our test cases over all gestures considered, a user has been correctly identified during the first authentication attempt, and in 99% after the third attempt, while an attacker has been detected in 97% of all test cases. As an example, for gestures that have a good balance between complexity and usability, e.g., drawing two parallel lines using two fingers at the same time, a 100% success rate after three login attempts and a 97% impostor detection rate were achieved. We stress that we consider the strongest possible attacker model: an attacker is not only allowed to monitor the legitimate user during the authentication process, but also receives additional information on the biometric properties, for example pressure, speed, rotation, and acceleration. We see this method as a significant step beyond existing authentication methods that can be deployed directly to devices in use without the need of additional hardware.
With respect to gesture recognition for single-touch gestures, Rubine @cite_30 is the usual reference when comparing new single-touch algorithms. Another prominent example of single-touch and single-stroke gesture recognition is @cite_0 . The authors of @cite_36 present a very efficient follow-up work for single-touch and multi-stroke input. In short, their algorithm joins all strokes in all possible combinations and thereby reduces the gesture recognition problem to the case of single-touch gestures. However, this algorithm family needs a predefined set of gestures. The authors in @cite_1 have developed a multi-dimensional DTW for gesture recognition. In @cite_35 , the authors present a gesture-based user authentication scheme for touch screens using solely the accelerometer. 3D hand gesture recognition in the air with mobile devices and an accelerometer is examined in @cite_15 . Similar gesture recognition research was done for the Kinect in @cite_27 and for the Wii in @cite_38 .
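A multi-dimensional DTW of the kind attributed to @cite_1 compares whole sensor-feature sequences rather than single coordinates. Below is a minimal sketch (the feature layout and the toy gesture are invented for illustration, not taken from the cited work):

```python
import numpy as np

def multidim_dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two multi-dimensional
    sequences a (n, d) and b (m, d), e.g. per-frame sensor feature
    vectors (touch position, pressure, acceleration, ...).
    The local cost is the Euclidean distance between feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: distance between a 2-D (x, y) gesture sample and a template;
# nearest-template classification would compare against all templates.
template = np.array([[0, 0], [1, 1], [2, 2]], dtype=float)
sample = np.array([[0, 0], [0.9, 1.2], [2.1, 1.9]], dtype=float)
print(multidim_dtw(sample, template))
```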
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_38", "@cite_36", "@cite_1", "@cite_0", "@cite_27", "@cite_15" ], "mid": [ "2595328592", "2097384738", "1963581134", "2134917076" ], "abstract": [ "Gesture recognition aims to recognize meaningful movements of human bodies, and is of utmost importance in intelligent human–computer robot interactions. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long-short-term-memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through the 3-D convolutional neural network, and then learns long-term spatiotemporal features by convolutional LSTM networks based on the extracted short-term spatiotemporal features. In addition, fine-tuning among multimodal data is evaluated, and we find that it can be considered as an optional skill to prevent overfitting when no pre-trained models exist. The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method can obtain the state-of-the-art recognition accuracy (51.02 on the validation set of IsoGD and 98.89 on SKIG).", "We present a new framework for multimodal gesture recognition that is based on a multiple hypotheses rescoring fusion scheme. We specifically deal with a demanding Kinect-based multimodal data set, introduced in a recent gesture recognition challenge (ChaLearn 2013), where multiple subjects freely perform multimodal gestures. We employ multiple modalities, that is, visual cues, such as skeleton data, color and depth images, as well as audio, and we extract feature descriptors of the hands' movement, handshape, and audio spectral properties. Using a common hidden Markov model framework we build single-stream gesture models based on which we can generate multiple single stream-based hypotheses for an unknown gesture sequence. By multimodally rescoring these hypotheses via constrained decoding and a weighted combination scheme, we end up with a multimodally-selected best hypothesis. This is further refined by means of parallel fusion of the monomodal gesture models applied at a segmental level. In this setup, accurate gesture modeling is proven to be critical and is facilitated by an activity detection system that is also presented. The overall approach achieves 93.3 gesture recognition accuracy in the ChaLearn Kinect-based multimodal data set, significantly outperforming all recently published approaches on the same challenging multimodal gesture recognition task, providing a relative error rate reduction of at least 47.6 .", "In most applications of touch based humancomputer interaction, multi-touch gestures are used for directlymanipulating the interface such as scaling, panning, etc. In thispaper, we propose using multi-touch gesture as indirectcommand, such as redo, undo, erase, etc., for the operatingsystem. The proposed recognition system is guided by temporal,spatial and shape information. This is achieved using a graphembedding approach where all previous information are used.We evaluated our multi-touch recognition system on a set of 18different multi-touch gestures. 
With this graph embeddingmethod and a SVM classifier, we achieve 94.50 recognition rate.We believe that our research points out a possibility ofintegrating together raw ink, direct manipulation and indirectcommand in many gesture-based complex application such as asketch drawing application.", "In many applications today user interaction is moving away from mouse and pens and is becoming pervasive and much more physical and tangible. New emerging interaction technologies allow developing and experimenting with new interaction methods on the long way to providing intuitive human computer interaction. In this paper, we aim at recognizing gestures to interact with an application and present the design and evaluation of our sensor-based gesture recognition. As input device we employ the Wii-controller (Wiimote) which recently gained much attention world wide. We use the Wiimote's acceleration sensor independent of the gaming console for gesture recognition. The system allows the training of arbitrary gestures by users which can then be recalled for interacting with systems like photo browsing on a home TV. The developed library exploits Wii-sensor data and employs a hidden Markov model for training and recognizing user-chosen gestures. Our evaluation shows that we can already recognize gestures with a small number of training samples. In addition to the gesture recognition we also present our experiences with the Wii-controller and the implementation of the gesture recognition. The system forms the basis for our ongoing work on multimodal intuitive media browsing and are available to other researchers in the field." ] }
1708.04871
2748038854
We present SMAUG (Secure Mobile Authentication Using Gestures), a novel biometric-assisted authentication algorithm for mobile devices that is solely based on data collected from multiple sensors that are usually installed on modern devices -- touch screen, gyroscope and accelerometer. As opposed to existing approaches, our system supports fully flexible user input such as free-form gestures, multi-touch, and an arbitrary number of strokes. Our experiments confirm that this approach provides a high level of robustness and security. More precisely, in 77% of all our test cases over all gestures considered, a user has been correctly identified during the first authentication attempt, and in 99% after the third attempt, while an attacker has been detected in 97% of all test cases. As an example, for gestures that have a good balance between complexity and usability, e.g., drawing two parallel lines using two fingers at the same time, a 100% success rate after three login attempts and a 97% impostor detection rate were achieved. We stress that we consider the strongest possible attacker model: an attacker is not only allowed to monitor the legitimate user during the authentication process, but also receives additional information on the biometric properties, for example pressure, speed, rotation, and acceleration. We see this method as a significant step beyond existing authentication methods that can be deployed directly to devices in use without the need of additional hardware.
Continuous authentication means that the device constantly tracks and evaluates the inputs and movements of the user on the device to authenticate the user. Such schemes generally suffer from some kind of privacy loss. Algorithms can be found in @cite_20 @cite_25 @cite_10 . In @cite_31 , the authors present an attack on the graphical password system of Windows 8. @cite_3 gives an overview of graphical password schemes developed so far. An enhancement for the Android pattern authentication, which utilizes the accelerometer, was presented in @cite_29 . The authors of @cite_12 give an authentication algorithm where up to five fingers can be used for multi-touch single-stroke (per finger) input in combination with the touch screen and accelerometer. Furthermore, they defined adversary models for mobile gesture recognition based on @cite_27 , which are all weaker than our adversary model. In @cite_18 , the authors allow multi-touch and free-form gestures and measure the amount of information of a gesture that can be used for authentication. Finally, @cite_17 presents a multi-touch authentication algorithm for five fingers using touch screen data and a predefined gesture set. In @cite_28 , the authors test free-form gesture authentication in non-laboratory environments.
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_29", "@cite_3", "@cite_27", "@cite_12", "@cite_31", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2952848219", "2079024329", "2052525588", "2102932275" ], "abstract": [ "This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.", "Common authentication methods based on passwords, tokens, or fingerprints perform one-time authentication and rely on users to log out from the computer terminal when they leave. Users often do not log out, however, which is a security risk. The most common solution, inactivity timeouts, inevitably fail security (too long a timeout) or usability (too short a timeout) goals. One solution is to authenticate users continuously while they are using the terminal and automatically log them out when they leave. Several solutions are based on user proximity, but these are not sufficient: they only confirm whether the user is nearby but not whether the user is actually using the terminal. Proposed solutions based on behavioral biometric authentication (e.g., keystroke dynamics) may not be reliable, as a recent study suggests. To address this problem we propose Zero-Effort Bilateral Recurring Authentication (ZEBRA). In ZEBRA, a user wears a bracelet (with a built-in accelerometer, gyroscope, and radio) on her dominant wrist. When the user interacts with a computer terminal, the bracelet records the wrist movement, processes it, and sends it to the terminal. The terminal compares the wrist movement with the inputs it receives from the user (via keyboard and mouse), and confirms the continued presence of the user only if they correlate. Because the bracelet is on the same hand that provides inputs to the terminal, the accelerometer and gyroscope data and input events received by the terminal should correlate because their source is the same - the user's hand movement. In our experiments ZEBRA performed continuous authentication with 85 accuracy in verifying the correct user and identified all adversaries within 11s. 
For a different threshold that trades security for usability, ZEBRA correctly verified 90 of users and identified all adversaries within 50s.", "This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We discuss strategies for generating secure and memorable free-form gestures. We conclude that free-form gestures present a robust method for mobile authentication.", "Current smartphones generally cannot continuously authenticate users during runtime. This poses severe security and privacy threats: A malicious user can manipulate the phone if bypassing the screen lock. To solve this problem, our work adopts a continuous and passive authentication mechanism based on a user’s touch operations on the touchscreen. Such a mechanism is suitable for smartphones, as it requires no extra hardware or intrusive user interface. We study how to model multiple types of touch data and perform continuous authentication accordingly. As a first attempt, we also investigate the fundamentals of touch operations as biometrics by justifying their distinctiveness and permanence. A onemonth experiment is conducted involving over 30 users. Our experiment results verify that touch biometrics can serve as a promising method for continuous and passive authentication." ] }
1708.05071
2746521834
In this paper, we propose to use deep 3-dimensional convolutional networks (3D CNNs) in order to address the challenge of modelling spectro-temporal dynamics for speech emotion recognition (SER). Compared to a hybrid of Convolutional Neural Network and Long-Short-Term-Memory (CNN-LSTM), our proposed 3D CNNs simultaneously extract short-term and long-term spectral features with a moderate number of parameters. We evaluated our proposed and other state-of-the-art methods in a speaker-independent manner using aggregated corpora that give a large and diverse set of speakers. We found that 1) shallow temporal and moderately deep spectral kernels of a homogeneous architecture are optimal for the task; and 2) our 3D CNNs are more effective for spectro-temporal feature learning compared to other methods. Finally, we visualised the feature space obtained with our proposed method using t-distributed stochastic neighbour embedding (T-SNE) and could observe distinct clusters of emotions.
The performance of SER using deep architectures can still be much improved, and an optimal feature set for SER has not yet been found. For example, in @cite_17 @cite_0 @cite_12 , high-level features obtained from off-the-shelf features outperformed conventional methods. However, representation learning using log-spectrogram features did not outperform off-the-shelf features: learning the complex sequential structure of emotional speech appears to be hard for representation learning @cite_8 .
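For concreteness, the log-spectrogram inputs discussed here can be computed as in the short librosa sketch below. The parameter values (sampling rate, window, hop, and number of Mel bands) and the file name are illustrative assumptions, not the settings of the cited works.

```python
import numpy as np
import librosa

# Compute a log-Mel spectrogram as a low-level input representation for SER,
# in contrast to off-the-shelf utterance-level feature sets.
y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical input file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=400,
                                     hop_length=160, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (64 bands, frames)
print(log_mel.shape)
```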
{ "cite_N": [ "@cite_0", "@cite_8", "@cite_12", "@cite_17" ], "mid": [ "2889374687", "2087618018", "2232901134", "2785325870" ], "abstract": [ "This paper proposes an attention pooling based representation learning method for speech emotion recognition (SER). The emotional representation is learned in an end-to-end fashion by applying a deep convolutional neural network (CNN) directly to spectrograms extracted from speech utterances. Motivated by the success of GoogleNet, two groups of filters with different shapes are designed to capture both temporal and frequency domain context information from the input spectrogram. The learned features are concatenated and fed into the subsequent convolutional layers. To learn the final emotional representation, a novel attention pooling method is further proposed. Compared with the existing pooling methods, such as max-pooling and average-pooling, the proposed attention pooling can effectively incorporate class-agnostic bottom-up, and class-specific top-down, attention maps. We conduct extensive evaluations on benchmark IEMOCAP data to assess the effectiveness of the proposed representation. Results demonstrate a recognition performance of 71.8 weighted accuracy (WA) and 68 unweighted accuracy (UA) over four emotions, which outperforms the state-of-the-art method by about 3 absolute for WA and 4 for UA.", "As an essential way of human emotional behavior understanding, speech emotion recognition (SER) has attracted a great deal of attention in human-centered signal processing. Accuracy in SER heavily depends on finding good affect- related , discriminative features. In this paper, we propose to learn affect-salient features for SER using convolutional neural networks (CNN). The training of CNN involves two stages. In the first stage, unlabeled samples are used to learn local invariant features (LIF) using a variant of sparse auto-encoder (SAE) with reconstruction penalization. In the second step, LIF is used as the input to a feature extractor, salient discriminative feature analysis (SDFA), to learn affect-salient, discriminative features using a novel objective function that encourages feature saliency, orthogonality, and discrimination for SER. Our experimental results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and language variation, and environment distortion) and outperforms several well-established SER features.", "There has been a lot of prior work on representation learning for speech recognition applications, but not much emphasis has been given to an investigation of effective representations of affect from speech, where the paralinguistic elements of speech are separated out from the verbal content. In this paper, we explore denoising autoencoders for learning paralinguistic attributes i.e. categorical and dimensional affective traits from speech. We show that the representations learnt by the bottleneck layer of the autoencoder are highly discriminative of activation intensity and at separating out negative valence (sadness and anger) from positive valence (happiness). We experiment with different input speech features (such as FFT and log-mel spectrograms with temporal context windows), and different autoencoder architectures (such as stacked and deep autoencoders). We also learn utterance specific representations by a combination of denoising autoencoders and BLSTM based recurrent autoencoders. 
Emotion classification is performed with the learnt temporal dynamic representations to evaluate the quality of the representations. Experiments on a well-established real-life speech dataset (IEMOCAP) show that the learnt representations are comparable to state of the art feature extractors (such as voice quality features and MFCCs) and are competitive with state-of-the-art approaches at emotion and dimensional affect recognition.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
1708.05071
2746521834
In this paper, we propose to use deep 3-dimensional convolutional networks (3D CNNs) in order to address the challenge of modelling spectro-temporal dynamics for speech emotion recognition (SER). Compared to a hybrid of Convolutional Neural Network and Long-Short-Term-Memory (CNN-LSTM), our proposed 3D CNNs simultaneously extract short-term and long-term spectral features with a moderate number of parameters. We evaluated our proposed and other state-of-the-art methods in a speaker-independent manner using aggregated corpora that give a large and diverse set of speakers. We found that 1) shallow temporal and moderately deep spectral kernels of a homogeneous architecture are optimal for the task; and 2) our 3D CNNs are more effective for spectro-temporal feature learning compared to other methods. Finally, we visualised the feature space obtained with our proposed method using t-distributed stochastic neighbour embedding (T-SNE) and could observe distinct clusters of emotions.
CNN-based methods using low-level features have been proposed and outperformed off-the-shelf feature-based methods @cite_6 @cite_16 @cite_11 @cite_1 @cite_26 . In @cite_6 @cite_16 @cite_11 , 2D feature maps were composed of spectrogram features with a fine resolution. However, such 2D CNNs cannot model temporal dependency directly; instead, an LSTM must follow to model temporal dependencies @cite_11 @cite_1 . Moreover, temporal convolutions can extract spectral features from raw wave signals and capture long-term dependencies @cite_1 . Lastly, a CNN-LSTM-DNN was proposed to address frequency variations in the spectral domain, long-term dependencies, and separation in utterance-level feature space for the task of speech recognition @cite_13 . While these methods augment CNNs with LSTMs to handle spectral variations and temporal dynamics, a large number of parameters is required, and it is hard to learn complex dynamics with limited depth. Without such complex memory mechanisms, 3D CNNs can learn temporal features @cite_21 @cite_7 . In @cite_21 @cite_7 , sequences of human motion were modelled by 3D CNNs, and it turned out empirically that 3D CNNs are not only effective but also efficient at capturing spatio-temporal features.
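A minimal PyTorch sketch of this 3D-CNN idea follows: convolve jointly over a segment (temporal) axis and the spectral/frame axes of stacked spectrogram chunks, with kernels that are shallow along the segment axis. Layer widths, kernel shapes, and the number of emotion classes are illustrative assumptions, not the proposed architecture's exact configuration.

```python
import torch
import torch.nn as nn

class SER3DCNN(nn.Module):
    """Sketch of a 3D CNN over a sequence of spectrogram segments.
    Input: (batch, 1, segments, mel_bands, frames). The kernel is shallow
    along the temporal (segment) axis and deeper along the spectral axes,
    so short- and long-term spectral patterns are learned jointly without
    a recurrent memory mechanism."""
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d((1, 2, 2)),              # pool only spectral axes
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),              # utterance-level embedding
        )
        self.classifier = nn.Linear(32, n_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

x = torch.randn(2, 1, 10, 64, 32)  # 10 segments of a 64-band log-Mel spectrogram
print(SER3DCNN()(x).shape)         # torch.Size([2, 4])
```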
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2751445731", "2892998444", "2529337537", "2344328023" ], "abstract": [ "3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two “artificial” requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 @math 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion , which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets).", "The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. 
Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods.", "Learning acoustic models directly from the raw waveform data with minimal processing is challenging. Current waveform-based models have generally used very few (∼2) convolutional layers, which might be insufficient for building high-level discriminative features. In this work, we propose very deep convolutional neural networks (CNNs) that directly use time-domain waveforms as inputs. Our CNNs, with up to 34 weight layers, are efficient to optimize over very long sequences (e.g., vector of size 32000), necessary for processing acoustic waveforms. This is achieved through batch normalization, residual learning, and a careful design of down-sampling in the initial layers. Our networks are fully convolutional, without the use of fully connected layers and dropout, to maximize representation learning. We use a large receptive field in the first convolutional layer to mimic bandpass filters, but very small receptive fields subsequently to control the model capacity. We demonstrate the performance gains with the deeper models. Our evaluation shows that the CNN with 18 weight layers outperforms the CNN with 3 weight layers by over 15% in absolute accuracy for an environmental sound recognition task and is competitive with the performance of models using log-mel features.", "In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods." ] }
1708.05905
2749529790
Internet of Things (IoT) systems have aroused both enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, and concerns may be associated with privacy issues. Using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information with third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about these dimensions. Although their perceptions seem to be dependent on the context, there are recurrent patterns. Our results suggest that IoT users can be classified into unconcerned, fundamentalists and pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most privacy-concerning aspects is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with their perceived utility in the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
From a legislative viewpoint, considering the laws of the United States of America, privacy can be defined as the right of an individual "to be let alone" @cite_6 . People, in turn, usually associate the word privacy with a diversity of meanings. Some people believe that privacy is the right to control what information about them may be made public @cite_21 @cite_2 @cite_19 . Other people believe that if someone cares about privacy, it is because he/she is involved in wrongdoing @cite_9 . Privacy is also associated with the states of solitude, intimacy, anonymity, and reserve @cite_1 @cite_19 . Solitude means physical separation from other individuals. Intimacy is a kind of close relationship between individuals within which information is exchanged. Anonymity is the state of freedom from identification and surveillance. Finally, reserve means the creation of psychological protection against intrusion by other unwanted individuals.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2" ], "mid": [ "2075476409", "2089513810", "2103088651", "2164649498" ], "abstract": [ "Philosophical and legal theories of privacy have long recognized the relationship between privacy and information about persons. They have, however, focused on personal, intimate, and sensitive information, assuming that with public information, and information drawn from public spheres, either privacy norms do not apply, or applying privacy norms is so burdensome as to be morally and legally unjustifiable. Against this preponderant view, I argue that information and communications technology, by facilitating surveillance, by vastly enhancing the collection, storage, and analysis of information, by enabling profiling, data mining and aggregation, has significantly altered the meaning of public information. As a result, a satisfactory legal and philosophical understanding of a right to privacy, capable of protecting the important values at stake in protecting privacy, must incorporate, in addition to traditional aspects of privacy, a degree of protection for privacy in public.", "Privacy is a concept which received relatively little attention during the rapid growth and spread of information technology through the 1980's and 1990's. Design to make information easily accessible, without particular attention to issues such as whether an individual had a desire or right to control access to and use of particular information was seen as the more pressing goal. We believe that there will be an increasing awareness of a fundamental need to address privacy concerns in information technology, and that doing so will require an understanding of policies that govern information use as well as the development of technologies that can implement such policies. The research reported here describes our efforts to design a privacy management workbench which facilitates privacy policy authoring, implementation, and compliance monitoring. This case study highlights the work of identifying organizational privacy requirements, analyzing existing technology, on-going research to identify approaches that address these requirements, and iteratively designing and validating a prototype with target users for flexible privacy technologies.", "The prevailing paradigm in Internet privacy literature, treating privacy within a context merely of rights and violations, is inadequate for studying the Internet as a social realm. Following Goffman on self-presentation and Altman's theorizing of privacy as an optimization between competing pressures for disclosure and withdrawal, the author investigates the mechanisms used by a sample (n = 704) of college students, the vast majority users of Facebook and Myspace, to negotiate boundaries between public and private. Findings show little to no relationship between online privacy concerns and information disclosure on online social network sites. Students manage unwanted audience concerns by adjusting profile visibility and using nicknames but not by restricting the information within the profile. 
Mechanisms analogous to boundary regulation in physical space, such as walls, locks, and doors, are favored; little adaptation is made to the Internet's key features of persistence, searchability, and cross-indexa...", "The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain “identifying” attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented (in Section 2) values for each sensitive attribute. In this paper, we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called “closeness.” We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n,t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments." ] }
1708.05905
2749529790
Internet of Things (IoT) systems have aroused enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, while concerns may be associated with privacy issues. Using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information with third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about these dimensions. Although their perceptions seem to depend on the context, there are recurrent patterns. Our results suggest that IoT users can be classified as unconcerned, fundamentalists or pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most concerning privacy aspects is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with the perceived utility of the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
In information and communications technology (ICT), the concept of privacy is usually associated with the degree of control over the flow of personal information @cite_15 . In this context, people relate privacy to their level of control over the collection of personal information, the usage of the collected information, and the third parties that can have access to it, such as relatives, friends, hierarchical superiors, and government agencies @cite_32 @cite_15 @cite_33 .
{ "cite_N": [ "@cite_15", "@cite_32", "@cite_33" ], "mid": [ "2089513810", "2075476409", "2119486295", "2103088651" ], "abstract": [ "Privacy is a concept which received relatively little attention during the rapid growth and spread of information technology through the 1980's and 1990's. Design to make information easily accessible, without particular attention to issues such as whether an individual had a desire or right to control access to and use of particular information was seen as the more pressing goal. We believe that there will be an increasing awareness of a fundamental need to address privacy concerns in information technology, and that doing so will require an understanding of policies that govern information use as well as the development of technologies that can implement such policies. The research reported here describes our efforts to design a privacy management workbench which facilitates privacy policy authoring, implementation, and compliance monitoring. This case study highlights the work of identifying organizational privacy requirements, analyzing existing technology, on-going research to identify approaches that address these requirements, and iteratively designing and validating a prototype with target users for flexible privacy technologies.", "Philosophical and legal theories of privacy have long recognized the relationship between privacy and information about persons. They have, however, focused on personal, intimate, and sensitive information, assuming that with public information, and information drawn from public spheres, either privacy norms do not apply, or applying privacy norms is so burdensome as to be morally and legally unjustifiable. Against this preponderant view, I argue that information and communications technology, by facilitating surveillance, by vastly enhancing the collection, storage, and analysis of information, by enabling profiling, data mining and aggregation, has significantly altered the meaning of public information. As a result, a satisfactory legal and philosophical understanding of a right to privacy, capable of protecting the important values at stake in protecting privacy, must incorporate, in addition to traditional aspects of privacy, a degree of protection for privacy in public.", "The advent of the Internet has made the transmission of personally identifiable information more common and often unintended by the user. As personal information becomes more accessible, individuals worry that businesses misuse the information that is collected while they are online. Organizations have tried to mitigate this concern in two ways: (1) by offering privacy policies regarding the handling and use of personal information and (2) by offering benefits such as financial gains or convenience. In this paper, we interpret these actions in the context of the information-processing theory of motivation. Information-processing theories, also known as expectancy theories in the context of motivated behavior, are built on the premise that people process information about behavior-outcome relationships. By doing so, they are forming expectations and making decisions about what behavior to choose. Using an experimental setting, we empirically validate predictions that the means to mitigate privacy concerns are associated with positive valences resulting in an increase in motivational score. 
In a conjoint analysis exercise, 268 participants from the United States and Singapore face trade-off situations, where an organization may only offer incomplete privacy protection or some benefits. While privacy protections (against secondary use, improper access, and error) are associated with positive valences, we also find that financial gains and convenience can significantly increase individuals' motivational score of registering with a Web site. We find that benefits-monetary reward and future convenience-significantly affect individuals' preferences over Web sites with differing privacy policies. We also quantify the value of Web site privacy protection. Among U.S. subjects, protection against errors, improper access, and secondary use of personal information is worth @math 44.62. Finally, our approach also allows us to identify three distinct segments of Internet users-privacy guardians, information sellers, and convenience seekers.", "The prevailing paradigm in Internet privacy literature, treating privacy within a context merely of rights and violations, is inadequate for studying the Internet as a social realm. Following Goffman on self-presentation and Altman's theorizing of privacy as an optimization between competing pressures for disclosure and withdrawal, the author investigates the mechanisms used by a sample (n = 704) of college students, the vast majority users of Facebook and Myspace, to negotiate boundaries between public and private. Findings show little to no relationship between online privacy concerns and information disclosure on online social network sites. Students manage unwanted audience concerns by adjusting profile visibility and using nicknames but not by restricting the information within the profile. Mechanisms analogous to boundary regulation in physical space, such as walls, locks, and doors, are favored; little adaptation is made to the Internet's key features of persistence, searchability, and cross-indexa..." ] }
1708.05905
2749529790
Internet of Things (IoT) systems have aroused enthusiasm and concerns. Enthusiasm comes from their utility in people's daily lives, while concerns may be associated with privacy issues. Using two IoT systems as case studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information with third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about these dimensions. Although their perceptions seem to depend on the context, there are recurrent patterns. Our results suggest that IoT users can be classified as unconcerned, fundamentalists or pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most concerning privacy aspects is the exchange of personal information with third parties. Individuals' perceived risk is negatively correlated with the perceived utility of the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems.
Concerns about privacy usually arise from unauthorized collection of personal data, unauthorized secondary use of the data, errors in personal data, and improper access to personal data @cite_22 . People's concerns are in fact associated with the possible consequences that these occurrences may have on their lives. Two relevant theoretical constructs that explore this view are face keeping and information boundary.
{ "cite_N": [ "@cite_22" ], "mid": [ "2119486295", "2898683213", "2077217970", "1981794546" ], "abstract": [ "The advent of the Internet has made the transmission of personally identifiable information more common and often unintended by the user. As personal information becomes more accessible, individuals worry that businesses misuse the information that is collected while they are online. Organizations have tried to mitigate this concern in two ways: (1) by offering privacy policies regarding the handling and use of personal information and (2) by offering benefits such as financial gains or convenience. In this paper, we interpret these actions in the context of the information-processing theory of motivation. Information-processing theories, also known as expectancy theories in the context of motivated behavior, are built on the premise that people process information about behavior-outcome relationships. By doing so, they are forming expectations and making decisions about what behavior to choose. Using an experimental setting, we empirically validate predictions that the means to mitigate privacy concerns are associated with positive valences resulting in an increase in motivational score. In a conjoint analysis exercise, 268 participants from the United States and Singapore face trade-off situations, where an organization may only offer incomplete privacy protection or some benefits. While privacy protections (against secondary use, improper access, and error) are associated with positive valences, we also find that financial gains and convenience can significantly increase individuals' motivational score of registering with a Web site. We find that benefits-monetary reward and future convenience-significantly affect individuals' preferences over Web sites with differing privacy policies. We also quantify the value of Web site privacy protection. Among U.S. subjects, protection against errors, improper access, and secondary use of personal information is worth @math 44.62. Finally, our approach also allows us to identify three distinct segments of Internet users-privacy guardians, information sellers, and convenience seekers.", "Concerns about genetic privacy affect individuals’ willingness to accept genetic testing in clinical care and to participate in genomics research. To learn what is already known about these views, we conducted a systematic review, which ultimately analyzed 53 studies involving the perspectives of 47,974 participants on real or hypothetical privacy issues related to human genetic data. Bibliographic databases included MEDLINE, Web of Knowledge, and Sociological Abstracts. Three investigators independently screened studies against predetermined criteria and assessed risk of bias. The picture of genetic privacy that emerges from this systematic literature review is complex and riddled with gaps. When asked specifically “are you worried about genetic privacy,” the general public, patients, and professionals frequently said yes. In many cases, however, that question was posed poorly or only in the most general terms. While many participants expressed concern that genomic and medical information would be revealed to others, respondents frequently seemed to conflate privacy, confidentiality, control, and security. People varied widely in how much control they wanted over the use of data. They were more concerned about use by employers, insurers, and the government than they were about researchers and commercial entities. 
In addition, people are often willing to give up some privacy to obtain other goods. Importantly, little attention was paid to understanding the factors–sociocultural, relational, and media—that influence people’s opinions and decisions. Future investigations should explore in greater depth which concerns about genetic privacy are most salient to people and the social forces and contexts that influence those perceptions. It is also critical to identify the social practices that will make the collection and use of these data more trustworthy for participants as well as to identify the circumstances that lead people to set aside worries and decide to participate in research.", "In the information realm, loss of privacy is usually associated with failure to control access to information, to control the flow of information, or to control the purposes for which information is employed. Differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved: privacy-preserving statistical analysis of data. The problem of statistical disclosure control – revealing accurate statistics about a set of respondents while preserving the privacy of individuals – has a venerable history, with an extensive literature spanning statistics, theoretical computer science, security, databases, and cryptography (see, for example, the excellent survey [1], the discussion of related work in [2] and the Journal of Official Statistics 9 (2), dedicated to confidentiality and disclosure control). This long history is a testament the importance of the problem. Statistical databases can be of enormous social value; they are used for apportioning resources, evaluating medical therapies, understanding the spread of disease, improving economic utility, and informing us about ourselves as a species. The data may be obtained in diverse ways. Some data, such as census, tax, and other sorts of official data, are compelled; others are collected opportunistically, for example, from traffic on the internet, transactions on Amazon, and search engine query logs; other data are provided altruistically, by respondents who hope that sharing their information will help others to avoid a specific misfortune, or more generally, to increase the public good. Altruistic data donors are typically promised their individual data will be kept confidential – in short, they are promised “privacy.” Similarly, medical data and legally compelled data, such as census data, tax return data, have legal privacy mandates. In our view, ethics demand that opportunistically obtained data should be treated no differently, especially when there is no reasonable alternative to engaging in the actions that generate the data in question. The problems remain: even if data encryption, key management, access control, and the motives of the data curator", "In today's data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. 
This re-identification violates people's intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address.This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn't work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven't rated." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
For Poisson bipolar networks, the mean success probability @math is calculated in @cite_1 and @cite_12 . For ad hoc networks modeled by the Poisson point process (PPP), the link success probability @math is studied in @cite_9 , where the focus is on the mean local delay, i.e., the @math st moment of @math in our notation. The notion of the transmission capacity (TC) is introduced in @cite_11 , which is defined as the maximum density of successful transmissions provided the outage probability of the typical user stays below a predefined threshold @math . While the results obtained in @cite_11 are certainly important, the TC does not represent the maximum density of successful transmissions for the target outage probability, as claimed in @cite_11 , since the metric implicitly assumes that each link in a realization of the network is typical.
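For reference, under Rayleigh fading and in the interference-limited regime, the mean success probability of the typical link in a Poisson bipolar network with ALOHA admits a well-known closed form. The notation below (density $\lambda$, transmit probability $p$, link distance $r$, SIR threshold $\theta$, path loss exponent $\alpha$) is ours and may differ from that of the cited papers:

```latex
% Mean success probability of the typical link (Poisson bipolar network,
% ALOHA with transmit probability p, Rayleigh fading, no noise).
p_{\mathrm{s}} = \mathbb{P}(\mathrm{SIR} > \theta)
              = \exp\!\left( -\lambda p \,\pi r^{2}\, \theta^{\delta}\,
                 \Gamma(1+\delta)\,\Gamma(1-\delta) \right),
\qquad \delta = \frac{2}{\alpha}.
```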
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_12", "@cite_11" ], "mid": [ "1640283668", "2114106914", "2963847582", "2545435035" ], "abstract": [ "The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90 link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to 0 while keeping the mean success probability constant.", "We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions. The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed-form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination. We also derive the optimum number of hops and optimal per hop success probability and show that our result follows the well-known square root scaling law while providing exact expressions for the preconstants, which contain most of the design-relevant network parameters. Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability.", "Transmission capacity (TC) is a performance metric for wireless networks that measures the spatial intensity of successful transmissions per unit area, subject to a constraint on the permissible outage probability (where outage occurs when the signal to interference plus noise ratio (SINR) at a receiver is below a threshold). This volume gives a unified treatment of the TC framework that has been developed by the authors and their collaborators over the past decade. The mathematical framework underlying the analysis (reviewed in Section 2) is stochastic geometry: Poisson point processes model the locations of interferers, and (stable) shot noise processes represent the aggregate interference seen at a receiver. Section 3 presents TC results (exact, asymptotic, and bounds) on a simple model in order to illustrate a key strength of the framework: analytical tractability yields explicit performance dependence upon key model parameters. 
Section 4 presents enhancements to this basic model — channel fading, variable link distances (VLD), and multihop. Section 5 presents four network design case studies well-suited to TC: (i) spectrum management, (ii) interference cancellation, (iii) signal threshold transmission scheduling, and (iv) power control. Section 6 studies the TC when nodes have multiple antennas, which provides a contrast vs. classical results that ignore interference.", "In this paper we consider a network where the nodes locations are modeled by a realization of a Poisson point process and remains fixed or changes very slowly over time. Most of the literature focuses on the spatial average of the link outage probabilities. But each link in the network has an associated link-outage probability that depends on the fading, path loss, and the relative locations of the interfering nodes. Since the node locations are random, the outage probability of each link is a random variable and in this paper we obtain its distribution, instead of just the spatial average. This work supplements the existing results which focus mainly on the average outage probability averaged over space. We propose a new notion of transmission capacity (TC) based on the outage distribution, and provide asymptotically tight bounds for the TC." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
A version of the TC based on the link success probability distribution is introduced in @cite_8 , but it does not consider a MAC scheme, i.e., all nodes always transmit ( @math ). The choice of @math is important as it greatly affects the link success probability distribution, as shown in Fig. . In this paper, we consider the general case with the transmit probability @math .
{ "cite_N": [ "@cite_8" ], "mid": [ "1640283668", "2151792936", "2168959209", "2545435035" ], "abstract": [ "The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90 link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to 0 while keeping the mean success probability constant.", "This paper surveys and unifies a number of recent contributions that have collectively developed a metric for decentralized wireless network analysis known as transmission capacity. Although it is notoriously difficult to derive general end-to-end capacity results for multi-terminal or adhoc networks, the transmission capacity (TC) framework allows for quantification of achievable single-hop rates by focusing on a simplified physical MAC-layer model. By using stochastic geometry to quantify the multi-user interference in the network, the relationship between the optimal spatial density and success probability of transmissions in the network can be determined, and expressed-often fairly simply-in terms of the key network parameters. The basic model and analytical tools are first discussed and applied to a simple network with path loss only and we present tight upper and lower bounds on transmission capacity (via lower and upper bounds on outage probability). We then introduce random channels (fading shadowing) and give TC and outage approximations for an arbitrary channel distribution, as well as exact results for the special cases of Rayleigh and Nakagami fading. We then apply these results to show how TC can be used to better understand scheduling, power control, and the deployment of multiple antennas in a decentralized network. The paper closes by discussing shortcomings in the model as well as future research directions.", "In this paper, a mathematical model for the beacon-enabled mode of the IEEE 802.15.4 medium-access control (MAC) protocol is provided. A personal area network (PAN) composed of multiple nodes, which transmit data to a PAN coordinator through direct links or multiple hops, is considered. The application is query based: Upon reception of the beacon transmitted by the PAN coordinator, each node tries to transmit its packet using the superframe structure defined by the IEEE 802.15.4 protocol. 
Those nodes that do not succeed in accessing the channel discard the packet; at the next superframe, a new packet is generated. The aim of the paper is to develop a flexible mathematical tool able to study beacon-enabled 802.15.4 networks organized in different topologies. Both the contention access period (CAP) and the contention-free period defined by the standard are considered. The slotted carrier-sense multiple access with collision avoidance (CSMA CA) algorithm used in the CAP portion of the superframe is analytically modeled. The model describes the probability of packet successful reception and access delay statistics. Moreover, both star and tree-based topologies are dealt with; a suitable comparison between these topologies is provided. The model is a useful tool for the design of MAC parameters and to select the better topology. The mathematical model is validated through simulation results. The model differs from those previously published by other authors in the literature as it precisely follows the MAC procedure defined by the standard in the context of the application scenario described.", "In this paper we consider a network where the nodes locations are modeled by a realization of a Poisson point process and remains fixed or changes very slowly over time. Most of the literature focuses on the spatial average of the link outage probabilities. But each link in the network has an associated link-outage probability that depends on the fading, path loss, and the relative locations of the interfering nodes. Since the node locations are random, the outage probability of each link is a random variable and in this paper we obtain its distribution, instead of just the spatial average. This work supplements the existing results which focus mainly on the average outage probability averaged over space. We propose a new notion of transmission capacity (TC) based on the outage distribution, and provide asymptotically tight bounds for the TC." ] }
1708.05870
2748257817
We address a fundamental question in wireless networks that, surprisingly, has not been studied before: what is the maximum density of concurrently active links that satisfy a certain outage constraint? We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate expressions for the density of links satisfying an outage constraint and give simple upper and lower bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counter-intuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit probability needs to be set to 1 to achieve the SOC.
The meta distribution @math for Poisson bipolar networks with ALOHA and cellular networks is calculated in @cite_10 , where a closed-form expression for the moments of @math is obtained, and an exact integral expression and simple bounds on @math are provided. A key result in @cite_10 is that, for constant transmitter density @math , as the Poisson bipolar network becomes very dense ( @math ) with a very small transmit probability ( @math ), the disparity among link success probabilities vanishes and all links have the same success probability, which is the mean success probability @math . For the Poisson cellular network, the meta distribution of the SIR is calculated for the downlink and uplink scenarios with fractional power control in @cite_14 , with base station cooperation in @cite_5 , and for D2D networks underlaying the cellular network (downlink) in @cite_3 . Furthermore, the meta distribution of the SIR is calculated for millimeter-wave D2D networks in @cite_15 and for D2D networks with interference cancellation in @cite_6 .
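To make the notion of the meta distribution concrete, the following minimal Python sketch estimates the distribution of the conditional link success probability in a Poisson bipolar network with ALOHA and Rayleigh fading by Monte Carlo. All parameter values (density `lam`, transmit probability `p`, link distance `d`, SIR threshold `theta`, path loss exponent `alpha`) are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

lam, p, d, theta, alpha = 1.0, 0.5, 0.5, 1.0, 4.0  # illustrative parameters
R = 20.0        # simulation window radius (interferers beyond R are ignored)
n_real = 5000   # number of network realizations

def conditional_success_prob():
    """P(SIR > theta | Phi) for the typical link at the origin.

    With Rayleigh fading and ALOHA(p), averaging over fading and the
    ALOHA marks gives, conditionally on the interferer locations x:
        P_s(Phi) = prod_x [ 1 - p * theta / (theta + (|x|/d)**alpha) ].
    """
    n = rng.poisson(lam * np.pi * R**2)   # number of potential interferers
    r = R * np.sqrt(rng.random(n))        # uniform radii in a disc of radius R
    return np.prod(1.0 - p * theta / (theta + (r / d)**alpha))

ps = np.array([conditional_success_prob() for _ in range(n_real)])

print("mean success probability:", ps.mean())
# Meta distribution: fraction of links with reliability above x
for x in (0.5, 0.8, 0.95):
    print(f"P(P_s > {x}) ~ {(ps > x).mean():.3f}")
```

The empirical distribution of `ps` across realizations is exactly the quantity the meta distribution describes; its mean recovers the standard success probability of the typical link.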
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_6", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "1640283668", "2605261135", "2963262497", "2769047447" ], "abstract": [ "The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90 link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to 0 while keeping the mean success probability constant.", "We study the performance of device-to-device (D2D) communication underlaying cellular wireless network in terms of the meta distribution of the signal-to-interference ratio (SIR), which is the distribution of the conditional SIR distribution given the locations of the wireless nodes. Modeling D2D transmitters and base stations as Poisson point processes (PPPs), moments of the conditional SIR distribution are derived in order to calculate analytical expressions for the meta distribution and the mean local delay of the typical D2D receiver and cellular downlink user. It turns out that for D2D users, the total interference from the D2D interferers and base stations is equal in distribution to that of a single PPP, while for downlink users, the effect of the interference from the D2D network is more complicated. We also derive the region of transmit probabilities for the D2D users and base stations that result in a finite mean local delay and give a simple inner bound on that region. Finally, the impact of increasing the base station density on the mean local delay, the meta distribution, and the density of users reliably served is investigated with numerical results.", "The meta distribution of the signal-to-interference ratio (SIR) provides fine-grained information about the performance of individual links in a wireless network. This paper focuses on the analysis of the meta distribution of the SIR for both the cellular network uplink and downlink with fractional power control. For the uplink scenario, an approximation of the interfering user point process with a non-homogeneous Poisson point process is used. The moments of the meta distribution for both scenarios are calculated. Some bounds, the analytical expression, the mean local delay, and the beta approximation of the meta distribution are provided. The results give interesting insights into the effect of the power control in both the uplink and downlink. 
Detailed simulations show that the approximations made in the analysis are well justified.", "The meta distribution provides fine-grained information on the signal-to-interference ratio (SIR) compared with the SIR distribution at the typical user. This paper first derives the meta distribution of the SIR in heterogeneous cellular networks with downlink coordinated multipoint transmission reception, including joint transmission (JT), dynamic point blanking (DPB), and dynamic point selection dynamic point blanking (DPS DPB), for the general typical user and the worst-case user (the typical user located at the Voronoi vertex in a single-tier network). A more general scheme called JT-DPB, which is the combination of JT and DPB, is studied. The moments of the conditional success probability are derived for the calculation of the meta distribution and the mean local delay. An exact analytical expression, the beta approximation, and simulation results of the meta distribution are provided. From the theoretical results, we gain insights on the benefits of different cooperation schemes and the impact of the number of cooperating base stations and other network parameters." ] }
1708.05768
2750396241
We consider the analysis of high dimensional data given in the form of a matrix with columns consisting of observations and rows consisting of features. Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality. Therefore, traditional transforms and metrics cannot be used for data organization and analysis. In this paper, our goal is to organize the data by defining an appropriate representation and metric such that they respect the smoothness and structure underlying the data. We also aim to generalize the joint clustering of observations and features in the case the data does not fall into clear disjoint groups. For this purpose, we propose multiscale data-driven transforms and metrics based on trees. Their construction is implemented in an iterative refinement procedure that exploits the co-dependencies between features and observations. Beyond the organization of a single dataset, our approach enables us to transfer the organization learned from one dataset to another and to integrate several datasets together. We present an application to breast cancer gene expression analysis: learning metrics on the genes to cluster the tumor samples into cancer sub-types and validating the joint organization of both the genes and the samples. We demonstrate that using our approach to combine information from multiple gene expression cohorts, acquired by different profiling technologies, improves the clustering of tumor samples.
This work is also related to the matrix factorization proposed by @cite_10 , where the graph Laplacians of both the features and the observations regularize the decomposition of a dataset into a low-rank matrix and a sparse matrix representing noise. The observations are then clustered by applying k-means to the low-dimensional principal components of the smooth low-rank matrix. Our work differs in that we perform an iterative embedding of the observations and features, not jointly, but alternating between the two while updating the graph Laplacian of each in turn. In addition, we provide a clustering of the data.
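As a rough illustration of this family of methods (not the exact algorithm of the cited work), the sketch below smooths a data matrix with a graph Laplacian penalty on the observations and then clusters the observations with k-means on the leading principal components of the smoothed matrix. The function name, the kNN graph construction, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

def laplacian_smoothed_clustering(X, n_clusters=3, k=10, gamma=1.0, n_pc=5):
    """Cluster the rows (observations) of X after Laplacian smoothing.

    Solves min_Y ||Y - X||_F^2 + gamma * tr(Y^T L Y), whose closed form is
    Y = (I + gamma * L)^{-1} X, i.e., a graph-regularized denoising of X.
    This is a simplified stand-in for the low-rank plus sparse
    decomposition of the cited work.
    """
    n = X.shape[0]
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity')
    W = 0.5 * (W + W.T).toarray()                  # symmetrize the kNN graph
    L = np.diag(W.sum(axis=1)) - W                 # combinatorial graph Laplacian
    Y = np.linalg.solve(np.eye(n) + gamma * L, X)  # smooth rows along the graph
    # k-means on the leading principal components of the smoothed matrix
    U, s, _ = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
    pcs = U[:, :n_pc] * s[:n_pc]
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pcs)

# Example on synthetic data: 90 observations, 40 features, 3 groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(mu, 1.0, size=(30, 40)) for mu in (0.0, 3.0, 6.0)])
print(laplacian_smoothed_clustering(X, n_clusters=3)[:10])
```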
{ "cite_N": [ "@cite_10" ], "mid": [ "2346506533", "2964122166", "2084376194", "2963701987" ], "abstract": [ "Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.", "The smallest eigenvalues and the associated eigenvectors (i.e., eigenpairs) of a graph Laplacian matrix have been widely used for spectral clustering and community detection. However, in real-life applications, the number of clusters or communities (say, K) is generally unknown a priori. Consequently, the majority of the existing methods either choose K heuristically or they repeat the clustering method with different choices of K and accept the best clustering result. The first option, more often, yields suboptimal result, while the second option is computationally expensive. In this work, we propose an incremental method for constructing the eigenspectrum of the graph Laplacian matrix. This method leverages the eigenstructure of graph Laplacian matrix to obtain the Kth smallest eigenpair of the Laplacian matrix given a collection of all previously computed (K-1 ) smallest eigenpairs. Our proposed method adapts the Laplacian matrix such that the batch eigenvalue decomposition problem transforms into an efficient sequential leading eigenpair computation problem. As a practical application, we consider user-guided spectral clustering. Specifically, we demonstrate that users can utilize the proposed incremental method for effective eigenpair computation and for determining the desired number of clusters based on multiple clustering metrics.", "In this paper, we consider the problem of unsupervised feature selection. Recently, spectral feature selection algorithms, which leverage both graph Laplacian and spectral regression, have received increasing attention. 
However, existing spectral feature selection algorithms suffer from two major problems: 1) since the graph Laplacian is constructed from the original feature space, noisy and irrelevant features may have adverse effect on the estimated graph Laplacian and hence degenerate the quality of the induced graph embedding, 2) since the cluster labels are discrete in natural, relaxing and approximating these labels into a continuous embedding can inevitably introduce noise into the estimated cluster labels. Without considering the noise in the cluster labels, the feature selection process may be misguided. In this paper, we propose a Robust Spectral learning framework for unsupervised Feature Selection (RSFS), which jointly improves the robustness of graph embedding and sparse spectral regression. Compared with existing methods which are sensitive to noisy features, our proposed method utilizes a robust local learning method to construct the graph Laplacian and a robust spectral regression method to handle the noise on the learned cluster labels. In order to solve the proposed optimization problem, an efficient iterative algorithm is proposed. We also show the close connection between the proposed robust spectral regression and robust Huber M-estimator. Experimental results on different datasets show the superiority of RSFS.", "Multi-view spectral clustering, which aims at yielding an agreement or consensus data objects grouping across multi-views with their graph laplacian matrices, is a fundamental clustering problem. Among the existing methods, Low-Rank Representation (LRR) based method is quite superior in terms of its effectiveness, intuitiveness and robustness to noise corruptions. However, it aggressively tries to learn a common low-dimensional subspace for multi-view data, while inattentively ignoring the local manifold structure in each view, which is critically important to the spectral clustering; worse still, the low-rank minimization is enforced to achieve the data correlation consensus among all views, failing to flexibly preserve the local manifold structure for each view. In this paper, 1) we propose a multi-graph laplacian regularized LRR with each graph laplacian corresponding to one view to characterize its local manifold structure. 2) Instead of directly enforcing the low-rank minimization among all views for correlation consensus, we separately impose low-rank constraint on each view, coupled with a mutual structural consensus constraint, where it is able to not only well preserve the local manifold structure but also serve as a constraint for that from other views, which iteratively makes the views more agreeable. Extensive experiments on real-world multi-view data sets demonstrate its superiority." ] }
1708.05732
2746990488
Inter-connected objects, whether over public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as the Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, from the mundane to the highly sensitive, such as prompt pizza or shopping deliveries to homes, up to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without a constant connection to the base station. The base station acts as the command centre that manages the activities of the drones. However, independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet may reach multiples of tens, 3) the time-criticality of decisions made by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrain that hinders or mandates limited communication with the control centre (i.e., operations spanning long periods of time, or military usage of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
The swarm intelligence paradigm has been used to optimise and control single UAVs: in @cite_0 , single-vehicle autonomous path planning is achieved by learning from a small number of examples.
{ "cite_N": [ "@cite_0" ], "mid": [ "1060861436", "2783023810", "2282034761", "1986607129" ], "abstract": [ "Swarm intelligence principles have been widely studied and applied to a number of different tasks where a group of autonomous robots is used to solve a problem with a distributed approach, i.e. without central coordination. A survey of such tasks is presented, illustrating various algorithms that have been used to tackle the challenges imposed by each task. Aggregation, flocking, foraging, object clustering and sorting, navigation, path formation, deployment, collaborative manipulation and task allocation problems are described in detail, and a high-level overview is provided for other swarm robotics tasks. For each of the main tasks, (1) swarm design methods are identified, (2) past works are divided in task-specific categories, and (3) mathematical models and performance metrics are described. Consistently with the swarm intelligence paradigm, the main focus is on studies characterized by distributed control, simplicity of individual robots and locality of sensing and communication. Distributed algorithms are shown to bring cooperation between agents, obtained in various forms and often without explicitly programming a cooperative behavior in the single robot controllers. Offline and online learning approaches are described, and some examples of past works utilizing these approaches are reviewed.", "This work proposes a method for moving a swarm of autonomous Unmanned Aerial Vehicles to accomplish an specific task. The approach uses a centralized strategy which considers a trajectory calculation and collision avoidance. The solution was implemented in a simulated scenario as well as in a real controlled environment using a swarm of nano drones, together with a setup supported by a motion capture system. The solution was tested while planting virtual seeds in a field composed by a grid of points that represent the places to be sown. Experiments were performed for measuring completion times and attempts to prevent impacts in order to test the effectiveness, scalability and stability of the solution as well as the robustness of the collision avoidance algorithm while increasing the number of agents to perform the task.", "This paper presents a distributed, guidance and control algorithm for reconfiguring swarms composed of hundreds to thousands of agents with limited communication and computation capabilities. This algorithm solves both the optimal assignment and collision-free trajectory generation for robotic swarms, in an integrated manner, when given the desired shape of the swarm without pre-assigned terminal positions. The optimal assignment problem is solved using a distributed auction assignment that can vary the number of target positions in the assignment, and the collision-free trajectories are generated using sequential convex programming. Finally, model predictive control is used to solve the assignment and trajectory generation in real time using a receding horizon. The model predictive control formulation uses current state measurements to resolve for the optimal assignment and trajectory. The implementation of the distributed auction algorithm and sequential convex programming using model predictive control produces the swarm assignment and trajectory optimization SATO algorithm that transfers a swarm of robots or vehicles to a desired shape in a distributed fashion. 
Once the desired shape is uploaded to the swarm, the algorithm determines where each robot goes and how it should get there in a fuel-efficient, collision-free manner. Results of flight experiments using multiple quadcopters show the effectiveness of the proposed SATO algorithm.", "In recent years, Unmanned Aerial Vehicles (UAV), have been increasingly utilized by both military and civilian organizations because they are less expensive, provide greater flexibilities and remove the need for on-board pilot support. Largely due to their utility and increased capabilities, in the near future, swarms of UAVs will replace single UAV use. Efficient control of swarms opens a set of new challenges, such as automatic UAV coordination, efficient swarm monitoring and dynamic mission planning. In this paper, we investigate the problem of dynamic mission planning for a UAV swarm. A centralized-distributed hybrid control framework is proposed for mission assignment and scheduling. The Dynamic Data Driven Application System (DDDAS) principles are applied to the framework so that it can adapt to the changing nature of the environment and the missions. A prototype simulation program is implemented as a proof-ofconcept of the framework. Experimentation with the framework suggests the effectiveness of swarm control for several mission planning mechanisms." ] }
1708.05732
2746990488
Inter-connected objects, whether over public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as the Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, from the mundane to the highly sensitive, such as prompt pizza or shopping deliveries to homes, up to battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without a constant connection to the base station. The base station acts as the command centre that manages the activities of the drones. However, independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet may reach multiples of tens, 3) the time-criticality of decisions made by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrain that hinders or mandates limited communication with the control centre (i.e., operations spanning long periods of time, or military usage of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
In @cite_8 , three-dimensional path planning for a single drone is performed using a bat-inspired algorithm to determine suitable points in space, with B-spline curves applied to improve the smoothness of the resulting path.
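To illustrate the B-spline smoothing step only (the bat-inspired optimisation itself is outside the scope of this sketch), the following Python snippet fits a smoothing B-spline through a set of 3D waypoints using SciPy; the waypoints and the smoothing factor `s` are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Illustrative 3D waypoints, e.g., as produced by a path-planning search
waypoints = np.array([
    [0.0, 0.0, 1.0],
    [1.0, 2.0, 1.5],
    [2.5, 2.5, 2.0],
    [4.0, 1.0, 2.5],
    [5.0, 0.0, 3.0],
])

# Fit a cubic smoothing B-spline through the waypoints;
# s > 0 trades waypoint fidelity for smoothness of the path.
tck, u = splprep(waypoints.T, k=3, s=0.1)

# Evaluate the smooth path at 100 points along the curve
u_fine = np.linspace(0.0, 1.0, 100)
x, y, z = splev(u_fine, tck)
path = np.column_stack([x, y, z])
print(path.shape)  # (100, 3)
```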
{ "cite_N": [ "@cite_8" ], "mid": [ "2196839768", "2409009991", "2156852101", "2091744661" ], "abstract": [ "Abstract As a challenging high dimension optimization problem, three-dimensional path planning for Uninhabited Combat Air Vehicles (UCAV) mainly centralizes on optimizing the flight route with different types of constrains under complicated combating environments. An improved version of Bat Algorithm (BA) in combination with a Differential Evolution (DE), namely IBA, is proposed to optimize the UCAV three-dimensional path planning problem for the first time. In IBA, DE is required to select the most suitable individual in the bat population. By connecting the selected nodes using the proposed IBA, a safe path is successfully obtained. In addition, B-Spline curves are employed to smoothen the path obtained further and make it practically more feasible for UCAV. The performance of IBA is compared to that of the basic BA on a 3-D UCAV path planning problem. The experimental results demonstrate that IBA is a better technique for UCAV three-dimensional path planning problems compared to the basic BA model.", "This paper presents a novel path planning algorithm for the autonomous exploration of unknown space using aerial robotic platforms. The proposed planner employs a receding horizon “next-best-view” scheme: In an online computed random tree it finds the best branch, the quality of which is determined by the amount of unmapped space that can be explored. Only the first edge of this branch is executed at every planning step, while repetition of this procedure leads to complete exploration results. The proposed planner is capable of running online, onboard a robot with limited resources. Its high performance is evaluated in detailed simulation studies as well as in a challenging real world experiment using a rotorcraft micro aerial vehicle. Analysis on the computational complexity of the algorithm is provided and its good scaling properties enable the handling of large scale and complex problem setups.", "This paper presents a strategy for improving motion planning of an unmanned helicopter flying in a dense and complex city-like environment. Although Sampling Based Motion planning algorithms have shown success in many robotic problems, problems that exhibit ldquonarrow passagerdquo properties involving kinodynamic planning of high dimensional vehicles like aerial vehicles still present computational challenges. In this work, to solve the kinodynamic motion planning problem of an unmanned helicopter, we suggest a two step planner. In the first step, the planner explores the environment through a randomized reachability tree search using an approximate line segment model. The resulting connecting path is converted into flight way points through a line-of-sight segmentation. In the second step, every consecutive way points are connected with B-Spline curves and these curves are repaired probabilistically to obtain a dynamically feasible path. Numerical simulations in 3D indicate the ability of the method to provide real-time solutions in dense and complex environments.", "The problem of generating a smooth reference path, given a finite family of discrete, locally optimal paths, is investigated. A finite discretization of the environment results in a sequence of obstacle-free square cells. The generated path must lie inside the channel generated by these obstacle-free cells, while minimizing certain performance criteria. 
Two constrained optimization problems are formulated and solved subject to the given geometric (linear) constraints and boundary conditions in order to generate a library of B-spline path templates offline. These templates are recalled during implementation and are merged together on the fly in order to construct a smooth and feasible reference path to be followed by a closed-loop tracking controller. Combined with a discrete path planner, the proposed algorithm provides a complete solution to the obstacle-free path-generation problem for an unmanned aerial vehicle in a computationally efficient manner, which is suitable for real-time implementation." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as the Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, from the mundane to the highly sensitive, such as prompt pizza or shopping deliveries to your home, or battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without a constant connection to the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet might be in the tens, 3) the time-criticality of decisions made by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrains that hinders or mandates limited communication with the control centre (e.g., operations spanning long periods of time, or military use of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
In @cite_35 , the authors introduced and validated a decentralised architecture for search and rescue missions with ground-based robot groups of different sizes. The architecture considers limited communication with a command centre and employs distributed communication.
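The cited exploration work organises each robot's behaviour around four states (explore, meet, sacrifice, relay). A minimal sketch of such a state machine follows; the transition conditions and thresholds below are simplified assumptions for illustration, not the authors' actual policy.

```python
# Minimal sketch of the four behavioural states reported in the cited
# multi-robot exploration work; transitions are illustrative assumptions.
from enum import Enum, auto

class Role(Enum):
    EXPLORE = auto()    # frontier-based exploration
    MEET = auto()       # return to last known comms point to share data
    SACRIFICE = auto()  # keep exploring even without battery to return
    RELAY = auto()      # land and forward traffic between team members

def next_role(role, battery_range, dist_home, comms_ok, data_pending):
    """Pick the next state; battery_range and dist_home are remaining
    flight range and distance to base (m), both assumed inputs."""
    if role is Role.SACRIFICE:
        return Role.SACRIFICE                          # terminal by definition
    if data_pending and not comms_ok:
        return Role.MEET                               # go share the map
    if battery_range < dist_home:                      # cannot make it back
        return Role.SACRIFICE
    if comms_ok and battery_range < 1.5 * dist_home:   # marginal battery: relay
        return Role.RELAY
    return Role.EXPLORE

print(next_role(Role.EXPLORE, battery_range=40.0, dist_home=50.0,
                comms_ok=False, data_pending=False))   # Role.SACRIFICE
```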
{ "cite_N": [ "@cite_35" ], "mid": [ "1601776805", "2125834926", "2060794637", "1964918420" ], "abstract": [ "We propose a multi-robot exploration algorithm that uses adaptive coordination to provide heterogeneous behavior. The key idea is to maximize the efficiency of exploring and mapping an unknown environment when a team is faced with unreliable communication and limited battery life (e.g., with aerial rotorcraft). The proposed algorithm utilizes four states: explore, meet, sacrifice, and relay. The explore state uses a frontier-based exploration algorithm, the meet state returns to the last known location of communication to share data, the sacrifice state sends the robot out to explore without consideration of remaining battery, and the relay state lands the robot until a meeting occurs. This approach allows robots to take on the role of a relay to improve communication between team members. In addition, the robots can “sacrifice” themselves by continuing to explore even when they do not have sufficient battery to return to the base station. We compare the performance of the proposed approach to state-of-the-art frontier-based exploration, and results show gains in explored area. The feasibility of components of the proposed approach is also demonstrated on a team of two custom-built quadcopters exploring an office environment.", "Robot search and rescue is a time critical task, i.e. a large terrain has to be explored by multiple robots within a short amount of time. The efficiency of exploration depends mainly on the coordination between the robots and hence on the reliability of communication, which considerably suffers under the hostile conditions encountered after a disaster. Furthermore, rescue robots have to generate a map of the environment which has to be sufficiently accurate for reporting the locations of victims to human task forces. Basically, the robots have to solve autonomously in real-time the problem of Simultaneous Localization and Mapping (SLAM). This paper proposes a novel method for real-time exploration and SLAM based on RFID tags that are autonomously distributed in the environment. We utilized the algorithm of Lu and Milios [7] for calculating globally consistent maps from detected RFID tags. Furthermore we show how RFID tags can be used for coordinating the exploration of multiple robots. Results from experiments conducted in the simulation and on a robot show that our approach allows the computationally efficient construction of a map within harsh environments, and coordinated exploration of a team of robots.", "Urban search and rescue missions raise special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. A limited payload of small aerial systems poses a great challenge to the system design. The optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication to external computers cannot be guaranteed; therefore, all processing and decision making has to be done on board. In this article, we present an unmanned aircraft system design fulfilling these requirements. The components of our system are structured into groups to encapsulate their functionality and interfaces. We use both laser and stereo vision odometry to enable seamless indoor and outdoor navigation. 
The odometry is fused with an inertial measurement unit in an extended Kalman filter. Navigation is supported by a module that recognizes known objects in the environment. A distributed computation approach is adopted to address the computational requirements of the used algorithms. The capabilities of the system are validated in flight experiments, using a quadrotor.", "Current applications of mobile robots in urban search and rescue (USAR) environments require a human operator in the loop to help guide the robot remotely. Although human operation can be effective, the unknown cluttered nature of the environments make robot navigation and victim identification highly challenging. Operators can become stressed and fatigued very quickly due to a loss of situational awareness, leading to the robots getting stuck and not being able to find victims in the scene during this time-sensitive operation. In addition, current autonomous robots are not capable of traversing these complex unpredictable environments. To address this challenge, a balance between the level of autonomy of the robot and the amount of human control over the robot needs to be addressed. In this paper, we present a unique control architecture for semi-autonomous navigation of a robotic platform utilizing sensory information provided by a novel real-time 3D mapping sensor. The control system provides the robot with the ability to learn and make decisions regarding which rescue tasks should be carried out at a given time and whether an autonomous robot or a human controlled robot can perform these tasks more efficiently without compromising the safety of the victims, rescue workers and the rescue robot. Preliminary experiments were conducted to evaluate the performance of the proposed collaborative control approach for a USAR robot in an unknown cluttered environment." ] }
1708.05732
2746990488
Inter-connected objects, either via public or private networks, are the near future of modern societies. Such inter-connected objects are referred to as the Internet-of-Things (IoT) and/or Cyber-Physical Systems (CPS). One example of such a system is based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles are prophesied to take on multiple roles, from the mundane to the highly sensitive, such as prompt pizza or shopping deliveries to your home, or battlefield deployment for reconnaissance and combat missions. Drones, as we refer to UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without a constant connection to the base station. The base station acts as the command centre to manage the activities of the drones. However, an independent, localised and effective fleet control is required, potentially based on swarm intelligence, for the following reasons: 1) the increase in the number of drone fleets, 2) the number of drones in a fleet might be in the tens, 3) the time-criticality of decisions made by such fleets in the wild, 4) potential communication congestion and lag, and 5) in some cases, operation in challenging terrains that hinders or mandates limited communication with the control centre (e.g., operations spanning long periods of time, or military use of such fleets in enemy territory). Such a self-aware, mission-focused and independent fleet of drones would potentially utilise swarm intelligence for a) air-traffic and/or flight control management, b) obstacle avoidance, c) self-preservation while maintaining the mission criteria, d) collaboration with other fleets in the wild (autonomously), and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action on how to overcome them.
In @cite_1 , the authors achieved area coverage for surveillance in a fleet of drones (FoD), using visual relative localisation to keep formation autonomously.
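A minimal sketch of formation keeping from relative measurements is given below: a follower measures the leader's position in its own frame and steers toward a desired offset. The proportional law, gains and limits are illustrative assumptions, not the controller of @cite_1 .

```python
# Sketch of formation keeping from visual *relative* localisation: the
# follower needs no global position, only the leader's position in its
# own body frame. A simple saturated proportional law is assumed here.
import numpy as np

def formation_velocity(rel_leader, desired_offset, k_p=0.8, v_max=2.0):
    """Velocity command driving the measured relative position toward the
    desired offset; both vectors are in the follower's body frame (m)."""
    error = rel_leader - desired_offset
    v = k_p * error
    speed = np.linalg.norm(v)
    if speed > v_max:                 # saturate to the platform's speed limit
        v *= v_max / speed
    return v

# Example: leader seen 3 m ahead and 1 m left; keep it 2 m ahead, 0 m left.
print(formation_velocity(np.array([3.0, 1.0]), np.array([2.0, 0.0])))
```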
{ "cite_N": [ "@cite_1" ], "mid": [ "2053842193", "2889337521", "2133235827", "2124917681" ], "abstract": [ "We describe a novel method for directing the attention of an automated surveillance system. Our starting premise is that the attention of people in a scene can be used as an indicator of interesting areas and events. To determine people’s attention from passive visual observations we develop a system for automatic tracking and detection of individual heads to infer their gaze direction. The former is achieved by combining a histogram of oriented gradient (HOG) based head detector with frame-to-frame tracking using multiple point features to provide stable head images. The latter is achieved using a head pose classification method which uses randomised ferns with decision branches based on both HOG and colour based features to determine a coarse gaze direction for each person in the scene. By building both static and temporally varying maps of areas where people look we are able to identify interesting regions.", "Nowadays, video surveillance scenarios usually rely on manually annotated focus areas to constrain automatic video analysis tasks. Although manual annotation simplifies several stages of the analysis, its use hinders the scalability of the developed solutions and might induce operational problems in scenarios recorded with multiple moving cameras (MMCs). To tackle these problems, an automatic method for the cooperative extraction of areas of interest (AoIs) is proposed. Each captured frame is segmented into regions with semantic roles using a state-of-the-art method. Semantic evidences from different junctures, cameras, and points-of-view are, then, spatio-temporally aligned on a common ground plane. Experimental results on widely used datasets recorded with multiple but static cameras suggest that this process provides broader and more accurate AoIs than those manually defined in the datasets. Moreover, the proposed method naturally determines the projection of obstacles and functional objects in the scene, paving the road towards systems focused on the automatic analysis of human behavior. To our knowledge, this is the first study dealing with this problem, as evidenced by the lack of publicly available MMC benchmarks. To also cope with this issue, we provide a new MMC dataset with associated semantic scene annotations.", "This paper presents a survey of trajectory-based activity analysis for visual surveillance. It describes techniques that use trajectory data to define a general set of activities that are applicable to a wide range of scenes and environments. Events of interest are detected by building a generic topographical scene description from underlying motion structure as observed over time. The scene topology is automatically learned and is distinguished by points of interest and motion characterized by activity paths. The methods we review are intended for real-time surveillance through definition of a diverse set of events for further analysis triggering, including virtual fencing, speed profiling, behavior classification, anomaly detection, and object interaction.", "In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, adding to other confounding factors such as human fatigue and boredom. 
The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features such as flow vectors or control points, this paper builds upon a recently introduced approach that makes use of higher-level features of intentionality. Individuals in the scene are modelled as intentional agents, and unusual behaviour is detected by evaluating the explicability of the agent's trajectory with respect to known spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach." ] }
1708.05894
2745765770
Sepsis is a poorly understood and potentially life-threatening complication that can occur as a result of infection. Early detection and treatment improves patient outcomes, and as such it poses an important challenge in medicine. In this work, we develop a flexible classifier that leverages streaming lab results, vitals, and medications to predict sepsis before it occurs. We model patient clinical time series with multi-output Gaussian processes, maintaining uncertainty about the physiological state of a patient while also imputing missing values. The mean function takes into account the effects of medications administered on the trajectories of the physiological variables. Latent function values from the Gaussian process are then fed into a deep recurrent neural network to classify patient encounters as septic or not, and the overall model is trained end-to-end using back-propagation. We train and validate our model on a large dataset of 18 months of heterogeneous inpatient stays from the Duke University Health System, and develop a new "real-time" validation scheme for simulating the performance of our model as it will actually be used. Our proposed method substantially outperforms clinical baselines, and improves on a previous related model for detecting sepsis. Our model's predictions will be displayed in a real-time analytics dashboard to be used by a sepsis rapid response team to help detect and improve treatment of sepsis.
There are many previously published early warning scores for predicting clinical deterioration and other related outcomes. For instance, the NEWS ( @cite_13 ) and MEWS ( @cite_25 ) scores are two of the more common scores used to assess overall deterioration. The SIRS score for systemic inflammatory response syndrome was commonly used to screen for sepsis in the past ( @cite_10 ), although in recent years it has been phased out in favour of scores designed specifically for sepsis, such as SOFA ( @cite_26 ) and qSOFA ( @cite_6 ).
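For concreteness, the sketch below computes two of these rule-based screens from a single set of vitals. The thresholds follow the published qSOFA and SIRS criteria; the function names and example values are ours.

```python
# Minimal sketch of two rule-based screens mentioned above.
def qsofa(resp_rate, sys_bp, gcs):
    """qSOFA: one point each for RR >= 22 /min, SBP <= 100 mmHg, GCS < 15.
    A score >= 2 flags risk of poor outcome in suspected infection."""
    return (resp_rate >= 22) + (sys_bp <= 100) + (gcs < 15)

def sirs(temp_c, heart_rate, resp_rate, wbc_k):
    """SIRS: >= 2 of temperature >38 or <36 C, HR >90 /min, RR >20 /min,
    WBC >12k or <4k per mm^3 (the band-form criterion is omitted here)."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,
        heart_rate > 90,
        resp_rate > 20,
        wbc_k > 12.0 or wbc_k < 4.0,
    ]
    return sum(criteria)

print(qsofa(resp_rate=24, sys_bp=95, gcs=14))                        # 3 -> high risk
print(sirs(temp_c=38.5, heart_rate=110, resp_rate=22, wbc_k=13.0))   # 4 -> SIRS positive
```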
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_6", "@cite_10", "@cite_25" ], "mid": [ "1999351703", "1943063538", "2150979970", "2014224402" ], "abstract": [ "Objective. —To develop customized versions of the Simplified Acute Physiology Score II (SAPS II) and the 24-hour Mortality Probability Model II (MPM II24) to estimate the probability of mortality for intensive care unit patients with early severe sepsis. Design and Setting. —Logistic regression models developed for patients with severe sepsis in a database of adult medical and surgical intensive care units in 12 countries. Patients. —Of 11 458 patients in the intensive care unit for at least 24 hours, 1130 had severe sepsis based on criteria of the American College of Chest Physicians and the Society of Critical Care Medicine (systemic inflammatory response syndrome in response to infection, plus hypotension, hypoperfusion, or multiple organ dysfunction). Results. —In patients with severe sepsis, mortality was higher (48.0 vs 19.6 among other patients) and 28-day survival was lower. The customized SAPS II was well calibrated (P=.92 for the goodness-of-fit test) and discriminated well (area under the receiver operating characteristic [ROC] curve, 0.78). Performance in the validation sample was equally good (P=.85 for the goodness-of-fit test; area under the ROC curve, 0.79). The customized MPM II24was well calibrated (P=.92 for the goodness-of-fit test) and discriminated well (area under the ROC curve, 0.79). Performance in the validation sample was equally good (P=.52 for the goodness-of-fit test; area under the ROC curve, 0.75). The models are independent of each other; either can be used alone to estimate the probability of mortality of patients with severe sepsis. Conclusions. —Customization provides a simple technique to apply existing models to a subgroup of patients. Accurately assessing the probability of hospital mortality is a useful adjunct for clinical trials. (JAMA. 1995;273:644-650)", "Sepsis is a leading cause of death in the United States, with mortality highest among patients who develop septic shock. Early aggressive treatment decreases morbidity and mortality. Although automated screening tools can detect patients currently experiencing severe sepsis and septic shock, none predict those at greatest risk of developing shock. We analyzed routinely available physiological and laboratory data from intensive care unit patients and developed “TREWScore,” a targeted real-time early warning score that predicts which patients will develop septic shock. TREWScore identified patients before the onset of septic shock with an area under the ROC (receiver operating characteristic) curve (AUC) of 0.83 [95 confidence interval (CI), 0.81 to 0.85]. At a specificity of 0.67, TREWScore achieved a sensitivity of 0.85 and identified patients a median of 28.2 [interquartile range (IQR), 10.6 to 94.2] hours before onset. Of those identified, two-thirds were identified before any sepsis-related organ dysfunction. In comparison, the Modified Early Warning Score, which has been used clinically for septic shock prediction, achieved a lower AUC of 0.73 (95 CI, 0.71 to 0.76). A routine screening protocol based on the presence of two of the systemic inflammatory response syndrome criteria, suspicion of infection, and either hypotension or hyperlactatemia achieved a lower sensitivity of 0.74 at a comparable specificity of 0.64. 
Continuous sampling of data from the electronic health records and calculation of TREWScore may allow clinicians to identify patients at risk for septic shock and provide earlier interventions that would prevent or mitigate the associated morbidity and mortality.", "Abstract Introduction Early warning scores (EWS) are recommended as part of the early recognition and response to patient deterioration. The Royal College of Physicians recommends the use of a National Early Warning Score (NEWS) for the routine clinical assessment of all adult patients. Methods We tested the ability of NEWS to discriminate patients at risk of cardiac arrest, unanticipated intensive care unit (ICU) admission or death within 24h of a NEWS value and compared its performance to that of 33 other EWSs currently in use, using the area under the receiver-operating characteristic (AUROC) curve and a large vital signs database ( n =198,755 observation sets) collected from 35,585 consecutive, completed acute medical admissions. Results The AUROCs (95 CI) for NEWS for cardiac arrest, unanticipated ICU admission, death, and any of the outcomes, all within 24h, were 0.722 (0.685–0.759), 0.857 (0.847–0.868), 0.894 (0.887–0.902), and 0.873 (0.866–0.879), respectively. Similarly, the ranges of AUROCs (95 CI) for the other 33 EWSs were 0.611 (0.568–0.654) to 0.710 (0.675–0.745) (cardiac arrest); 0.570 (0.553–0.568) to 0.827 (0.814–0.840) (unanticipated ICU admission); 0.813 (0.802–0.824) to 0.858 (0.849–0.867) (death); and 0.736 (0.727–0.745) to 0.834 (0.826–0.842) (any outcome). Conclusions NEWS has a greater ability to discriminate patients at risk of the combined outcome of cardiac arrest, unanticipated ICU admission or death within 24h of a NEWS value than 33 other EWSs.", "INTRODUCTIONThe Modified Early Warning Score (MEWS) is a simple, physiological score that may allow improvement in the quality and safety of management provided to surgical ward patients. The primary purpose is to prevent delay in intervention or transfer of critically ill patients. PATIENTS AND METHODSA total of 334 consecutive ward patients were prospectively studied. MEWS were recorded on all patients and the primary end-point was transfer to ITU or HDU. RESULTSFifty-seven (17 ) ward patients triggered the call-out algorithm by scoring four or more on MEWS. Emergency patients were more likely to trigger the system than elective patients. Sixteen (5 of the total) patients were admitted to the ITU or HDU. MEWS with a threshold of four or more was 75 sensitive and 83 specific for patients who required transfer to ITU or HDU. CONCLUSIONSThe MEWS in association with a call-out algorithm is a useful and appropriate risk-management tool that should be implemented for all surgical in-patients." ] }
1708.05688
2746260445
One of the most crucial issues in data mining is to model human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, obtained either by observing user interactions or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making the utilised data uncertain to a particular extent. In this contribution, we elaborate on the impact of this human uncertainty when it comes to comparative assessments of different data mining approaches. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and the error probabilities it thus induces for algorithm rankings. For this purpose, we introduce a probabilistic view and prove the existence of those problems mathematically, as well as provide possible solution strategies. We exemplify our theory mainly in the context of recommender systems, along with the metric RMSE as a prominent example of precision quality measures.
The central role of information systems has led to a great deal of research and produced a variety of techniques and approaches @cite_29 . Here, we focus especially on recommender systems, which are comprehensively described in @cite_18 @cite_33 . For comparative assessment, different metrics are used to determine prediction accuracy, such as the root mean squared error (RMSE), the mean absolute error (MAE) and the mean average precision (MAP), along with many others @cite_30 @cite_23 @cite_32 . These accuracy metrics are often criticised @cite_6 , and various researchers suggest that human-computer interaction should be taken more into account @cite_5 @cite_8 . With our contribution, we extend the existing criticism by an additional aspect that has received little discussion so far. Although we exemplify our methodology with the RMSE, the main results of this contribution can easily be adopted for alternative assessment metrics without substantial loss of generality, insofar as they operate on (uncertain) human input.
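For reference, the two most common of these error metrics can be computed in a few lines; the rating arrays below are illustrative.

```python
# The accuracy metrics named above, computed over predicted vs. observed ratings.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

observed  = np.array([4.0, 3.0, 5.0, 2.0, 4.0])   # user-supplied ratings
predicted = np.array([3.8, 3.4, 4.6, 2.5, 4.1])   # recommender output
print(rmse(observed, predicted), mae(observed, predicted))
```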
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_33", "@cite_8", "@cite_29", "@cite_32", "@cite_6", "@cite_23", "@cite_5" ], "mid": [ "2259944928", "2171357718", "2150886314", "2138632244" ], "abstract": [ "We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy productions from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work corresponds to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using computing error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters will improve the total solar power generation forecast of the PV system.", "Recent works in Recommender Systems (RS) have investigated the relationships between the prediction accuracy, i.e. the ability of a RS to minimize a cost function (for instance the RMSE measure) in estimating users’ preferences, and the accuracy of the recommendation list provided to users. State-of-the-art recommendation algorithms, which focus on the minimization of RMSE, have shown to achieve weak results from the recommendation accuracy perspective, and vice versa. In this work we present a novel Bayesian probabilistic hierarchical approach for users’ preference data, which is designed to overcome the limitation of current methodologies and thus to meet both prediction and recommendation accuracy. According to the generative semantics of this technique, each user is modeled as a random mixture over latent factors, which identify users community interests. Each individual user community is then modeled as a mixture of topics, which capture the preferences of the members on a set of items. We provide two dierent formalization of the basic hierarchical model: BH-Forced focuses on rating prediction, while BH-Free models both the popularity of items and the distribution over item ratings. The combined modeling of item popularity and rating provides a powerful framework for the generation of highly accurate recommendations. An extensive evaluation over two popular benchmark datasets reveals the eectiveness and the quality of the proposed algorithms, showing that BH-Free realizes the most satisfactory compromise between prediction and recommendation accuracy with respect to several stateof-the-art competitors.", "In many commercial systems, the 'best bet' recommendations are shown, but the predicted rating values are not. This is usually referred to as a top-N recommendation task, where the goal of the recommender system is to find a few specific items which are supposed to be most appealing to the user. Common methodologies based on error metrics (such as RMSE) are not a natural fit for evaluating the top-N recommendation task. Rather, top-N performance can be directly measured by alternative methodologies based on accuracy metrics (such as precision recall). An extensive evaluation of several state-of-the art recommender algorithms suggests that algorithms optimized for minimizing RMSE do not necessarily perform as expected in terms of top-N recommendation task. Results show that improvements in RMSE often do not translate into accuracy improvements. 
In particular, a naive non-personalized algorithm can outperform some common recommendation approaches and almost match the accuracy of sophisticated algorithms. Another finding is that the very few top popular items can skew the top-N performance. The analysis points out that when evaluating a recommender algorithm on the top-N recommendation task, the test set should be chosen carefully in order to not bias accuracy metrics towards non-personalized solutions. Finally, we offer practitioners new variants of two collaborative filtering algorithms that, regardless of their RMSE, significantly outperform other recommender algorithms in pursuing the top-N recommendation task, with offering additional practical advantages. This comes at surprise given the simplicity of these two methods.", "An important issue for agricultural planning purposes is the accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking for the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMS), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data of an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that M5Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46 and 79.78 ), the lowest average MAE errors (18.12 and 19.42 ), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning. Additional key words: regression trees; neural networks; support vector regression; k-nearest neighbor; multiple linear regression." ] }
1708.05688
2746260445
One of the most crucial issues in data mining is to model human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, obtained either by observing user interactions or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making the utilised data uncertain to a particular extent. In this contribution, we elaborate on the impact of this human uncertainty when it comes to comparative assessments of different data mining approaches. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and the error probabilities it thus induces for algorithm rankings. For this purpose, we introduce a probabilistic view and prove the existence of those problems mathematically, as well as provide possible solution strategies. We exemplify our theory mainly in the context of recommender systems, along with the metric RMSE as a prominent example of precision quality measures.
Probabilistic modelling of human cognition processes is quite common in the field of computational neuroscience. In particular, aspects of human decision-making can be stated as problems of probabilistic inference @cite_7 (often referred to as the "Bayesian brain" paradigm). Besides external influential factors, belief precision is influenced by biological factors such as the current activity of dopamine cells @cite_21 . In other words, human decisions can be seen as uncertain quantities by the very nature of the underlying cognition mechanisms. Recently, this idea has been adopted for various probabilistic approaches to neural coding @cite_16 . In parallel, many methods of predictive data mining employ probabilistic (e.g. Bayesian) models for approximating mechanisms of human decisions based on prior observations as training data. At the same time, common evaluation approaches still use non-random quality metrics and thus do not account for possible decision ranking errors in a natural way. As a consequence, we systematically treat both the observed user responses and the resulting quality of the evaluated predictor as random quantities. This allows us to elaborate on the impact of human uncertainty and provide solutions for a more differentiated and objective assessment of predictive models.
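The sketch below illustrates this probabilistic view on a toy example: user ratings are modelled as noisy draws around latent opinions, so the RMSE of a predictor, and hence the ranking of two predictors, becomes a random quantity. The Gaussian noise model and all numbers are assumptions for illustration, not the paper's exact construction.

```python
# Monte-Carlo sketch: when ratings are random, so is RMSE, and the A-vs-B
# ranking of two predictors carries an error probability.
import numpy as np

rng = np.random.default_rng(0)
latent = np.array([4.0, 3.0, 5.0, 2.0, 4.0])     # latent user opinions (assumed)
sigma = 0.5                                       # assumed rating volatility
pred_a = np.array([3.9, 3.2, 4.7, 2.4, 4.0])      # predictor A output
pred_b = np.array([3.7, 3.1, 4.9, 2.2, 4.2])      # predictor B output

def rmse(y, p):
    return np.sqrt(np.mean((y - p) ** 2))

flips, trials = 0, 10000
for _ in range(trials):
    ratings = rng.normal(latent, sigma)           # one plausible response set
    if rmse(ratings, pred_b) < rmse(ratings, pred_a):
        flips += 1

# A probability far from 0 or 1 means the A-vs-B ranking is unreliable.
print(f"P(B beats A) = {flips / trials:.3f}")
```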
{ "cite_N": [ "@cite_21", "@cite_7", "@cite_16" ], "mid": [ "2521579474", "2114157818", "2962861173", "1511986666" ], "abstract": [ "Objective Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. @PARASPLIT Materials and Methods The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) to words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. @PARASPLIT Results Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16 , and recall 35 . This can be improved to 0.90, 24 , and 47 ( P < 10−20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., “critical care,” “pneumonia,” “neurologic evaluation”). @PARASPLIT Discussion Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. @PARASPLIT Conclusion Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support.", "Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BAYESSTORE employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. 
We present BAYESSTORE's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing.", "We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (for example, if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients that have atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.", "Most tasks require a person or an automated system to reason -- to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step in letting vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade that build the map leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data with the dense information and appearance carried by the images, estimating a visibility-consistent map upon the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from Computer Graphics.
Mapping from laser sensors is a well-studied research area in Robotics; in early studies the map was estimated in two dimensions @cite_14 , while in recent years the prevalent approach has been to estimate it in 3D, thanks to advances in algorithms, processing and sensors. Mapping can be pursued together with robot self-localization, leading to Simultaneous Localization and Mapping (SLAM) systems; these algorithms do not focus on the mapping part, as they reconstruct a sparse point-based map of the environment, while in our case we aim at reconstructing a dense representation of it.
{ "cite_N": [ "@cite_14" ], "mid": [ "63626580", "2172103629", "2017995647", "2118429180" ], "abstract": [ "This paper deals with the determination of the position and orientation of a mobile robot from distance measurements provided by a belt of onboard ultrasonic sensors. The environment is assumed to be two-dimensional, and a map of its landmarks is available to the robot. In this context, classical localization methods have three main limitations. First, each data point provided by a sensor must be associated with a given landmark. This data-association step turns out to be extremely complex and time-consuming, and its results can usually not be guaranteed. The second limitation is that these methods are based on linearization, which makes them inherently local. The third limitation is their lack of robustness to outliers due, e.g., to sensor malfunctions or outdated maps. By contrast, the method proposed here, based on interval analysis, bypasses the data-association step, handles the problem as nonlinear and in a global way and is (extraordinarily) robust to outliers.", "Exploration involving mapping and concurrent localization in an unknown environment is a pervasive task in mobile robotics. In general, the accuracy of the mapping process depends directly on the accuracy of the localization process. This paper address the problem of maximizing the accuracy of the map building process during exploration by adaptively selecting control actions that maximize localisation accuracy. The map building and exploration task is modeled using an Occupancy Grid (OG) with concurrent localisation performed using a feature-based Simultaneous Localisation And Mapping (SLAM) algorithm. Adaptive sensing aims at maximizing the map information by simultaneously maximizing the expected Shannon information gain (Mutual Information) on the OG map and minimizing the uncertainty of the vehicle pose and map feature uncertainty in the SLAM process. The resulting map building system is demonstrated in an indoor environment using data from a laser scanner mounted on a mobile platform.", "Automatically building maps from sensor data is a necessary and fundamental skill for mobile robots; as a result, considerable research attention has focused on the technical challenges inherent in the mapping problem. While statistical inference techniques have led to computationally efficient mapping algorithms, the next major challenge in robotic mapping is to automate the data collection process. In this paper, we address the problem of how a robot should plan to explore an unknown environment and collect data in order to maximize the accuracy of the resulting map. We formulate exploration as a constrained optimization problem and use reinforcement learning to find trajectories that lead to accurate maps. We demonstrate this process in simulation and show that the learned policy not only results in improved map building, but that the learned policy also transfers successfully to a real robot exploring on MIT campus.", "Discusses a significant open problem in mobile robotics: simultaneous map building and localization, which the authors define as long-term globally referenced position estimation without a priori information. This problem is difficult because of the following paradox: to move precisely, a mobile robot must have an accurate environment map; however, to build an accurate map, the mobile robot's sensing locations must be known precisely. 
In this way, simultaneous map building and localization can be seen to present a question of 'which came first, the chicken or the egg?' (The map or the motion?) When using ultrasonic sensing, to overcome this issue the authors equip the vehicle with multiple servo-mounted sonar sensors, to provide a means in which a subset of environment features can be precisely learned from the robot's initial location and subsequently tracked to provide precise positioning. >" ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step in letting vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade that build the map leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data with the dense information and appearance carried by the images, estimating a visibility-consistent map upon the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from Computer Graphics.
Some approaches estimate a 2.5D map of the environment by populating a grid on the ground plane with the corresponding cell heights @cite_24 . These maps are useful for robot navigation, but neglect most of the environment details. A more coherent representation of the scene is volumetric, i.e., the space is partitioned into small parts classified as free or occupied, and, in some cases, unknown, and the boundary between occupied and free space represents the 3D map. In laser-based mapping the most common volumetric representation is voxel-based, due to its good trade-off between expressiveness and ease of implementation @cite_6 ; the drawback of this representation is its large memory consumption and, therefore, its lack of scalability. Many efforts have been directed at improving the scalability and accuracy of voxel-based mapping. Ryde and Hu @cite_29 store only occupied voxels, while Dryanovski et al. @cite_15 store both occupied and free voxels, in order to also represent the uncertainty of unknown space. The state-of-the-art system OctoMap @cite_32 , and its extension @cite_11 , are able to efficiently store large maps by including octree indexing to add flexibility to the framework.
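A minimal sketch of the probabilistic occupancy update underlying such voxel and octree maps is given below; the inverse sensor model and clamping bounds are typical illustrative values, not the exact constants of any cited system.

```python
# OctoMap-style occupancy fusion: each cell stores the log-odds of being
# occupied and is updated per observation; clamping keeps the map adaptable.
import math

L_HIT, L_MISS = math.log(0.7 / 0.3), math.log(0.4 / 0.6)   # inverse sensor model
L_MIN, L_MAX = -2.0, 3.5                                   # clamping bounds

def update_cell(log_odds, hit):
    """Fuse one observation (hit = endpoint in cell, miss = ray passed through)."""
    log_odds += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, log_odds))

def occupancy(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

l = 0.0                      # unknown cell: p = 0.5
for hit in (True, True, False, True):
    l = update_cell(l, hit)
print(occupancy(l))          # probability after the four scans
```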
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_6", "@cite_24", "@cite_15", "@cite_11" ], "mid": [ "2784112303", "2726894975", "1979061520", "2133844819" ], "abstract": [ "We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use one single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on-par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy maps frameworks. Our SLAM system can run at 10–40 Hz on a modern quadcore CPU, without the need for massive parallelization on a GPU. We, furthermore, demonstrate a probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT @math ).", "In this paper, we present an improved octree-based mapping framework for autonomous navigation of mobile robots. Octree is best known for its memory efficiency for representing large-scale environments. However, existing implementations, including the state-of-the-art OctoMap [1], are computationally too expensive for online applications that require frequent map updates and inquiries. Utilizing the sparse nature of the environment, we propose a ray tracing method with early termination for efficient probabilistic map update. We also propose a divide-and-conquer volume occupancy inquiry method which serves as the core operation for generation of free-space configurations for optimization-based trajectory generation. We experimentally demonstrate that our method maintains the same storage advantage of the original OctoMap, but being computationally more efficient for map update and occupancy inquiry. Finally, by integrating the proposed map structure in a complete navigation pipeline, we show autonomous quadrotor flight through complex environments.", "This paper proposes an approach to real-time dense localisation and mapping that aims at unifying two different representations commonly used to define dense models. On one hand, much research has looked at 3D dense model representations using voxel grids in 3D. On the other hand, image-based key-frame representations for dense environment mapping have been developed. Both techniques have their relative advantages and disadvantages which will be analysed in this paper. In particular each representation's space-size requirements, their effective resolution, the computation efficiency, their accuracy and robustness will be compared. This paper then proposes a new model which unifies various concepts and exhibits the main advantages of each approach within a common framework. One of the main results of the proposed approach is its ability to perform large scale reconstruction accurately at the scale of mapping a building.", "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. 
In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step in letting vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade that build the map leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data with the dense information and appearance carried by the images, estimating a visibility-consistent map upon the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from Computer Graphics.
Voxel-based approaches usually produce unappealing reconstructions, due to the voxelization of the space, and they need a very high resolution to capture fine details of the scene, trading off their efficiency. In the Computer Vision community, different volumetric representations have been explored; in particular, many algorithms adopt the 3D Delaunay triangulation @cite_18 @cite_22 @cite_2 @cite_4 . The Delaunay triangulation is self-adaptive according to the density of the data, i.e., the points, without any indexing policy; moreover, its structure is made up of tetrahedra, from which it is easy to extract a triangular mesh, widely used in the Computer Graphics community to accurately model objects. These algorithms are consistent with visibility, i.e., they mark the tetrahedra as free space or occupied according to the camera-to-point rays, assuming that a tetrahedron is empty if at least one ray intersects it.
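The sketch below illustrates this visibility-based carving on synthetic points: a 3D Delaunay triangulation is built and every tetrahedron traversed by a camera-to-point segment is marked as free. Sampling points along the ray stands in for the exact ray-tetrahedron walk used by the cited methods, and all data are synthetic.

```python
# Visibility-consistent carving of a 3D Delaunay triangulation (simplified).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.uniform(-1.0, 1.0, size=(60, 3))    # lidar/SfM points (synthetic)
tri = Delaunay(points)
free = np.zeros(len(tri.simplices), dtype=bool)

camera = np.array([0.0, 0.0, 3.0])
for p in points:
    # Sample the camera->point segment and mark traversed tetrahedra as free
    # (stop just short of the point so its incident cells stay occupied).
    for t in np.linspace(0.0, 0.98, 50):
        cell = int(tri.find_simplex(camera + t * (p - camera)))
        if cell >= 0:                # -1 means the sample is outside the hull
            free[cell] = True

print(f"{free.sum()} of {len(free)} tetrahedra carved as free space")
```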
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_2" ], "mid": [ "2020429267", "2787366651", "2211977492", "2888702972" ], "abstract": [ "We present a novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices. As the depth data of these sensors is noisy, truncated signed distance fields are typically used to regularize out the noise, which unfortunately leads to over-smoothed results. In our approach, we leverage RGB data to refine these reconstructions through shading cues, as color input is typically of much higher resolution than the depth data. As a result, we obtain reconstructions with high geometric detail, far beyond the depth resolution of the camera itself. Our core contribution is shading-based refinement directly on the implicit surface representation, which is generated from globally-aligned RGB-D images. We formulate the inverse shading problem on the volumetric distance field, and present a novel objective function which jointly optimizes for fine-scale surface geometry and spatially-varying surface reflectance. In order to enable the efficient reconstruction of sub-millimeter detail, we store and process our surface using a sparse voxel hashing scheme which we augment by introducing a grid hierarchy. A tailored GPU-based Gauss-Newton solver enables us to refine large shape models to previously unseen resolution within only a few seconds.", "In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 256^3 by recovering the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects.", "Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and, a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we propose to use a Delaunay triangulation of Edge-Points, which are the 3D points corresponding to image edges. These points constrain the edges of the 3D Delaunay triangulation to real-world edges. Besides the use of the Edge-Points, a second contribution of this paper is the Inverse Cone Heuristic that preemptively avoids the creation of artifacts in the reconstructed manifold surface. 
We force the reconstruction of a manifold surface since it makes it possible to apply computer graphics or photometric refinement algorithms to the output mesh. We evaluated our approach on four real sequences of the public available KITTI dataset by comparing the incremental reconstruction against Velodyne measurements.", "In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of @math by recovering the occluded missing regions. The key idea is to combine the generative capabilities of 3D encoder-decoder and the conditional adversarial networks framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade, building the map by leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed with cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data and the dense information and appearance carried by the images in estimating a visibility-consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms in Computer Graphics.
Among image-based dense photoconsistent algorithms, the mesh-based algorithms @cite_4 @cite_1 have been proven to estimate very accurate models and to be scalable in large-scale environments. They bootstrap from an initial mesh estimated with a volumetric method such as @cite_22 or @cite_0 , and they refine it by minimizing a photometric energy function defined over the images. The most relevant drawback arises when moving objects appear in the images: their pixels affect the refinement process, leading to inaccurate results.
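To make the photometric refinement step concrete, the following deliberately simplified 1D numpy sketch refines a per-pixel disparity field by gradient descent on a photometric error. This is a toy stand-in for the variational, mesh-based refinement of @cite_4 @cite_1 (the 1D setting, the function name, and the step size are assumptions for illustration, not the actual algorithms, which optimize vertex positions over many views).

```python
import numpy as np

def photometric_refine(I1, I2, d, iters=200, lr=0.5):
    """Gradient descent on the photometric error
    E(d) = sum_u (I2(u + d(u)) - I1(u))^2
    over a per-pixel disparity field d (1D toy analogue of
    vertex-wise mesh refinement over image pairs)."""
    u = np.arange(len(I1), dtype=float)
    dI2 = np.gradient(I2)                           # image gradient of I2
    for _ in range(iters):
        warped = np.interp(u + d, u, I2)            # I2 resampled at u + d(u)
        residual = warped - I1
        grad = residual * np.interp(u + d, u, dI2)  # dE/dd (up to a factor 2)
        d = d - lr * grad                           # descent step
    return d

# Toy usage: I2 is I1 deformed by a smooth ~2-pixel disparity.
u = np.arange(256, dtype=float)
I1 = np.sin(u / 7.0) + 0.3 * np.sin(u / 3.0)
true_d = 2.0 + 0.5 * np.sin(u / 40.0)
I2 = np.interp(u - true_d, u, I1)                   # so I2(u + true_d(u)) ~ I1(u)
d_hat = photometric_refine(I1, I2, d=np.zeros_like(u))
err = np.mean(np.abs(d_hat - true_d))               # shrinks toward 0
```

The sketch also makes the drawback above visible: pixels belonging to a moving object violate the brightness-constancy assumption behind the residual, so their gradient contributions pull the geometry toward a wrong configuration.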
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_4", "@cite_22" ], "mid": [ "2129404737", "1951289974", "2951054736", "2064451896" ], "abstract": [ "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.", "We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.", "We propose a novel approach for optical flow estimation , targeted at large displacements with significant oc-clusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries -- two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. 
The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.", "In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source." ] }
1708.05543
2747753476
In the era of autonomous driving, urban mapping represents a core step to let vehicles interact with the urban context. Successful mapping algorithms have been proposed in the last decade building the map leveraging on data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even if most surveying vehicles for mapping are endowed by cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we join the accuracy of the 3D lidar data, and the dense information and appearance carried by the images, in estimating a visibility consistent map upon the lidar measurements, and refining it photometrically through the acquired images. We evaluate the proposed framework against the KITTI dataset and we show the performance improvement with respect to two state of the art urban mapping algorithms, and two widely used surface reconstruction algorithms in Computer Graphics.
In our paper, in order to filter out moving objects from the lidar data and the images, we need to detect them explicitly. A laser-based moving object detection algorithm was proposed by Petrovskaya and Thrun @cite_35 to detect moving vehicles using a model-based vehicle fitting algorithm; the method performs well, but it needs models for the objects. Xiao @cite_33 and Vallet @cite_17 model the physical scanning mechanism of the lidar using Dempster-Shafer Theory (DST), evaluating the occupancy of a scan and comparing the consistency among scans. A further improvement of these algorithms was proposed by Postica @cite_9 , where the authors include an image-based validation step which filters out many false positives. Pure image-based moving object detection has been investigated for static-camera videos (see @cite_13 ), also in the jittering case @cite_31 ; however, it is still a very open problem when dealing with moving cameras.
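As a rough illustration of the DST-based occupancy reasoning, the Python sketch below combines the evidence of two scans over the frame of discernment {empty, occupied}. Dempster's rule of combination is standard; the mass values and the way conflict is read as moving-object evidence are simplifying assumptions on our part, whereas the per-scan mass assignment in @cite_33 @cite_17 is considerably more elaborate.

```python
def combine_dst(m1, m2):
    """Dempster's rule of combination over the frame {empty, occupied}.
    Masses are dicts with keys 'E' (empty), 'O' (occupied) and
    'U' (unknown, i.e. the whole frame {empty, occupied})."""
    conflict = m1['E'] * m2['O'] + m1['O'] * m2['E']
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {
        'E': (m1['E'] * m2['E'] + m1['E'] * m2['U'] + m1['U'] * m2['E']) / k,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / k,
        'U': (m1['U'] * m2['U']) / k,
    }

# One scan strongly says 'empty', a later scan says 'occupied': a region
# empty in one scan but occupied in another is a moving-object candidate.
scan1 = {'E': 0.8, 'O': 0.1, 'U': 0.1}
scan2 = {'E': 0.2, 'O': 0.5, 'U': 0.3}
fused = combine_dst(scan1, scan2)
moving_evidence = scan1['E'] * scan2['O']   # the conflicting mass
```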
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_9", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2527478282", "1910014366", "2113859792", "2791003324" ], "abstract": [ "Detecting moving objects in dynamic scenes from sequences of lidar scans is an important task in object tracking, mapping, localization, and navigation. Many works focus on changes detection in previously observed scenes, while a very limited amount of literature addresses moving objects detection. The state-of-the-art method exploits Dempster-Shafer Theory to evaluate the occupancy of a lidar scan and to discriminate points belonging to the static scene from moving ones. In this paper we improve both speed and accuracy of this method by discretizing the occupancy representation, and by removing false positives through visual cues. Many false positives lying on the ground plane are also removed thanks to a novel ground plane removal algorithm. Efficiency is improved through an octree indexing strategy. Experimental evaluation against the KITTI public dataset shows the effectiveness of our approach, both qualitatively and quantitatively with respect to the state- of-the-art.", "We present a new approach to detection and tracking of moving objects with a 2D laser scanner for autonomous driving applications. Objects are modelled with a set of rigidly attached sample points along their boundaries whose positions are initialized with and updated by raw laser measurements, thus allowing a non-parametric representation that is capable of representing objects independent of their classes and shapes. Detection and tracking of such object models are handled in a theoretically principled manner as a Bayes filter where the motion states and shape information of all objects are represented as a part of a joint state which includes in addition the pose of the sensor and geometry of the static part of the world. We derive the prediction and observation models for the evolution of the joint state, and describe how the knowledge of the static local background helps in identifying dynamic objects from static ones in a principled and straightforward way. Dealing with raw laser points poses a significant challenge to data association. We propose a hierarchical approach, and present a new variant of the well-known Joint Compatibility Branch and Bound algorithm to respect and take advantage of the constraints of the problem introduced through correlations between observations. Finally, we calibrate the system systematically on real world data containing 7,500 labelled object examples and validate on 6,000 test cases. We demonstrate its performance over an existing industry standard targeted at the same problem domain as well as a classical approach to model-free object tracking.", "Thanks to the development of Mobile mapping systems (MMS), street object recognition, classification, modelling and related studies have become hot topics recently. There has been increasing interest in detecting changes between mobile laser scanning (MLS) point clouds in complex urban areas. A method based on the consistency between the occupancies of space computed from different datasets is proposed. First occupancy of scan rays (empty, occupied, unknown) are defined while considering the accuracy of measurement and registration. Then the occupancy of scan rays are fused using the Weighted Dempster‐Shafer theory (WDST). 
Finally, the consistency between different datasets is obtained by comparing the occupancy at points from one dataset with the fused occupancy of neighbouring rays from the other dataset. Change detection results are compared with a conventional point to triangle (PTT) distance method. Changes at point level are detected fully automatically. The proposed approach allows to detect changes at large scales in urban scenes with fine detail and more importantly, distinguish real changes from occlusions.", "This paper addresses the problem of vehicle detection using Deep Convolutional Neural Network (ConvNet) and 3D-LIDAR data with application in advanced driver assistance systems and autonomous driving. A vehicle detection system based on the Hypothesis Generation (HG) and Verification (HV) paradigms is proposed. The data inputted to the system is a point cloud obtained from a 3D-LIDAR mounted on board an instrumented vehicle, which is transformed to a Dense-depth Map (DM). The proposed solution starts by removing ground points followed by point cloud segmentation. Then, segmented obstacles (object hypotheses) are projected onto the DM. Bounding boxes are fitted to the segmented objects as vehicle hypotheses (the HG step). Finally, the bounding boxes are used as inputs to a ConvNet to classify verify the hypotheses of belonging to the category ‘vehicle’ (the HV step). In this paper, we present an evaluation of ConvNet using LIDAR-based DMs and also the impact of domain-specific data augmentation on vehicle detection performance. To train and to evaluate the proposed vehicle detection system, the KITTI Benchmark Suite was used." ] }
1708.05582
2749333275
This paper presents models for detecting agreement/disagreement in online discussions. In this work we show that by using a Siamese-inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta-thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on the ABCD dataset show that by fusing lexical and word embedding features, our model achieves the state-of-the-art performance of 0.804 average F1 score. We also show that the model trained on the ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
Previous work in this field focused largely on spoken dialogues. @cite_13 @cite_27 @cite_16 used spurt-level agreement annotations from the ICSI corpus @cite_8 . @cite_30 presents the detection of agreements in multi-party conversations using the AMI meeting corpus @cite_28 . @cite_31 presents a conditional random field based approach for detecting agreement/disagreement between speakers in English broadcast conversations.
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_28", "@cite_27", "@cite_31", "@cite_16", "@cite_13" ], "mid": [ "2161354826", "1558643924", "1569447338", "1591607137" ], "abstract": [ "We present Conditional Random Fields based approaches for detecting agreement disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena. Various lexical, structural, durational, and prosodic features are explored. We compare the performance when using features extracted from automatically generated annotations against that when using human annotations. We investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches, random downsampling and ensemble downsampling. Overall, our approach achieves 79.2 (precision), 50.5 (recall), 61.7 (F1) for agreement detection and 69.2 (precision), 46.9 (recall), and 55.9 (F1) for disagreement detection, on the English broadcast conversation data.", "This paper provides a progress report on ICSI s Meeting Project, including both the data collected and annotated as part of the pro-ject, as well as the research lines such materials support. We include a general description of the official ICSI Meeting Corpus , as currently available through the Linguistic Data Consortium, discuss some of the existing and planned annotations which augment the basic transcripts provided there, and describe several research efforts that make use of these materials. The corpus supports wide-ranging efforts, from low-level processing of the audio signal (including automatic speech transcription, speaker tracking, and work on far-field acoustics) to higher-level analyses of meeting structure, content, and interactions (such as topic and sentence segmentation, and automatic detection of dialogue acts and meeting hot spots ).", "To support multi-disciplinary research in the AMI (Augmented Multi-party Interaction) project, a 100 hour corpus of meetings is being collected. This corpus is being recorded in several instrumented rooms equipped with a variety of microphones, video cameras, electronic pens, presentation slide capture and white-board capture devices. As well as real meetings, the corpus contains a significant proportion of scenario-driven meetings, which have been designed to elicit a rich range of realistic behaviors. To facilitate research, the raw data are being annotated at a number of levels including speech transcriptions, dialogue acts and summaries. The corpus is being distributed using a web server designed to allow convenient browsing and download of multimedia content and associated annotations. This article first overviews AMI research themes, then discusses corpus design, as well as data collection, annotation and distribution.", "We have collected a corpus of data from natural meetings that occurred at the International Computer Science Institute (ICSI) in Berkeley, California over the last three years. The corpus contains audio recorded simultaneously from head-worn and table-top microphones, word-level transcripts of meetings, and various metadata on participants, meetings, and hardware. Such a corpus supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more. We present details on the contents of the corpus, as well as rationales for the decisions that led to its configuration. 
The corpus were delivered to the Linguistic Data Consortium (LDC)." ] }
1708.05582
2749333275
This paper presents models for detecting agreement/disagreement in online discussions. In this work we show that by using a Siamese-inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta-thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on the ABCD dataset show that by fusing lexical and word embedding features, our model achieves the state-of-the-art performance of 0.804 average F1 score. We also show that the model trained on the ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
Recently, researchers have turned their attention towards (dis)agreement detection in online discussions. Prior work was geared towards 2-way classification of agreement versus disagreement. @cite_49 used various sentiment, emotional and durational features to detect local and global (dis)agreement in discussion forums. @cite_38 performed (dis)agreement detection on annotated posts from the Internet Argument Corpus (IAC) @cite_37 . They investigated various manually labelled features, which are however difficult to reproduce as they are not annotated in other datasets. To benchmark the results, we have also incorporated the IAC corpus in our experiments. Quite recently, @cite_6 proposed a 3-way classification by exploiting meta-thread structures and accommodation between participants. They also proposed a naturally occurring dataset, ABCD (Agreement by Create Debaters), which is about 25 times larger than prior existing corpora. We have trained our classifier on this larger dataset. @cite_9 proposed (dis)agreement detection with an isotonic Conditional Random Fields (isotonic CRF) based sequential model. @cite_46 proposed features motivated by theoretical predictions to perform (dis)agreement detection. However, they used hand-crafted patterns as features, and these patterns miss some real-world scenarios, reducing the performance of the classifier.
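A minimal PyTorch sketch of a Siamese-style pair encoder for this 3-way task is given below. It is a generic illustration under our assumptions: the class name, layer sizes, and the concatenation-based fusion are illustrative choices, not the exact architecture of the paper or of the cited works.

```python
import torch
import torch.nn as nn

class SiameseAgreement(nn.Module):
    """Encode a quote and its response with shared weights, then
    classify the pair as agree / disagree / none (3-way)."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.clf = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def encode(self, tokens):
        _, (h, _) = self.encoder(self.emb(tokens))   # shared encoder
        return h[-1]                                 # final hidden state

    def forward(self, quote, response):
        q, r = self.encode(quote), self.encode(response)
        # Fuse both views plus element-wise interaction features.
        pair = torch.cat([q, r, torch.abs(q - r), q * r], dim=-1)
        return self.clf(pair)

# Toy usage with random token ids.
model = SiameseAgreement(vocab_size=1000)
quote = torch.randint(1, 1000, (8, 20))     # batch of 8, length 20
resp = torch.randint(1, 1000, (8, 20))
logits = model(quote, resp)                 # shape (8, 3)
```

Weight sharing between the two branches is what lets such a model exploit the quote-response thread structure without hand-crafted pairwise features.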
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_9", "@cite_6", "@cite_49", "@cite_46" ], "mid": [ "2250454469", "2964079262", "2161354826", "2251147786" ], "abstract": [ "We study the problem of agreement and disagreement detection in online discussions. An isotonic Conditional Random Fields (isotonic CRF) based sequential model is proposed to make predictions on sentence- or segment-level. We automatically construct a socially-tuned lexicon that is bootstrapped from existing general-purpose sentiment lexicons to further improve the performance. We evaluate our agreement and disagreement tagging model on two disparate online discussion corpora -- Wikipedia Talk pages and online debates. Our model is shown to outperform the state-of-the-art approaches in both datasets. For example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for agreement and disagreement detection, when a linear chain CRF obtains 0.58 and 0.56 for the discussions on Wikipedia Talk pages.", "We study the problem of agreement and disagreement detection in online discussions. An isotonic Conditional Random Fields (isotonic CRF) based sequential model is proposed to make predictions on sentence- or segment-level. We automatically construct a socially-tuned lexicon that is bootstrapped from existing general-purpose sentiment lexicons to further improve the performance. We evaluate our agreement and disagreement tagging model on two disparate online discussion corpora ‐ Wikipedia Talk pages and online debates. Our model is shown to outperform the state-of-the-art approaches in both datasets. For example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for agreement and disagreement detection, when a linear chain CRF obtains 0.58 and 0.56 for the discussions on Wikipedia Talk pages.", "We present Conditional Random Fields based approaches for detecting agreement disagreement between speakers in English broadcast conversation shows. We develop annotation approaches for a variety of linguistic phenomena. Various lexical, structural, durational, and prosodic features are explored. We compare the performance when using features extracted from automatically generated annotations against that when using human annotations. We investigate the efficacy of adding prosodic features on top of lexical, structural, and durational features. Since the training data is highly imbalanced, we explore two sampling approaches, random downsampling and ensemble downsampling. Overall, our approach achieves 79.2 (precision), 50.5 (recall), 61.7 (F1) for agreement detection and 69.2 (precision), 46.9 (recall), and 55.9 (F1) for disagreement detection, on the English broadcast conversation data.", "We investigate the novel task of online dispute detection and propose a sentiment analysis solution to the problem: we aim to identify the sequence of sentence-level sentiments expressed during a discussion and to use them as features in a classifier that predicts the DISPUTE NON-DISPUTE label for the discussion as a whole. We evaluate dispute detection approaches on a newly created corpus of Wikipedia Talk page disputes and find that classifiers that rely on our sentiment tagging features outperform those that do not. The best model achieves a very promising F1 score of 0.78 and an accuracy of 0.80." ] }
1708.05582
2749333275
This paper presents models for detecting agreement/disagreement in online discussions. In this work we show that by using a Siamese-inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta-thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on the ABCD dataset show that by fusing lexical and word embedding features, our model achieves the state-of-the-art performance of 0.804 average F1 score. We also show that the model trained on the ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP).
(Dis)agreement detection is related to other similar NLP tasks like stance detection and argument mining, but it is not exactly the same. Stance detection is the task of identifying whether the author of a text is in favor of, against, or neutral towards a target, while argument mining focuses on tasks like the automatic extraction of arguments from free text, argument proposition classification and argumentative parsing @cite_24 @cite_3 . Recently, there have been studies on how people back up their stances when arguing, where comments are classified as either attacking or supporting a set of pre-defined arguments @cite_39 . These tasks (stance detection, argument mining) are not independent but share common features, owing to which they benefit from common building blocks like sentiment detection, textual entailment and sentence similarity @cite_39 @cite_32 .
{ "cite_N": [ "@cite_24", "@cite_32", "@cite_3", "@cite_39" ], "mid": [ "2250730878", "2786918876", "2347127863", "2437771934" ], "abstract": [ "Argumentation mining and stance classification were recently introduced as interesting tasks in text mining. In this paper, a novel framework for argument tagging based on topic modeling is proposed. Unlike other machine learning approaches for argument tagging which often require large set of labeled data, the proposed model is minimally supervised and merely a one-to-one mapping between the pre-defined argument set and the extracted topics is required. These extracted arguments are subsequently exploited for stance classification. Additionally, a manuallyannotated corpus for stance classification and argument tagging of online news comments is introduced and made available. Experiments on our collected corpus demonstrate the benefits of using topic-modeling for argument tagging. We show that using Non-Negative Matrix Factorization instead of Latent Dirichlet Allocation achieves better results for argument classification, close to the results of a supervised classifier. Furthermore, the statistical model that leverages automatically-extracted arguments as features for stance classification shows promising results.", "The task of stance detection is to determine whether someone is in favor or against a certain topic. A person may express the same stance towards a topic using positive or negative words. In this paper, several features and classifiers are explored to find out the combination that yields the best performance for stance detection. Due to the large number of features, ReliefF feature selection method was used to reduce the large dimensional feature space and improve the generalization capabilities. Experimental analyses were performed on five datasets, and the obtained results revealed that a majority vote classifier of the three classifiers: Random Forest, linear SVM and Gaussian Naive Bayes classifiers can be adopted for stance detection task.", "We can often detect from a person’s utterances whether he or she is in favor of or against a given target entity—one’s stance toward the target. However, a person may express the same stance toward a target by using negative or positive language. Here for the first time we present a dataset of tweet–target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that although knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification.", "Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton to be \"positive\", negative\" or \"neutral\". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. 
This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving performance second best only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results." ] }
1708.05587
2749420821
We study models of weighted exponential random graphs in the large network limit. These models have recently been proposed to model weighted network data arising from a host of applications, including socio-econometric data such as migration flows and neuroscience desmarais2012statistical . Analogous to fundamental results derived for standard (unweighted) exponential random graph models in the work of Chatterjee and Diaconis, we derive limiting results for the structure of these models as @math , complementing the results in the work of yin2016phase,demuse2017phase in the context of finitely supported base measures. We also derive sufficient conditions for continuity of functionals in the specification of the model, including conditions on nodal covariates. Finally, we include a number of open problems to spur further understanding of this model, especially in the context of applications.
Weighted exponential random graph models were theoretically analyzed in @cite_14 when the base measure is supported on a bounded interval, and in @cite_18 the authors analyzed the phase transition phenomenon for a class of base measures supported on @math . In @cite_14 the "no phase transition" result for the standard normal base measure was proved for the directed edge-two-star model. Motivated by applications @cite_6 , we extend this work to the case where the base measure is supported on the whole real line. We show that for a general base measure the model does not suffer degeneracy in the "high-temperature" regime. Also, via an explicit calculation, we show that for the standard normal distribution the undirected edge-two-star model does not admit a phase transition. Finally, under certain assumptions, we establish the continuity of homomorphism densities of node-weighted graphs in the cut metric. We have only begun an analysis of this model and, for the sake of concreteness, after the general setting of the main result, we explore the ramifications for a few base measures. Other examples of base measures of relevance in applications, including count data, can be found in @cite_6 . It would be interesting to explore these specific models and rigorously understand degeneracy (or lack thereof) for various specifications motivated by domain applications.
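For readers less familiar with the model, a schematic rendering of the undirected edge-two-star specification discussed above is given below. This is an assumption on our part: scaling and normalization conventions differ across @cite_14 @cite_18 , so the display is indicative of the general shape of the model rather than a quotation of either paper.

```latex
% Weighted ERGM with symmetric edge weights x = (x_{ij}) drawn from a
% base measure \mu, with an edge term t_1 and a two-star term t_2:
p_n(dx) \;\propto\;
  \exp\!\Big( n^{2}\big( \beta_1\, t_1(x) + \beta_2\, t_2(x) \big) \Big)
  \prod_{1 \le i < j \le n} \mu(dx_{ij}),
\qquad
t_1(x) = \frac{1}{n^{2}} \sum_{i,j} x_{ij},
\qquad
t_2(x) = \frac{1}{n^{3}} \sum_{j} \Big( \sum_{i} x_{ij} \Big)^{2}.
```

Degeneracy questions then amount to whether the variational problem behind the normalizing constant has a unique, non-trivial optimizer as @math varies.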
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_6" ], "mid": [ "2138359578", "2028633127", "2622403352", "1804927206" ], "abstract": [ "The “classical” random graph models, in particular G(n,p), are “homogeneous,” in the sense that the degrees (for example) tend to be concentrated around a typical value. Many graphs arising in the real world do not have this property, having, for example, power-law degree distributions. Thus there has been a lot of recent interest in defining and studying “inhomogeneous” random graph models. One of the most studied properties of these new models is their “robustness”, or, equivalently, the “phase transition” as an edge density parameter is varied. For G(n,p), p = c n, the phase transition at c = 1 has been a central topic in the study of random graphs for well over 40 years. Many of the new inhomogeneous models are rather complicated; although there are exceptions, in most cases precise questions such as determining exactly the critical point of the phase transition are approachable only when there is independence between the edges. Fortunately, some models studied have this property already, and others can be approximated by models with independence. Here we introduce a very general model of an inhomogeneous random graph with (conditional) independence between the edges, which scales so that the number of edges is linear in the number of vertices. This scaling corresponds to the p = c n scaling for G(n,p) used to study the phase transition; also, it seems to be a property of many large real-world graphs. Our model includes as special cases many models previously studied. We show that, under one very weak assumption (that the expected number of edges is “what it should be”), many properties of the model can be determined, in particular the critical point of the phase transition, and the size of the giant component above the transition. We do this by relating our random graphs to branching processes, which are much easier to analyze. We also consider other properties of the model, showing, for example, that when there is a giant component, it is “stable”: for a typical random graph, no matter how we add or delete o(n) edges, the size of the giant component does not change by more than o(n). © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 31, 3–122, 2007", "Random graph models with limited choice have been studied extensively with the goal of understanding the mechanism of the emergence of the giant component. One of the standard models are the Achlioptas random graph processes on a fixed set of (n ) vertices. Here at each step, one chooses two edges uniformly at random and then decides which one to add to the existing configuration according to some criterion. An important class of such rules are the bounded-size rules where for a fixed (K 1 ), all components of size greater than (K ) are treated equally. While a great deal of work has gone into analyzing the subcritical and supercritical regimes, the nature of the critical scaling window, the size and complexity (deviation from trees) of the components in the critical regime and nature of the merging dynamics has not been well understood. In this work we study such questions for general bounded-size rules. Our first main contribution is the construction of an extension of Aldous’s standard multiplicative coalescent process which describes the asymptotic evolution of the vector of sizes and surplus of all components. 
We show that this process, referred to as the standard augmented multiplicative coalescent (AMC) is ‘nearly’ Feller with a suitable topology on the state space. Our second main result proves the convergence of suitably scaled component size and surplus vector, for any bounded-size rule, to the standard AMC. This result is new even for the classical Erdős–Renyi setting. The key ingredients here are a precise analysis of the asymptotic behavior of various susceptibility functions near criticality and certain bounds from (The barely subcritical regime. Arxiv preprint, 2012) on the size of the largest component in the barely subcritical regime.", "Conventionally used exponential random graphs cannot directly model weighted networks as the underlying probability space consists of simple graphs only. Since many substantively important networks are weighted, this limitation is especially problematic. We extend the existing exponential framework by proposing a generic common distribution for the edge weights. Minimal assumptions are placed on the distribution, that is, it is non-degenerate and supported on the unit interval. By doing so, we recognize the essential properties associated with near-degeneracy and universality in edge-weighted exponential random graphs.", "We study an evolving spatial network in which sequentially arriving vertices are joined to existing vertices at random according to a rule that combines preference according to degree with preference according to spatial proximity. We investigate phase transitions in graph structure as the relative weighting of these two components of the attachment rule is varied. Previous work of one of the authors showed that when the geometric component is weak, the limiting degree sequence of the resulting graph coincides with that of the standard Barab 'asi--Albert preferential attachment model. We show that at the other extreme, in the case of a sufficiently strong geometric component, the limiting degree sequence coincides with that of a purely geometric model, the on-line nearest-neighbour graph, which is of interest in its own right and for which we prove some extensions of known results. We also show the presence of an intermediate regime, in which the behaviour differs significantly from both the on-line nearest-neighbour graph and the Barab 'asi--Albert model; in this regime, we obtain a stretched exponential upper bound on the degree sequence. Our results lend some mathematical support to simulation studies of Manna and Sen, while proving that the power law to stretched exponential phase transition occurs at a different point from the one conjectured by those authors." ] }
1708.05775
2750321818
FJRW theory is a formulation of physical Landau-Ginzburg models with a rich algebraic structure, rooted in enumerative geometry. As a consequence of a major physical conjecture, called the Landau-Ginzburg/Calabi-Yau correspondence, several birational morphisms of Calabi-Yau orbifolds should correspond to isomorphisms in FJRW theory. In this paper it is shown that not only does this claim prove to be the case, but it is a special case of a wider FJRW isomorphism theorem, which in turn allows for a proof of mirror symmetry for a new class of cases in the Landau-Ginzburg setting. We also obtain several interesting geometric applications regarding the Chen-Ruan cohomology of certain Calabi-Yau orbifolds.
Their result relies on the assumption that @math . In order to understand this restriction better, consider that there are two possible weight systems for @math yielding an elliptic curve and 44 possible weight systems for @math yielding a K3 surface with involution. Recall that in this construction we require our polynomials to be of the form . Only 48 of the 88 possible combinations of weight systems (2 × 44 = 88) satisfy the gcd condition imposed in @cite_41 . In this article we generalize this result in two ways: we remove the restriction on gcd's, and we extend the construction to all dimensions.
{ "cite_N": [ "@cite_41" ], "mid": [ "2952976587", "2016576580", "91979434", "2168002876" ], "abstract": [ "We consider the Exact-Weight-H problem of finding a (not necessarily induced) subgraph H of weight 0 in an edge-weighted graph G. We show that for every H, the complexity of this problem is strongly related to that of the infamous k-Sum problem. In particular, we show that under the k-Sum Conjecture, we can achieve tight upper and lower bounds for the Exact-Weight-H problem for various subgraphs H such as matching, star, path, and cycle. One interesting consequence is that improving on the O(n^3) upper bound for Exact-Weight-4-Path or Exact-Weight-5-Path will imply improved algorithms for 3-Sum, 5-Sum, All-Pairs Shortest Paths and other fundamental problems. This is in sharp contrast to the minimum-weight and (unweighted) detection versions, which can be solved easily in time O(n^2). We also show that a faster algorithm for any of the following three problems would yield faster algorithms for the others: 3-Sum, Exact-Weight-3-Matching, and Exact-Weight-3-Star.", "In their paper on the ''chasm at depth four'', Agrawal and Vinay have shown that polynomials in m variables of degree O(m) which admit arithmetic circuits of size 2^o^(^m^) also admit arithmetic circuits of depth four and size 2^o^(^m^). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth four circuits is in a certain sense the general case. In this paper we show that smaller depth four circuits can be obtained if we start from polynomial size arithmetic circuits. For instance, we show that if the permanent of nxn matrices has circuits of size polynomial in n, then it also has depth 4 circuits of size n^O^(^n^l^o^g^n^). If the original circuit uses only integer constants of polynomial size, then the same is true for the resulting depth four circuit. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also use our techniques to reprove two results on: -the existence of nontrivial boolean circuits of constant depth for languages in LOGCFL; -reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree.", "Recently Bogdanov and Viola (FOCS 2007) and Lovett (ECCC-07) constructed pseudorandom generators that fool degree k polynomials over F2 for an arbitrary constant k. We show that such generators can also be used to fool branching programs of width 2 and polynomial length that read k bits of inputs at a time. This model generalizes polynomials of degree k over F2 and includes some other interesting classes of functions, for instance k-DNF. The constructions of Bogdanov and Viola and Lovett consist of adding a constant number of independent copies of a generator that fools linear functions (an -biased set). It is natural to ask, in light of our first result, whether such generators can fool branching programs of width larger than 2. Our second result is a lower bound showing", "We give the first representation-independent hardness result for agnostically learning halfspaces with respect to the Gaussian distribution. 
We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science and show that any algorithm for agnostically learning halfspaces requires n (log (1 )) time under the assumption that k-sparse LPN requires n ( k) time, ruling out a polynomial time algorithm for the problem. As far as we are aware, this is the first representation-independent hardness result for supervised learning when the underlying distribution is restricted to be a Gaussian. We also show that the problem of agnostically learning sparse polynomials with respect to the Gaussian distribution in polynomial time is as hard as PAC learning DNFs on the uniform distribution in polynomial time. This complements the surprising result of Andoni et. al. [1] who show that sparse polynomials are learnable under random Gaussian noise in polynomial time. Taken together, these results show the inherent diculty of designing supervised learning algorithms in Euclidean space even in the presence of strong distributional assumptions. Our results use a novel embedding of random labeled examples from the uniform distribution on the Boolean hypercube into random labeled examples from the Gaussian distribution that allows us to relate the hardness of learning problems on two dierent domains and distributions. 1998 ACM Subject Classification F.2.0. Analysis of Algorithms and Problem Complexity" ] }
1708.05775
2750321818
FJRW theory is a formulation of physical Landau-Ginzburg models with a rich algebraic structure, rooted in enumerative geometry. As a consequence of a major physical conjecture, called the Landau-Ginzburg/Calabi-Yau correspondence, several birational morphisms of Calabi-Yau orbifolds should correspond to isomorphisms in FJRW theory. In this paper it is shown that not only does this claim prove to be the case, but it is a special case of a wider FJRW isomorphism theorem, which in turn allows for a proof of mirror symmetry for a new class of cases in the Landau-Ginzburg setting. We also obtain several interesting geometric applications regarding the Chen-Ruan cohomology of certain Calabi-Yau orbifolds.
In @cite_0 , the last author considered exactly the form of mirror symmetry we propose here, with the restriction that the defining polynomials must be of Fermat type. In fact, he was able to show that for the mirror pairs we consider here, there is a mirror map relating the FJRW invariants of the A--model to the Picard--Fuchs equations of the B--model. In @cite_33 he also gave an LG/CY correspondence relating the FJRW invariants of the pair @math to the Gromov--Witten invariants of the corresponding Borcea--Voisin orbifold. Although these results are broader in scope, the restriction to Fermat-type polynomials is significant, reducing the number of weight systems from which one can select a K3 surface to 10 (from the 48 mentioned above). Furthermore, no general method of proof for a state space isomorphism is provided there. However, we expect the results regarding the FJRW invariants, Picard--Fuchs equations and GW invariants to hold in general, and the state space isomorphism we establish here is the first step towards such results. This will be the topic of future work.
{ "cite_N": [ "@cite_0", "@cite_33" ], "mid": [ "2265209761", "1990891135", "2605842359", "2170546552" ], "abstract": [ "In the early 1990s, Borcea-Voisin orbifolds were some of the ear- liest examples of Calabi-Yau threefolds shown to exhibit mirror symmetry. However, their quantum theory has been poorly investigated. We study this in the context of the gauged linear sigma model, which in their case encom- passes Gromov-Witten theory and its three companions (FJRW theory and two mixed theories). For certain Borcea-Voisin orbifolds of Fermat type, we calculate all four genus zero theories explicitly. Furthermore, we relate the I-functions of these theories by analytic continuation and symplectic transfor- mation. In particular, the relation between the Gromov-Witten and FJRW theories can be viewed as an example of the Landau-Ginzburg Calabi-Yau correspondence for complete intersections of toric varieties.", "We describe a correspondence between the Donaldson–Thomas invariants enumerating D0–D6 bound states on a Calabi–Yau 3-fold and certain Gromov–Witten invariants counting rational curves in a family of blowups of weighted projective planes. This is a variation on a correspondence found by Gross–Pandharipande, with D0–D6 bound states replacing representations of generalised Kronecker quivers. We build on a small part of the theories developed by Joyce–Song and Kontsevich–Soibelman for wall-crossing formulae and by Gross–Pandharipande–Siebert for factorisations in the tropical vertex group. Along the way we write down an explicit formula for the BPS state counts which arise up to rank 3 and prove their integrality. We also compare with previous “noncommutative DT invariants” computations in the physics literature.", "We prove a homological mirror symmetry equivalence between an @math -brane category for the pair of pants, computed as a wrapped microlocal sheaf category, and a @math -brane category for a mirror LG model, understood as a category of matrix factorizations. The equivalence improves upon prior results in two ways: it intertwines evident affine Weyl group symmetries on both sides, and it exhibits the relation of wrapped microlocal sheaves along different types of Lagrangian skeleta for the same hypersurface. The equivalence proceeds through the construction of a combinatorial realization of the @math -model via arboreal singularities. The constructions here represent the start of a program to generalize to higher dimensions many of the structures which have appeared in topological approaches to Fukaya categories of surfaces.", "Mulmuley [Mul12a] recently gave an explicit version of Noether’s Normalization lemma for ring of invariants of matrices under simultaneous conjugation, under the conjecture that there are deterministic black-box algorithms for polynomial identity testing (PIT). He argued that this gives evidence that constructing such algorithms for PIT is beyond current techniques. In this work, we show this is not the case. That is, we improve Mulmuley’s reduction and correspondingly weaken the conjecture regarding PIT needed to give explicit Noether Normalization. We then observe that the weaker conjecture has recently been nearly settled by the authors ([FS12]), who gave quasipolynomial size hitting sets for the class of read-once oblivious algebraic branching programs (ROABPs). This gives the desired explicit Noether Normalization unconditionally, up to quasipolynomial factors. 
As a consequence of our proof we give a deterministic parallel polynomial-time algorithm for deciding if two matrix tuples have intersecting orbit closures, under simultaneous conjugation. We also study the strength of conjectures that Mulmuley requires to obtain similar results as ours. We prove that his conjectures are stronger, in the sense that the computational model he needs PIT algorithms for is equivalent to the well-known algebraic branching program (ABP) model, which is provably stronger than the ROABP model. Finally, we consider the depth-3 diagonal circuit model as defined by Saxena [Sax08], as PIT algorithms for this model also have implications in Mulmuley’s work. Previous work (such as [ASS12] and [FS12]) have given quasipolynomial size hitting sets for this model. In this work, we give a much simpler construction of such hitting sets, using techniques of Shpilka and Volkovich [SV09]." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
The performance of spectral methods for phase retrieval was first considered in @cite_8 . In the present notation, @cite_8 uses @math and proves that there exists a constant @math such that weak recovery can be achieved for @math . The same paper also gives an iterative procedure to improve over the spectral method, but the bottleneck is the spectral step. The sample complexity of weak recovery using spectral methods was improved to @math in @cite_53 and then to @math in @cite_64 , for some constants @math and @math . Both of these papers also prove guarantees for exact recovery by suitable descent algorithms. The guarantees on the spectral initialization are proved via matrix concentration inequalities, a technique that typically does not return exact threshold values.
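The spectral estimator itself is simple to state. The following minimal numpy sketch computes the leading eigenvector of the weighted empirical covariance matrix, with a generic scalar preprocessing function T applied to the measurements (the identity shown as default is one natural choice; the specific preprocessings used in the cited papers, elided as @math above, are not reproduced here).

```python
import numpy as np

def spectral_estimate(A, y, T=lambda y: y):
    """Spectral initializer for phase retrieval.

    A : (m, n) real sensing matrix with rows a_i
    y : (m,) measurements, y_i ~ <a_i, x>^2 (plus noise)
    T : scalar preprocessing applied entrywise to y

    Returns the leading eigenvector of the weighted empirical
    covariance D = (1/m) * sum_i T(y_i) a_i a_i^T.
    """
    m, n = A.shape
    D = (A * T(y)[:, None]).T @ A / m   # sum_i T(y_i) a_i a_i^T, scaled
    w, V = np.linalg.eigh(D)            # D is symmetric in the real case
    return V[:, -1]                     # eigenvector of largest eigenvalue

# Toy experiment: check the correlation with the true signal.
n, m = 100, 600
x = np.random.randn(n); x /= np.linalg.norm(x)
A = np.random.randn(m, n)
y = (A @ x) ** 2
xhat = spectral_estimate(A, y)
corr = abs(xhat @ x)   # strictly positive correlation => weak recovery
```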
{ "cite_N": [ "@cite_53", "@cite_64", "@cite_8" ], "mid": [ "1512386892", "2963131734", "2539873326", "2083420433" ], "abstract": [ "The paper considers the phase retrieval problem in N-dimensional complex vector spaces. It provides two sets of deterministic measurement vectors which guarantee signal recovery for all signals, excluding only a specific subspace and a union of subspaces, respectively. A stable analytic reconstruction procedure of low complexity is given. Additionally it is proven that signal recovery from these measurements can be solved exactly via a semidefinite program. A practical implementation with 4 deterministic diffraction patterns is provided and some numerical experiments with noisy measurements complement the analytic approach.", "Recovering an unknown complex signal from the magnitude of linear combinations of the signal is referred to as phase retrieval. We present an exact performance analysis of a recently proposed convex-optimization-formulation for this problem, known as PhaseMax. Standard convex-relaxation-based methods in phase retrieval resort to the idea of “lifting” which makes them computationally inefficient, since the number of unknowns is effectively squared. In contrast, PhaseMax is a novel convex relaxation that does not increase the number of unknowns. Instead it relies on an initial estimate of the true signal which must be externally provided. In this paper, we investigate the required number of measurements for exact recovery of the signal in the large system limit and when the linear measurement matrix is random with iid standard normal entries. If @math denotes the dimension of the unknown complex signal and @math the number of phaseless measurements, then in the large system limit, @math measurements is necessary and sufficient to recover the signal with high probability, where @math is the angle between the initial estimate and the true signal. Our result indicates a sharp phase transition in the asymptotic regime which matches the empirical result in numerical simulations.", "We prove that low-rank matrices can be recovered efficiently from a small number of measurements that are sampled from orbits of a certain matrix group. As a special case, our theory makes statements about the phase retrieval problem. Here, the task is to recover a vector given only the amplitudes of its inner product with a small number of vectors from an orbit. Variants of the group in question have appeared under different names in many areas of mathematics. In coding theory and quantum information, it is the complex Clifford group; in time-frequency analysis the oscillator group; and in mathematical physics the metaplectic group. It affords one particularly small and highly structured orbit that includes and generalizes the discrete Fourier basis: While the Fourier vectors have coefficients of constant modulus and phases that depend linearly on their index, the vectors in said orbit have phases with a quadratic dependence. In quantum information, the orbit is used extensively and is known as the set of stabilizer states. We argue that due to their rich geometric structure and their near-optimal recovery properties, stabilizer states form an ideal model for structured measurements for phase retrieval. Our results hold for @math measurements, where the oversampling factor k varies between @math and @math depending on the orbit. The reconstruction is stable towards both additive noise and deviations from the assumption of low rank. 
If the matrices of interest are in addition positive semidefinite, reconstruction may be performed by a simple constrained least squares regression. Our proof methods could be adapted to cover orbits of other groups.", "Reconstruction of signals from measurements of their spectral intensities, also known as the phase retrieval problem, is of fundamental importance in many scientific fields. In this paper we present a novel framework, denoted as vectorial phase retrieval, for reconstruction of pairs of signals from spectral intensity measurements of the two signals and of their interference. We show that this new framework can alleviate some of the theoretical and computational challenges associated with classical phase retrieval from a single signal. First, we prove that for compactly supported signals, in the absence of measurement noise, this new setup admits a unique solution. Next, we present a statistical analysis of vectorial phase retrieval and derive a computationally efficient algorithm to solve it. Finally, we illustrate via simulations, that our algorithm can accurately reconstruct signals even at considerable noise levels." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
In @cite_46 , the authors introduce the PhaseMax relaxation and prove an exact recovery result for phase retrieval that depends on the correlation between the true signal and the initial estimate given to the algorithm. The same idea was independently proposed in @cite_23 . Furthermore, the analysis in @cite_23 allows the same set of measurements to be used for both initialization and convex programming, whereas the analysis in @cite_46 requires fresh measurements for the convex programming step. By using our spectral method to obtain the initial estimate, it should be possible to improve the existing upper bounds on the number of samples needed for exact recovery.
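For concreteness, a minimal sketch of a PhaseMax-style program in the real-valued case, assuming magnitude measurements b_i = |<a_i, x>| and an externally provided anchor x_init (e.g., a spectral estimate); the formulation follows the description in the cited abstracts, but the function and variable names and the solver defaults are our own:

import cvxpy as cp

def phasemax(A, b, x_init):
    # Maximize alignment with the anchor subject to the magnitude
    # constraints |<a_i, x>| <= b_i, which are convex in x.
    n = A.shape[1]
    x = cp.Variable(n)
    problem = cp.Problem(cp.Maximize(x_init @ x), [cp.abs(A @ x) <= b])
    problem.solve()
    return x.value

Under the cited analyses, this program recovers the signal exactly once the angle between x_init and the true signal is small enough relative to the oversampling ratio m/n, which is why a better spectral initializer directly improves the sample-complexity bounds.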
{ "cite_N": [ "@cite_46", "@cite_23" ], "mid": [ "2963131734", "2765477068", "2543875952", "1512386892" ], "abstract": [ "Recovering an unknown complex signal from the magnitude of linear combinations of the signal is referred to as phase retrieval. We present an exact performance analysis of a recently proposed convex-optimization-formulation for this problem, known as PhaseMax. Standard convex-relaxation-based methods in phase retrieval resort to the idea of “lifting” which makes them computationally inefficient, since the number of unknowns is effectively squared. In contrast, PhaseMax is a novel convex relaxation that does not increase the number of unknowns. Instead it relies on an initial estimate of the true signal which must be externally provided. In this paper, we investigate the required number of measurements for exact recovery of the signal in the large system limit and when the linear measurement matrix is random with iid standard normal entries. If @math denotes the dimension of the unknown complex signal and @math the number of phaseless measurements, then in the large system limit, @math measurements is necessary and sufficient to recover the signal with high probability, where @math is the angle between the initial estimate and the true signal. Our result indicates a sharp phase transition in the asymptotic regime which matches the empirical result in numerical simulations.", "A recently proposed convex formulation of the phase retrieval problem estimates the unknown signal by solving a simple linear program. This new scheme, known as PhaseMax, is computationally efficient compared to standard convex relaxation methods based on lifting techniques. In this paper, we present an exact performance analysis of PhaseMax under Gaussian measurements in the large system limit. In contrast to previously known performance bounds in the literature, our results are asymptotically exact and they also reveal a sharp phase transition phenomenon. Furthermore, the geometrical insights gained from our analysis led us to a novel nonconvex formulation of the phase retrieval problem and an accompanying iterative algorithm based on successive linearization and maximization over a polytope. This new algorithm, which we call PhaseLamp, has provably superior recovery performance over the original PhaseMax method.", "We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a \"non-lifting\" relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is Basis Pursuit, which implies that phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.", "The paper considers the phase retrieval problem in N-dimensional complex vector spaces. 
It provides two sets of deterministic measurement vectors which guarantee signal recovery for all signals, excluding only a specific subspace and a union of subspaces, respectively. A stable analytic reconstruction procedure of low complexity is given. Additionally it is proven that signal recovery from these measurements can be solved exactly via a semidefinite program. A practical implementation with 4 deterministic diffraction patterns is provided and some numerical experiments with noisy measurements complement the analytic approach." ] }
1708.05932
2750338112
In phase retrieval we want to recover an unknown signal @math from @math quadratic measurements of the form @math where @math are known sensing vectors and @math is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements @math needed to produce an estimator @math that is positively correlated with the signal @math ? We consider the case of Gaussian vectors @math . We prove that - in the high-dimensional limit - a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For @math no estimator can do significantly better than random and achieve a strictly positive correlation. For @math a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability and generalizing a recent result by Lu and Li. Both the upper and lower bound generalize beyond phase retrieval to measurements @math produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
As previously mentioned, our analysis of spectral methods builds on the recent work of Lu and Li @cite_38 , who compute the exact spectral threshold for a matrix of the weighted-covariance form described above with @math . Here we generalize this result to signed pre-processing functions @math , and construct a function of this type that achieves the information-theoretic threshold for phase retrieval. Indeed, our proof implies that non-negative pre-processing functions lead to an unavoidable gap with respect to the ideal threshold.
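Written out in our own notation (a reconstruction consistent with the abstract above), the spectral estimator is the leading eigenvector of the weighted empirical covariance

\[
D_n \;=\; \frac{1}{m}\sum_{i=1}^{m} \mathcal{T}(y_i)\, a_i a_i^{*},
\]

where \(\mathcal{T}\) is the pre-processing function applied to the measurements. Lu and Li's analysis assumes \(\mathcal{T} \ge 0\); the generalization here admits signed choices such as \(\mathcal{T}(y) = y - 1\) (an illustrative example of a function taking negative values), and it is this extra freedom that allows the spectral threshold to be pushed down to the information-theoretic one.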
{ "cite_N": [ "@cite_38" ], "mid": [ "2593262341", "2145080587", "2170510856", "2208906386" ], "abstract": [ "We study a spectral initialization method that serves a key role in recent work on estimating signals in nonconvex settings. Previous analysis of this method focuses on the phase retrieval problem and provides only performance bounds. In this paper, we consider arbitrary generalized linear sensing models and present a precise asymptotic characterization of the performance of the method in the high-dimensional limit. Our analysis also reveals a phase transition phenomenon that depends on the ratio between the number of samples and the signal dimension. When the ratio is below a minimum threshold, the estimates given by the spectral method are no better than random guesses drawn from a uniform distribution on the hypersphere, thus carrying no information; above a maximum threshold, the estimates become increasingly aligned with the target signal. The computational complexity of the method, as measured by the spectral gap, is also markedly different in the two phases. Worked examples and numerical results are provided to illustrate and verify the analytical predictions. In particular, simulations show that our asymptotic formulas provide accurate predictions for the actual performance of the spectral method even at moderate signal dimensions.", "Phase retrieval seeks to recover a signal @math x ? C p from the amplitude @math | A x | of linear measurements @math A x ? C n . We cast the phase retrieval problem as a non-convex quadratic program over a complex phase vector and formulate a tractable relaxation (called PhaseCut) similar to the classical MaxCut semidefinite program. We solve this problem using a provably convergent block coordinate descent algorithm whose structure is similar to that of the original greedy algorithm in Gerchberg and Saxton (Optik 35:237---246, 1972), where each iteration is a matrix vector product. Numerical results show the performance of this approach over three different phase retrieval problems, in comparison with greedy phase retrieval algorithms and matrix completion formulations.", "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single- user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. 
Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.", "Recently, a number of researchers have proposed spectral algorithms for learning models of dynamical systems—for example, Hidden Markov Models (HMMs), Partially Observable Markov Decision Processes (POMDPs), and Transformed Predictive State Representations (TPSRs). These algorithms are attractive since they are statistically consistent and not subject to local optima. However, they are batch methods: they need to store their entire training data set in memory at once and operate on it as a large matrix, and so they cannot scale to extremely large data sets (either many examples or many features per example). In turn, this restriction limits their ability to learn accurate models of complex systems. To overcome these limitations, we propose a new online spectral algorithm, which uses tricks such as incremental Singular Value Decomposition (SVD) and random projections to scale to much larger data sets and more complex systems than previous methods. We demonstrate the new method on an inertial measurement prediction task and a high-bandwidth video mapping task and we illustrate desirable behaviors such as \"closing the loop,\" where the latent state representation changes suddenly as the learner recognizes that it has returned to a previously known place." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
There has been growing interest in developing computational models of human activities that can extrapolate unseen information and predict future unobserved activities @cite_50 @cite_22 @cite_8 @cite_34 @cite_38 @cite_0 @cite_5 @cite_53 . Some existing approaches @cite_50 @cite_22 @cite_38 @cite_44 @cite_0 @cite_5 generate realistic future frames using generative adversarial networks @cite_20 . Unlike these methods, we emphasize longer-term sequential dynamics in videos using inverse reinforcement learning. Another line of work attempts to infer the actions or human trajectories that will occur in subsequent time-steps based on previous observations @cite_28 @cite_48 @cite_19 @cite_11 @cite_32 . Our model directly imitates natural sequences at the pixel level and assumes no domain knowledge.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_8", "@cite_28", "@cite_48", "@cite_53", "@cite_32", "@cite_0", "@cite_44", "@cite_19", "@cite_50", "@cite_5", "@cite_34", "@cite_20", "@cite_11" ], "mid": [ "2896588340", "2751683986", "2963253230", "2793079679" ], "abstract": [ "We explore an approach to forecasting human motion in a few milliseconds given an input 3D skeleton sequence based on a recurrent encoder-decoder framework. Current approaches suffer from the problem of prediction discontinuities and may fail to predict human-like motion in longer time horizons due to error accumulation. We address these critical issues by incorporating local geometric structure constraints and regularizing predictions with plausible temporal smoothness and continuity from a global perspective. Specifically, rather than using the conventional Euclidean loss, we propose a novel frame-wise geodesic loss as a geometrically meaningful, more precise distance measurement. Moreover, inspired by the adversarial training mechanism, we present a new learning procedure to simultaneously validate the sequence-level plausibility of the prediction and its coherence with the input sequence by introducing two global recurrent discriminators. An unconditional, fidelity discriminator and a conditional, continuity discriminator are jointly trained along with the predictor in an adversarial manner. Our resulting adversarial geometry-aware encoder-decoder (AGED) model significantly outperforms state-of-the-art deep learning based approaches on the heavily benchmarked H3.6M dataset in both short-term and long-term predictions.", "Predicting the future from a sequence of video frames has been recently a sought after yet challenging task in the field of computer vision and machine learning. Although there have been efforts for tracking using motion trajectories and flow features, the complex problem of generating unseen frames has not been studied extensively. In this paper, we deal with this problem using convolutional models within a multi-stage Generative Adversarial Networks (GAN) framework. The proposed method uses two stages of GANs to generate a crisp and clear set of future frames. Although GANs have been used in the past for predicting the future, none of the works consider the relation between subsequent frames in the temporal dimension. Our main contribution lies in formulating two objective functions based on the Normalized Cross Correlation (NCC) and the Pairwise Contrastive Divergence (PCD) for solving this problem. This method, coupled with the traditional L1 loss, has been experimented with three real-world video datasets, viz. Sports-1M, UCF-101 and the KITTI. Performance analysis reveals superior results over the recent state-of-the-art methods.", "We propose a hierarchical approach for making long-term predictions of future frames. To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. 
Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art.", "Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions in videos are never explored. Compared with images, attacking a video needs to consider not only spatial cues but also temporal cues. Moreover, to improve the imperceptibility as well as reduce the computation cost, perturbations should be added on as fewer frames as possible, i.e., adversarial perturbations are temporally sparse. This further motivates the propagation of perturbations, which denotes that perturbations added on the current frame can transfer to the next frames via their temporal interactions. Thus, no (or few) extra perturbations are needed for these frames to misclassify them. To this end, we propose an l2,1-norm based optimization algorithm to compute the sparse adversarial perturbations for videos. We choose the action recognition as the targeted task, and networks with a CNN+RNN architecture as threat models to verify our method. Thanks to the propagation, we can compute perturbations on a shortened version video, and then adapt them to the long version video to fool DNNs. Experimental results on the UCF101 dataset demonstrate that even only one frame in a video is perturbed, the fooling rate can still reach 59.7 ." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
Reinforcement learning (RL) has achieved remarkable success in multiple domains, ranging from robotics @cite_14 and computer vision @cite_45 @cite_13 @cite_3 to natural language processing @cite_23 @cite_41 . In the RL setting, the reward function that the agent aims to maximize is given as a training signal, and the goal is to learn a behavior that maximizes the expected reward. We instead work on the inverse reinforcement learning (IRL) problem, where the reward function must be discovered from demonstrated behavior @cite_51 @cite_27 @cite_2 . This is inspired by recent progress of IRL in computer vision @cite_12 @cite_48 @cite_19 @cite_43 @cite_37 @cite_35 . Nonetheless, these frameworks rely heavily on domain knowledge to construct the handcrafted features that matter for the task. Unlike these approaches, we aim to generalize IRL to natural sequential data without annotations.
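As a point of reference for the distinction drawn above, one standard maximum-entropy formulation of IRL (a textbook form, not necessarily the exact objective of any cited work) fits a reward \(r\) by maximizing the likelihood of the demonstrations \(\mathcal{D}\) under the trajectory distribution the reward induces:

\[
\max_{r}\;\sum_{\tau\in\mathcal{D}}\log p_r(\tau),
\qquad
p_r(\tau)\;\propto\;\exp\Big(\sum_{t} r(s_t,a_t)\Big),
\]

so that, in contrast to RL, the reward is an output of learning rather than a given training signal.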
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_41", "@cite_48", "@cite_3", "@cite_19", "@cite_27", "@cite_45", "@cite_43", "@cite_23", "@cite_2", "@cite_51", "@cite_13", "@cite_12" ], "mid": [ "2287850282", "2059517962", "2344349469", "2891076394" ], "abstract": [ "Inverse reinforcement learning (IRL) allows autonomous agents to learn to solve complex tasks from successful demonstrations. However, in many settings, e.g., when a human learns the task by trial and error, failed demonstrations are also readily available. In addition, in some tasks, purposely generating failed demonstrations may be easier than generating successful ones. Since existing IRL methods cannot make use of failed demonstrations, in this paper we propose inverse reinforcement learning from failure (IRLF) which exploits both successful and failed demonstrations. Starting from the state-of-the-art maximum causal entropy IRL method, we propose a new constrained optimisation formulation that accommodates both types of demonstrations while remaining convex. We then derive update rules for learning reward functions and policies. Experiments on both simulated and real-robot data demonstrate that IRLF converges faster and generalises better than maximum causal entropy IRL, especially when few successful demonstrations are available.", "Inverse Reinforcement Learning (IRL) is an approach for domain-reward discovery from demonstration, where an agent mines the reward function of a Markov decision process by observing an expert acting in the domain. In the standard setting, it is assumed that the expert acts (nearly) optimally, and a large number of trajectories, i.e., training examples are available for reward discovery (and consequently, learning domain behavior). These are not practical assumptions: trajectories are often noisy, and there can be a paucity of examples. Our novel approach incorporates advice-giving into the IRL framework to address these issues. Inspired by preference elicitation, a domain expert provides advice on states and actions (features) by stating preferences over them. We evaluate our approach on several domains and show that with small amounts of targeted preference advice, learning is possible from noisy demonstrations, and requires far fewer trajectories compared to simply learning from trajectories alone.", "Reinforcement Learning (RL) struggles in problems with delayed rewards, and one approach is to segment the task into sub-tasks with incremental rewards. We propose a framework called Hierarchical Inverse Reinforcement Learning (HIRL), which is a model for learning sub-task structure from demonstrations. HIRL decomposes the task into sub-tasks based on transitions that are consistent across demonstrations. These transitions are defined as changes in local linearity w.r.t to a kernel function. Then, HIRL uses the inferred structure to learn reward functions local to the sub-tasks but also handle any global dependencies such as sequentiality. We have evaluated HIRL on several standard RL benchmarks: Parallel Parking with noisy dynamics, Two-Link Pendulum, 2D Noisy Motion Planning, and a Pinball environment. In the parallel parking task, we find that rewards constructed with HIRL converge to a policy with an 80 success rate in 32 fewer time-steps than those constructed with Maximum Entropy Inverse RL (MaxEnt IRL), and with partial state observation, the policies learned with IRL fail to achieve this accuracy while HIRL still converges. 
We further find that that the rewards learned with HIRL are robust to environment noise where they can tolerate 1 stdev. of random perturbation in the poses in the environment obstacles while maintaining roughly the same convergence rate. We find that HIRL rewards can converge up-to 6x faster than rewards constructed with IRL.", "The reinforcement learning (RL) community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequentialdecision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy - with a single set of weights - that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab." ] }
1708.05827
2746391148
We introduce a general framework for visual forecasting, which directly imitates visual sequences without additional supervision. As a result, our model can be applied at several semantic levels and does not require any domain knowledge or handcrafted features. We achieve this by formulating visual forecasting as an inverse reinforcement learning (IRL) problem, and directly imitate the dynamics in natural sequences from their raw pixel values. The key challenge is the high-dimensional and continuous state-action space that prohibits the application of previous IRL algorithms. We address this computational bottleneck by extending recent progress in model-free imitation with trainable deep feature representations, which (1) bypasses the exhaustive state-action pair visits in dynamic programming by using a dual formulation and (2) avoids explicit state sampling at gradient computation using a deep feature reparametrization. This allows us to apply IRL at scale and directly imitate the dynamics in high-dimensional continuous visual sequences from the raw pixel values. We evaluate our approach at three different level-of-abstraction, from low level pixels to higher level semantics: future frame generation, action anticipation, visual story forecasting. At all levels, our approach outperforms existing methods.
Our extension of generative adversarial imitation learning @cite_51 is related to recent progress in generative adversarial networks (GANs) @cite_20 . While there have been multiple works applying GANs to images and videos @cite_10 @cite_31 @cite_38 @cite_16 , we extend the framework to long-term prediction of natural visual sequences and directly imitate the high-dimensional continuous sequence.
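A minimal sketch of the adversarial ingredient this extension builds on: in generative adversarial imitation learning, a discriminator is trained to separate expert transitions from policy transitions, and its output then serves as a reward-like signal for the policy. The PyTorch fragment below (module and batch names are hypothetical) shows only the discriminator update:

import torch
import torch.nn.functional as F

def discriminator_step(D, optimizer, expert_batch, policy_batch):
    # D maps a flattened (state, action) feature vector to a logit.
    expert_logits = D(expert_batch)
    policy_logits = D(policy_batch)
    # Expert samples are labeled 1, policy samples 0 (GAIL-style).
    loss = (F.binary_cross_entropy_with_logits(expert_logits, torch.ones_like(expert_logits))
            + F.binary_cross_entropy_with_logits(policy_logits, torch.zeros_like(policy_logits)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # A policy-gradient step then updates the policy against a reward
    # derived from the discriminator's score, closing the adversarial loop.
    return loss.item()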
{ "cite_N": [ "@cite_38", "@cite_51", "@cite_31", "@cite_16", "@cite_10", "@cite_20" ], "mid": [ "2808763756", "2598991778", "2771088323", "2963966654" ], "abstract": [ "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4 . In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.", "In this work, we present the Text Conditioned Auxiliary Classifier Generative Adversarial Network, (TAC-GAN) a text to image Generative Adversarial Network (GAN) for synthesizing images from their text descriptions. Former approaches have tried to condition the generative process on the textual data; but allying it to the usage of class information, known to diversify the generated samples and improve their structural coherence, has not been explored. We trained the presented TAC-GAN model on the Oxford-102 dataset of flowers, and evaluated the discriminability of the generated images with Inception-Score, as well as their diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our approach outperforms the state-of-the-art models, i.e., its inception score is 3.45, corresponding to a relative increase of 7.8 compared to the recently introduced StackGan. A comparison of the mean MS-SSIM scores of the training and generated samples per class shows that our approach is able to generate highly diverse images with an average MS-SSIM of 0.14 over all generated classes.", "In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.", "In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. 
With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different sub-regions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image." ] }
1906.00588
2947756088
Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough, but also the uncertainty (i.e. risk, or confidence) of that prediction must be estimated. Standard NNs, which are most often used in such tasks, do not provide any such information. Existing approaches try to solve this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not perform as well as standard NNs. In this paper, a new framework called RIO is developed that makes it possible to estimate uncertainty in any pretrained standard NN. RIO models prediction residuals using Gaussian Process with a composite input output kernel. The residual prediction and I O kernel are theoretically motivated and the framework is evaluated in twelve real-world datasets. It is found to provide reliable estimates of the uncertainty, reduce the error of the point predictions, and scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient in building real-world applications of NNs.
There has been significant interest in combining NNs with probabilistic Bayesian models. An early approach was Bayesian Neural Networks, in which a prior distribution is defined on the weights and biases of a NN, and a posterior distribution is then inferred from the training data @cite_28 @cite_9 . Traditional variational inference techniques have been applied to the learning procedure of Bayesian NNs, but with limited success @cite_30 @cite_21 @cite_1 . By using a more advanced variational inference method, new approximations for Bayesian NNs were obtained that perform similarly to dropout NNs @cite_3 . However, the main drawbacks of Bayesian NNs remain: prohibitive computational cost and a considerably more involved implementation procedure compared to standard NNs.
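As an illustration of why the dropout-based approximation mentioned above is attractive in practice, predictive uncertainty can be obtained from an ordinary dropout NN with a few stochastic forward passes (a minimal PyTorch sketch; the model and the number of passes are illustrative):

import torch

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout layers stochastic at prediction time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Predictive mean and a simple spread-based uncertainty estimate.
    return samples.mean(dim=0), samples.std(dim=0)

No change to the architecture or training pipeline is needed, which is exactly the property the full Bayesian treatments above lack.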
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_3" ], "mid": [ "2949496227", "2897001865", "2907176385", "582134693" ], "abstract": [ "Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes equals the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors entailing rich structures, including Gaussian processes and implicit stochastic processes. Empirically, we find fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and scale to large datasets.", "Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We fix VB and turn it into a robust inference tool for Bayesian neural networks. We achieve this with two innovations: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate strong predictive performance over alternative approaches.", "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Using Bernoulli dropout with sampling at prediction time has recently been proposed as an efficient and well performing variational inference method for DNNs. However, sampling from other multiplicative noise based variational distributions has not been investigated in depth. We evaluated Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We tested the calibration of the probabilistic predictions of Bayesian convolutional neural networks (CNNs) on MNIST and CIFAR-10. Sampling at prediction time increased the calibration of the DNNs' probabalistic predictions. Sampling weights, whether Gaussian or Bernoulli, led to more robust representation of uncertainty compared to sampling of units. However, using either Gaussian or Bernoulli dropout led to increased test set classification accuracy. 
Based on these findings we used both Bernoulli dropout and Gaussian dropconnect concurrently, which we show approximates the use of a spike-and-slab variational distribution without increasing the number of learned parameters. We found that spike-and-slab sampling had higher test set performance than Gaussian dropconnect and more robustly represented its uncertainty compared to Bernoulli dropout.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning." ] }
1906.00402
2947069245
In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.
The push and pull search (PPS) framework was introduced by @cite_15 . Unlike other constraint-handling mechanisms, PPS divides the search process into two stages, push search and pull search, following a "push first, pull second" procedure: in the push stage, the working population is pushed toward the unconstrained PF without considering any constraints; in the pull stage, a constraint-handling mechanism is used to pull the working population to the constrained PF (a sketch of such a mechanism is given below).
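A minimal sketch of the kind of comparison operator a pull stage can use (our own illustrative version, not the exact improved-epsilon rule of @cite_15 ): during the push stage solutions are compared by plain Pareto dominance and constraint violations are ignored, while the pull stage switches to an epsilon-constrained comparison such as the one below.

import numpy as np

def pareto_dominates(f1, f2):
    # f1 dominates f2: no worse in every objective, better in at least one.
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def epsilon_compare(f1, cv1, f2, cv2, eps):
    # Epsilon constraint handling: if both overall constraint violations
    # are within the current eps level, compare by objectives; otherwise
    # prefer the solution with the smaller violation.
    if cv1 <= eps and cv2 <= eps:
        return pareto_dominates(f1, f2)
    return cv1 < cv2

Shrinking eps toward zero over the pull stage gradually tightens feasibility and pulls the population onto the constrained PF.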
{ "cite_N": [ "@cite_15" ], "mid": [ "2963115819", "2963649943", "2561500936", "2200550062" ], "abstract": [ "Abstract This paper proposes a push and pull search (PPS) framework for solving constrained multi-objective optimization problems (CMOPs). To be more specific, the proposed PPS divides the search process into two different stages: push and pull search stages. In the push stage, a multi-objective evolutionary algorithm (MOEA) is used to explore the search space without considering any constraints, which can help to get across infeasible regions very quickly and to approach the unconstrained Pareto front. Furthermore, the landscape of CMOPs with constraints can be probed and estimated in the push stage, which can be utilized to conduct the parameter setting for the constraint-handling approaches to be applied in the pull stage. Then, a modified form of a constrained multi-objective evolutionary algorithm (CMOEA), with improved epsilon constraint-handling, is applied to pull the infeasible individuals achieved in the push stage to the feasible and non-dominated regions. To evaluate the performance regarding convergence and diversity, a set of benchmark CMOPs and a real-world optimization problem are used to test the proposed PPS (PPS-MOEA D) and state-of-the-art CMOEAs, including MOEA D-IEpsilon, MOEA D-Epsilon, MOEA D-CDP, MOEA D-SR, C-MOEA D and NSGA-II-CDP. The comprehensive experimental results show that the proposed PPS-MOEA D achieves significantly better performance than the other six CMOEAs on most of the tested problems, which indicates the superiority of the proposed PPS method for solving CMOPs.", "This paper considers the problem of distributed optimization over time-varying graphs. For the case of undirected graphs, we introduce a distributed algorithm, referred to as DIGing, based on a combination of a distributed inexact gradient method and a gradient tracking technique. The DIGing algorithm uses doubly stochastic mixing matrices and employs fixed step-sizes and, yet, drives all the agents' iterates to a global and consensual minimizer. When the graphs are directed, in which case the implementation of doubly stochastic mixing matrices is unrealistic, we construct an algorithm that incorporates the push-sum protocol into the DIGing structure, thus obtaining the Push-DIGing algorithm. Push-DIGing uses column stochastic matrices and fixed step-sizes, but it still converges to a global and consensual minimizer. Under the strong convexity assumption, we prove that the algorithms converge at R-linear (geometric) rates as long as the step-sizes do not exceed some upper bounds. We establish explicit est...", "We describe a simple push-pull optimization (PPO) algorithm for blue-noise sampling by enforcing spatial constraints on given point sets. Constraints can be a minimum distance between samples, a maximum distance between an arbitrary point and the nearest sample, and a maximum deviation of a sample's capacity (area of Voronoi cell) from the mean capacity. All of these constraints are based on the topology emerging from Delaunay triangulation, and they can be combined for improved sampling quality and efficiency. In addition, our algorithm offers flexibility for trading-off between different targets, such as noise and aliasing. We present several applications of the proposed algorithm, including anti-aliasing, stippling, and non-obtuse remeshing. Our experimental results illustrate the efficiency and the robustness of the proposed approach. 
Moreover, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches.", "There has been an increased growth in a number of applications that naturally generate large volumes of uncertain data. By the advent of such applications, the support of advanced analysis queries such as the skyline and its variant operators for big uncertain data has become important. In this paper, we propose the effective parallel algorithms using MapReduce to process the probabilistic skyline queries for uncertain data modeled by both discrete and continuous models. We present three filtering methods to identify probabilistic non-skyline objects in advance. We next develop a single MapReduce phase algorithm PS-QP-MR by utilizing space partitioning based on a variant of quadtrees to distribute the instances of objects effectively and the enhanced algorithm PS-QPF-MR by applying the three filtering methods additionally. We also propose the workload balancing technique to balance the workload of reduce functions based on the number of machines available. Finally, we present the brute-force algorithms PS-BR-MR and PS-BRF-MR with partitioning randomly and applying the filtering methods. In our experiments, we demonstrate the efficiency and scalability of PS-QPF-MR compared to the other algorithms." ] }
1906.00402
2947069245
In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.
Employing a number of sub-populations that solve a problem collaboratively @cite_26 is a widely used approach, which can help an algorithm balance its convergence and diversity. One of the most popular methods is the M2M population decomposition approach @cite_41 , which decomposes a multi-objective optimization problem into a number of simple multi-objective optimization subproblems at initialization and then solves these subproblems simultaneously in a coordinated manner. For this purpose, @math unit vectors @math in @math are chosen in the first octant of the objective space. Then @math is divided into @math subregions @math , where @math is the set of points for which @math , the acute angle between @math and @math , is no larger than the angle to any other direction vector. The population is accordingly decomposed into @math sub-populations, each searching a different multi-objective subproblem; subproblem @math is then defined as the original problem restricted to subregion @math (see the sketch of the subregion assignment below).
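A minimal sketch of the angle-based subregion assignment just described, assuming K unit direction vectors (rows of V) and objective vectors (rows of F); every individual is assigned to the direction with which its objective vector makes the smallest acute angle (all names are illustrative):

import numpy as np

def assign_subregions(F, V):
    # F: (N, m) objective vectors; V: (K, m) unit direction vectors.
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    cosines = Fn @ V.T                  # cosine of the angle to each direction
    return np.argmax(cosines, axis=1)   # smallest angle = largest cosine

# Toy usage: K = 4 evenly spread directions in the 2-D first quadrant.
K = 4
theta = np.linspace(0.0, np.pi / 2, K)
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)
F = np.abs(np.random.default_rng(0).standard_normal((10, 2)))
print(assign_subregions(F, V))

Each index returned here names the sub-population (and hence the subproblem) an individual belongs to.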
{ "cite_N": [ "@cite_41", "@cite_26" ], "mid": [ "2058142975", "2277973662", "2083503884", "2094131428" ], "abstract": [ "This letter suggests an approach for decomposing a multiobjective optimization problem (MOP) into a set of simple multiobjective optimization subproblems. Using this approach, it proposes MOEA D-M2M, a new version of multiobjective optimization evolutionary algorithm-based decomposition. This proposed algorithm solves these subproblems in a collaborative way. Each subproblem has its own population and receives computational effort at each generation. In such a way, population diversity can be maintained, which is critical for solving some MOPs. Experimental studies have been conducted to compare MOEA D-M2M with classic MOEA D and NSGA-II. This letter argues that population diversity is more important than convergence in multiobjective evolutionary algorithms for dealing with some MOPs. It also explains why MOEA D-M2M performs better.", "This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: @math minimizef1(x1)+ź+fN(xN)subject toA1x1+ź+ANxN=c,x1źX1,ź,xNźXN,where @math Nź2, @math fi are convex functions, @math Ai are matrices, and @math Xi are feasible sets for variable @math xi. Our algorithm extends the alternating direction method of multipliers (ADMM) and decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) matrices @math Ai are mutually near-orthogonal and have full column-rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on matrices @math Ai). In the latter case, certain proximal terms can let the subproblem be solved in more flexible and efficient ways. We show that @math źxk+1-xkźM2 converges at a rate of o(1 k) where M is a symmetric positive semi-definte matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for the case ii above) on Amazon EC2 and tested it on basis pursuit problems with >300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.", "Many-objective problems (MAPs) have put forward a number of challenges to classical Pareto-dominance based multi-objective evolutionary algorithms (MOEAs) for the past few years. Recently, researchers have suggested that MOEA D (multi-objective evolutionary algorithm based on decomposition) can work for MAPs. However, there exist two difficulties in applying MOEA D to solve MAPs directly. One is that the number of constructed weight vectors is not arbitrary and the weight vectors are mainly distributed on the boundary of weight space for MAPs. The other is that the relationship between the optimal solution of subproblem and its weight vector is nonlinear for the Tchebycheff decomposition approach used by MOEA D. To deal with these two difficulties, we propose an improved MOEA D with uniform decomposition measurement and the modified Tchebycheff decomposition approach (MOEA D-UDM) in this paper. 
Firstly, a novel weight vectors initialization method based on the uniform decomposition measurement is introduced to obtain uniform weight vectors in any amount, which is one of great merits to use our proposed algorithm. The modified Tchebycheff decomposition approach, instead of the Tchebycheff decomposition approach, is used in MOEA D-UDM to alleviate the inconsistency between the weight vector of subproblem and the direction of its optimal solution in the Tchebycheff decomposition approach. The proposed MOEA D-UDM is compared with two state-of-the-art MOEAs, namely MOEA D and UMOEA D on a number of MAPs. Experimental results suggest that the proposed MOEA D-UDM outperforms or performs similarly to the other compared algorithms in terms of hypervolume and inverted generational distance metrics on different types of problems. The effects of uniform weight vector initializing method and the modified Tchebycheff decomposition are also studied separately.", "Decomposition-based methods are an increasingly popular choice for a posteriori multi-objective optimization. However the ability of such methods to describe a trade-off surface depends on the choice of weighting vectors defining the set of subproblems to be solved. Recent adaptive approaches have sought to progressively modify the weighting vectors to obtain a desirable distribution of solutions. This paper argues that adaptation imposes a non-negligible cost -- in terms of convergence -- on decomposition-based algorithms. To test this hypothesis, the process of adaptation is abstracted and then subjected to experimentation on established problems involving between three and 11 conflicting objectives. The results show that adaptive approaches require longer traversals through objectivespace than fixed-weight approaches. Since fixed weights cannot, in general, be specified in advance, it is concluded that the new wave of decomposition-based methods offer no immediate panacea to the well-known conflict between convergence and distribution afflicting Pareto-based a posteriori methods." ] }
1906.00642
2947251021
As an important semi-supervised learning task, positive-unlabeled (PU) learning aims to learn a binary classifier only from positive and unlabeled data. In this article, we develop a novel PU learning framework, called discriminative adversarial networks, which contains two discriminative models represented by deep neural networks. One model @math predicts the conditional probability of the positive label for a given sample, which defines a Bayes classifier after training, and the other model @math distinguishes labeled positive data from those identified by @math . The two models are simultaneously trained in an adversarial way like generative adversarial networks, and the equilibrium can be achieved when the output of @math is close to the exact posterior probability of the positive class. In contrast with existing deep PU learning approaches, DAN does not require the class prior estimation, and its consistency can be proved under very general conditions. Numerical experiments demonstrate the effectiveness of the proposed framework.
An important idea of DAN is to approximate @math by matching @math and @math , which has in fact been investigated in the literature (see, e.g., @cite_14 @cite_24 @cite_35 @cite_12 @cite_20 ). However, the direct approximation based on this matching condition involves probability density estimation and is difficult in high-dimensional applications. In @cite_12 @cite_20 , by modeling the ratio between @math and @math as a linear combination of basis functions, the problem is transformed into a quadratic programming problem. The resulting approximations, however, are not accurate enough for classification and are only applicable to estimating the class prior of @math . One main contribution of our approach compared to these previous works is a general and effective way to optimize the model of @math by adversarial training (a sketch of the basis-function baseline follows).
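A minimal sketch of the basis-function density-ratio step referred to above — modeling the ratio of two densities as a linear combination of Gaussian kernels and fitting the coefficients by regularized least squares; this is our illustrative reconstruction of that family of methods, not the exact procedure of @cite_12 or @cite_20 :

import numpy as np

def fit_density_ratio(X_num, X_den, centers, sigma=1.0, lam=1e-3):
    # Model r(x) = sum_k alpha_k * exp(-||x - c_k||^2 / (2 sigma^2)), fitted so
    # that r approximates p_num / p_den in a least-squares sense.
    def Phi(X):
        sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))
    H = Phi(X_den).T @ Phi(X_den) / len(X_den)  # second moment under the denominator density
    h = Phi(X_num).mean(axis=0)                 # first moment under the numerator density
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda X: Phi(X) @ alpha

In practice the fitted coefficients are often clipped at zero to keep the ratio non-negative; either way, the estimate is only as good as the chosen basis, which is the accuracy limitation noted above.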
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_24", "@cite_12", "@cite_20" ], "mid": [ "2089135543", "2001663593", "1554201860", "2007104311" ], "abstract": [ "We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. In this paper, we show an almost matching lower bound of @math , even for @math .", "par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. 
The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .", "Given a matrix A e ℝ m ×n of rank r, and an integer k < r, the top k singular vectors provide the best rank-k approximation to A. When the columns of A have specific meaning, it is desirable to find (provably) \"good\" approximations to A k which use only a small number of columns in A. Proposed solutions to this problem have thus far focused on randomized algorithms. Our main result is a simple greedy deterministic algorithm with guarantees on the performance and the number of columns chosen. Specifically, our greedy algorithm chooses c columns from A with @math such that @math where C gr is the matrix composed of the c columns, @math is the pseudo-inverse of C gr ( @math is the best reconstruction of A from C gr), and ¼(A) is a measure of the coherence in the normalized columns of A. The running time of the algorithm is O(SVD(A k) + mnc) where SVD(A k) is the running time complexity of computing the first k singular vectors of A. To the best of our knowledge, this is the first deterministic algorithm with performance guarantees on the number of columns and a (1 + e) approximation ratio in Frobenius norm. The algorithm is quite simple and intuitive and is obtained by combining a generalization of the well known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation. Tightening the analysis along either of these two dimensions would yield improved results.", "In this paper we study the approximation algorithms for a class of discrete quadratic optimization problems in the Hermitian complex form. A special case of the problem that we study corresponds to the max-3-cut model used in a recent paper of Goemans and Williamson J. Comput. System Sci., 68 (2004), pp. 442-470]. We first develop a closed-form formula to compute the probability of a complex-valued normally distributed bivariate random vector to be in a given angular region. This formula allows us to compute the expected value of a randomized (with a specific rounding rule) solution based on the optimal solution of the complex semidefinite programming relaxation problem. In particular, we present an @math -approximation algorithm, and then study the limit of that model, in which the problem remains NP-hard. We show that if the objective is to maximize a positive semidefinite Hermitian form, then the randomization-rounding procedure guarantees a worst-case performance ratio of @math , which is better than the ratio of @math for its counterpart in the real case due to Nesterov. Furthermore, if the objective matrix is real-valued positive semidefinite with nonpositive off-diagonal elements, then the performance ratio improves to 0.9349." ] }
1906.00642
2947251021
As an important semi-supervised learning task, positive-unlabeled (PU) learning aims to learn a binary classifier only from positive and unlabeled data. In this article, we develop a novel PU learning framework, called discriminative adversarial networks (DAN), which contains two discriminative models represented by deep neural networks. One model @math predicts the conditional probability of the positive label for a given sample, which defines a Bayes classifier after training, and the other model @math distinguishes labeled positive data from those identified by @math . The two models are trained simultaneously in an adversarial way, as in generative adversarial networks, and the equilibrium is achieved when the output of @math is close to the exact posterior probability of the positive class. In contrast with existing deep PU learning approaches, DAN does not require class prior estimation, and its consistency can be proved under very general conditions. Numerical experiments demonstrate the effectiveness of the proposed framework.
It is also interesting to compare DAN to GenPU, a GAN-based PU learning method @cite_23 , since they share a similar adversarial training architecture. In DAN, the discriminative model @math plays the role of the generative model in GAN by approximating the positive data distribution in an implicit way, and it can be trained efficiently together with @math . In contrast, GenPU is much more time-consuming and, as stated in @cite_23 , easily suffers from mode collapse because it contains three generators and two discriminators. (Notice that the penalty factor @math cannot be applied to GenPU, since the probability densities of samples produced by its generators are unknown.) Furthermore, the consistency of GenPU requires the assumptions that the class prior is given and that the positive and negative data distributions do not overlap, neither of which is necessary for DAN.
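As an illustration of the alternating training pattern shared by these adversarial PU methods, the following PyTorch sketch trains a posterior model D against a critic phi on toy data. All names, network sizes, and especially the two loss functions are illustrative placeholders, not the exact objectives of DAN or GenPU; in particular, DAN's consistency relies on a carefully designed penalty term that is omitted here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 10
x_p = torch.randn(128, d) + 1.0       # labeled positive samples (toy data)
x_u = torch.randn(512, d)             # unlabeled samples (toy data)

def mlp():
    return nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))

D, phi = mlp(), mlp()                 # posterior model D and critic phi
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_phi = torch.optim.Adam(phi.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # Critic step: phi learns to separate the labeled positives from
    # unlabeled points sampled according to D's current posterior.
    with torch.no_grad():
        w = torch.sigmoid(D(x_u)).squeeze(1)
        idx = torch.multinomial(w / w.sum(), len(x_p), replacement=True)
    x_hat = x_u[idx]                  # unlabeled points D rates as positive
    loss_phi = bce(phi(x_p), torch.ones(len(x_p), 1)) + \
               bce(phi(x_hat), torch.zeros(len(x_hat), 1))
    opt_phi.zero_grad(); loss_phi.backward(); opt_phi.step()

    # Adversarial step: D shifts its posterior mass so that the points it
    # identifies look like labeled positives to phi; the entropy term is a
    # stand-in regularizer that keeps the weighting from collapsing.
    p = torch.sigmoid(D(x_u)).squeeze(1)
    p = p / p.sum()
    scores = phi(x_u).squeeze(1).detach()
    loss_D = -(p * scores).sum() + 0.05 * (p * torch.log(p + 1e-8)).sum()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
```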
{ "cite_N": [ "@cite_23" ], "mid": [ "2949895856", "2787223504", "2964268978", "2523469089" ], "abstract": [ "In this work, we consider the task of classifying binary positive-unlabeled (PU) data. The existing discriminative learning based PU models attempt to seek an optimal reweighting strategy for U data, so that a decent decision boundary can be found. However, given limited P data, the conventional PU models tend to suffer from overfitting when adapted to very flexible deep neural networks. In contrast, we are the first to innovate a totally new paradigm to attack the binary PU task, from perspective of generative learning by leveraging the powerful generative adversarial networks (GAN). Our generative positive-unlabeled (GenPU) framework incorporates an array of discriminators and generators that are endowed with different roles in simultaneously producing positive and negative realistic samples. We provide theoretical analysis to justify that, at equilibrium, GenPU is capable of recovering both positive and negative data distributions. Moreover, we show GenPU is generalizable and closely related to the semi-supervised classification. Given rather limited P data, experiments on both synthetic and real-world dataset demonstrate the effectiveness of our proposed framework. With infinite realistic and diverse sample streams generated from GenPU, a very flexible classifier can then be trained using deep neural networks.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. 
We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines." ] }
1906.00424
2947626630
Unilateral contracts, such as terms of service, play a substantial role in modern digital life. However, few users read these documents before accepting the terms within, as they are too long and the language too complicated. We propose the task of summarizing such legal documents in plain English, which would enable users to have a better understanding of the terms they are accepting. We propose an initial dataset of legal text snippets paired with summaries written in plain English. We verify the quality of these summaries manually and show that they involve heavy abstraction, compression, and simplification. Initial experiments show that unsupervised extractive summarization methods do not perform well on this task due to the level of abstraction and style differences. We conclude with a call for resource and technique development for simplification and style transfer for legal language.
The dataset we present summarizes contracts in plain English . While there is no precise definition of plain English, the general philosophy is to make a text readily accessible to as many English speakers as possible @cite_21 @cite_20 . Guidelines for plain English often suggest a preference for words with Saxon rather than Latin/Romance etymologies, the use of short words, sentences, and paragraphs, etc. (https://plainlanguage.gov/guidelines/) @cite_20 @cite_18 . In this respect, the proposed task involves some level of text simplification, as we discuss below. However, existing resources for text simplification target literacy reading levels @cite_29 or learners of English as a second language @cite_25 . Additionally, these models are trained on Wikipedia or news articles, which are quite different from legal documents. Related style-transfer systems are trained without access to sentence-aligned parallel corpora; they only require semantically similar texts @cite_8 @cite_30 @cite_3 . To the best of our knowledge, however, there is no existing dataset to facilitate the transfer of legal language to plain English.
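Such guidelines are commonly operationalized with readability formulas. As a hedged illustration (the syllable counter is a rough vowel-group heuristic and the two example sentences are our own), the following Python sketch computes the standard Flesch Reading Ease score, where higher scores indicate plainer text:

```python
import re

def syllables(word):
    # Crude heuristic: count vowel groups; every word gets at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syll = sum(syllables(w) for w in words)
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words)

legal = ("The parties hereto irrevocably consent to the exclusive "
         "jurisdiction of the aforementioned tribunals.")
plain = "You agree that only these courts can decide disputes."
print(flesch_reading_ease(legal), flesch_reading_ease(plain))
```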
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_29", "@cite_21", "@cite_3", "@cite_25", "@cite_20" ], "mid": [ "1489181569", "2914120296", "2251796964", "2099031744" ], "abstract": [ "Researchers in both machine translation (e.g., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann 1990) have recently become interested in studying bilingual corpora, bodies of text such as the Canadian Hansards (parliamentary proceedings), which are available in multiple languages (such as French and English). One useful step is to align the sentences, that is, to identify correspondences between sentences in one language and sentences in the other language.This paper will describe a method and a program (align) for aligning sentences based on a simple statistical model of character lengths. The program uses the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each proposed correspondence of sentences, based on the scaled difference of lengths of the two sentences (in characters) and the variance of this difference. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences.It is remarkable that such a simple approach works as well as it does. An evaluation was performed based on a trilingual corpus of economic reports issued by the Union Bank of Switzerland (UBS) in English, French, and German. The method correctly aligned all but 4 of the sentences. Moreover, it is possible to extract a large subcorpus that has a much smaller error rate. By selecting the best-scoring 80 of the alignments, the error rate is reduced from 4 to 0.7 . There were more errors on the English-French subcorpus than on the English-German subcorpus, showing that error rates will depend on the corpus considered; however, both were small enough to hope that the method will be useful for many language pairs.To further research on bilingual corpora, a much larger sample of Canadian Hansards (approximately 90 million words, half in English and and half in French) has been aligned with the align program and will be available through the Data Collection Initiative of the Association for Computational Linguistics (ACL DCI). In addition, in order to facilitate replication of the align program, an appendix is provided with detailed c-code of the more difficult core of the align program.", "Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9 accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. 
Our code and pretrained models will be made publicly available.", "Corpus-based approaches to machine translation (MT) rely on the availability of parallel corpora. To produce user-acceptable translation outputs, such systems need high quality data to be efficiently trained, optimized and evaluated. However, building a high quality dataset is a relatively expensive task. In this paper, we describe the data collection and analysis of a large database of 10,881 SMT translation output hypotheses manually corrected. These post-editions were collected using Amazon’s Mechanical Turk, following some ethical guidelines. A complete analysis of the collected data pointed out a high quality of the corrections with more than 87 of the collected post-editions that improve hypotheses and more than 94 of the crowdsourced post-editions which are at least of professional quality. We also post-edited 1,500 gold-standard reference translations (of bilingual parallel corpora generated by professionals) and noticed that 72 of these translations needed to be corrected during post-edition. We computed a proximity measure between the different kinds of translations and pointed out that reference translations are as far from the hypotheses as from the corrected hypotheses (i.e. the post-editions). In light of these last findings, we discuss the adequation of text-based generated reference translations to train sentence-to-sentence based SMT systems.", "Due to the globalization on the Web, many companies and institutions need to efficiently organize and search repositories containing multilingual documents. The management of these heterogeneous text collections increases the costs significantly because experts of different languages are required to organize these collections. Cross-language text categorization can provide techniques to extend existing automatic classification systems in one language to new languages without requiring additional intervention of human experts. In this paper, we propose a learning algorithm based on the EM scheme which can be used to train text classifiers in a multilingual environment. In particular, in the proposed approach, we assume that a predefined category set and a collection of labeled training data is available for a given language L1. A classifier for a different language L2 is trained by translating the available labeled training set for L1 to L2 and by using an additional set of unlabeled documents from L2. This technique allows us to extract correct statistical properties of the language L2 which are not completely available in automatically translated examples, because of the different characteristics of language L1 and of the approximation of the translation process. Our experimental results show that the performance of the proposed method is very promising when applied on a test document set extracted from newsgroups in English and Italian." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
The classical inertial navigation literature is extensive (see, e.g., the books @cite_3 @cite_27 @cite_18 @cite_15 ) but is mainly focused on the navigation of large vehicles with relatively high-quality inertial sensors. Even though the theory is solid and general, practice has shown that a lot of hand-tailoring of methods is needed to actually get working systems. Since we focus on navigation approaches using consumer-grade sensors in small mobile devices, the literature survey below concentrates on recent work in that area.
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_3", "@cite_15" ], "mid": [ "2963887447", "1493051473", "2918642789", "2140924050" ], "abstract": [ "Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in realtime and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup.", "This book offers a guide for avionics system engineers who want to compare the performance of the various types of inertial navigation systems. The author emphasizes systems used on or near the surface of the planet, but says the principles can be applied to craft in space or underwater with a little tinkering. Part of the material is adapted from the authors doctoral dissertation, but much is from his lecture notes for a one-semester graduate course in inertial navigation systems for students who were already adept in classical mechanics, kinematics, inertial instrument theory, and inertial platform mechanization. This book was first published in 1971 but no revision has been necessary so far because the earth's spin is being so much more stable than its magnetic field.", "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Overall, both approaches are still far from human-level performance.", "Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. 
In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as spin table or 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
Besides SHS and VIO approaches, there are also pure inertial navigation approaches that estimate the full motion trajectory in 3D using foot-mounted consumer-grade inertial sensors @cite_28 @cite_0 . With foot-mounted sensors the inertial navigation problem is considerably easier than in the general case, since the drift can be constrained with zero-velocity updates, detected on each step when the foot touches the ground and the sensor is momentarily stationary. However, automatic zero-velocity updates are not applicable to handheld or flying devices, and the approach is not suitable for large-scale consumer use, since current solutions do not work well when the movement happens without steps (e.g., in a trolley or escalator). In addition, the type of shoe and the sensor placement on the foot may affect the robustness and accuracy of the estimation. A prominent example in this class of approaches is the OpenShoe project @cite_0 @cite_2 , which actually uses several pairs of accelerometers and gyroscopes to estimate the step-by-step PDR.
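To illustrate the zero-velocity mechanism described above, here is a minimal numpy sketch of a stance-phase detector for a foot-mounted IMU. The window length and thresholds are illustrative assumptions that would be tuned per sensor; production systems typically use a likelihood-ratio test rather than this simple thresholding.

```python
import numpy as np

def zupt_mask(acc, gyro, fs, win=0.1, acc_tol=0.3, gyro_tol=0.2):
    """Flag samples where a foot-mounted IMU is (almost) stationary.

    acc: (N, 3) specific force [m/s^2]; gyro: (N, 3) angular rate [rad/s];
    fs: sampling rate [Hz]. A sample is a zero-velocity candidate when the
    accelerometer magnitude stays near gravity and the angular rate stays
    small over the detection window.
    """
    g = 9.81
    n = max(1, int(win * fs))
    acc_dev = np.abs(np.linalg.norm(acc, axis=1) - g)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    # Moving-average smoothing over the detection window.
    kernel = np.ones(n) / n
    acc_dev = np.convolve(acc_dev, kernel, mode="same")
    gyro_mag = np.convolve(gyro_mag, kernel, mode="same")
    return (acc_dev < acc_tol) & (gyro_mag < gyro_tol)
```

The returned mask marks the samples where a pseudo-measurement of zero velocity would be fed to the filter to arrest drift.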
{ "cite_N": [ "@cite_28", "@cite_0", "@cite_2" ], "mid": [ "2538522345", "2800595980", "2738820790", "2056358962" ], "abstract": [ "In recent years there have been excellent results in visual-inertial odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this letter, we present a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system in the 11 sequences of a recent micro-aerial vehicle public dataset achieving a typical scale factor error of 1 and centimeter precision. We compare to the state-of-the-art in visual-inertial odometry in sequences with revisiting, proving the better accuracy of our method due to map reuse and no drift accumulation.", "In this paper, we propose a novel robocentric formulation of the visual-inertial navigation system (VINS) within a sliding-window filtering framework and design an efficient, lightweight, robocentric visual-inertial odometry (R-VIO) algorithm for consistent motion tracking even in challenging environments using only a monocular camera and a 6-axis IMU. The key idea is to deliberately reformulate the VINS with respect to a moving local frame, rather than a fixed global frame of reference as in the standard world-centric VINS, in order to obtain relative motion estimates of higher accuracy for updating global poses. As an immediate advantage of this robocentric formulation, the proposed R-VIO can start from an arbitrary pose, without the need to align the initial orientation with the global gravitational direction. More importantly, we analytically show that the linearized robocentric VINS does not undergo the observability mismatch issue as in the standard world-centric counterpart which was identified in the literature as the main cause of estimation inconsistency. Additionally, we investigate in-depth the special motions that degrade the performance in the world-centric formulation and show that such degenerate cases can be easily compensated in the proposed robocentric formulation, without resorting to additional sensors as in the world-centric formulation, thus leading to better robustness. The proposed R-VIO algorithm has been extensively tested through both Monte Carlo simulations and real-world experiments with different sensor platforms navigating in different environments, and shown to achieve better (or competitive at least) performance than the state-of-the-art VINS, in terms of consistency, accuracy and efficiency.", "We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. 
Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https: daniilidis-group.github.io penncosyvio .", "In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of extended Kalman filter (EKF)-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the multi-state-constraint Kalman filter (MSCKF)) attains better accuracy, consistency, and computational efficiency than the simultaneous localization and mapping (SLAM) formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF's linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte Carlo simulations and real-world testing." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
On the more technical side, we apply iterative filtering methods in this paper. Kalman filters and smoothers (see, e.g., @cite_19 for an excellent overview of non-linear filtering) are recursive estimation schemes and thus iterative by definition. Iterated filtering often refers to local ('inner-loop') iterations over a single sample period. These are used together with extended Kalman filtering as a kind of fixed-point iteration that moves the extended Kalman update towards a better linearization point (see, e.g., @cite_17 ). The resulting iterated extended Kalman filter and iterated linearized filter-smoother can provide better performance if the system non-linearities are suitable. We, however, are interested in iterative re-linearization of the dynamics and in passing information over the state history for extended periods. Thus we focus on so-called global ('outer-loop') schemes, which iteratively re-run the entire forward-backward pass of the filter smoother. These methods relate directly to other iterative global linearization schemes such as the so-called Laplace approximation in statistics and machine learning (see, e.g., @cite_16 ) and Newton-iteration-based methods (see, e.g., @cite_29 and references therein).
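A minimal numpy sketch of the local 'inner-loop' iteration makes the distinction concrete: the iterated EKF update below re-linearizes the measurement function around the latest iterate instead of the prior mean. The interfaces are our own illustrative choices; the global 'outer-loop' schemes used in this paper would instead repeat the whole forward-backward smoothing pass with re-linearized dynamics.

```python
import numpy as np

def iekf_update(x_pred, P_pred, y, h, H_jac, R, iters=5):
    """Iterated EKF measurement update ('inner-loop' re-linearization).

    h(x) returns the predicted measurement and H_jac(x) its Jacobian.
    Each pass re-linearizes around the latest iterate x instead of the
    prior mean x_pred, which is exactly the fixed-point iteration that
    moves the update towards a better linearization point.
    """
    x = x_pred.copy()
    for _ in range(iters):
        H = H_jac(x)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        # The extra H @ (x_pred - x) term appears because we linearize
        # at x, which differs from the prior mean after the first pass.
        x = x_pred + K @ (y - h(x) - H @ (x_pred - x))
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

With iters=1 this reduces to the standard EKF update, which is a useful sanity check when experimenting with the iteration count.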
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_16", "@cite_17" ], "mid": [ "1749494163", "88520345", "2160337655", "2605635402" ], "abstract": [ "This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples.", "Filtering and smoothing methods are used to produce an accurate estimate of the state of a time-varying system based on multiple observational inputs (data). Interest in these methods has exploded in recent years, with numerous applications emerging in fields such as navigation, aerospace engineering, telecommunications and medicine. This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework. Readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages. They also discover how state-of-the-art Bayesian parameter estimation methods can be combined with state-of-the-art filtering and smoothing algorithms. The book's practical and algorithmic approach assumes only modest mathematical prerequisites. Examples include MATLAB computations, and the numerous end-of-chapter exercises include computational assignments. MATLAB GNU Octave source code is available for download at www.cambridge.org sarkka, promoting hands-on work with the methods.", "Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. 
In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or \"particle\") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.", "We establish a full relationship between Kalman filtering and Amari's natural gradient in statistical learning. Namely, using an online natural gradient descent on data log-likelihood to evaluate the parameter of a probabilistic model from a series of observations, is exactly equivalent to using an extended Kalman filter to estimate the parameter (assumed to have constant dynamics). In the i.i.d. case, this relation is a consequence of the \"information filter\" phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant settings for natural gradient hyperparameters such as learning rates or initialization and regularization of the Fisher information matrix." ] }
1906.00360
2947973288
Modern smartphones have all the sensing capabilities required for accurate and robust navigation and tracking. In specific environments some data streams may be absent, less reliable, or flat out wrong. In particular, the GNSS signal can become flawed or silent inside buildings or in streets with tall buildings. In this application paper, we aim to advance the current state-of-the-art in motion estimation using inertial measurements in combination with partial GNSS data on standard smartphones. We show how iterative estimation methods help refine the positioning path estimates in retrospective use cases that can cover both fixed-interval and fixed-lag scenarios. We compare estimation results provided by global iterated Kalman filtering methods to those of a visual-inertial tracking scheme (Apple ARKit). The practical applicability is demonstrated on real-world use cases on empirical data acquired from both smartphones and tablet devices.
In this paper, we take a general INS approach, without assuming legged or otherwise constrained motion, and compensate for the limitations of low-quality IMUs by fusing them with GNSS position fixes, which may be sparse and infrequent, with large gaps in signal reception. As mentioned, there are relatively few general INS approaches for consumer-grade devices. We build upon the recent work @cite_13 , which shows relatively good path estimation results by utilizing online learning of sensor biases together with manually provided loop closures or position fixes. We improve their approach in the following two ways, which greatly increase its practical applicability in certain use cases: we utilize automatic GNSS-based position measurements, which do not require additional manoeuvres or cooperation from the user; and we apply iterative path reconstruction methods, which provide improved accuracy in the presence of long interruptions in GNSS signal reception.
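The fusion pattern with sparse position fixes can be sketched with a toy constant-velocity Kalman filter: the filter propagates at every step and applies a measurement update only when a GNSS fix is available, so uncertainty grows during gaps and contracts at fixes. The state layout and noise parameters below are illustrative assumptions, not the paper's full INS model.

```python
import numpy as np

def fuse_gnss(dt, n_steps, fixes, q=0.5, r=3.0):
    """Toy constant-velocity filter with sparse 2D position updates.

    fixes: dict mapping time-step index -> observed (x, y) position.
    Between fixes the filter only propagates, so the covariance grows
    during GNSS gaps and collapses when a fix arrives.
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt              # state: [x y vx vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0      # observe position only
    Q = q * np.eye(4) * dt                             # process noise
    R = r ** 2 * np.eye(2)                             # GNSS fix noise
    x, P = np.zeros(4), 10.0 * np.eye(4)
    path = []
    for k in range(n_steps):
        x, P = F @ x, F @ P @ F.T + Q                  # predict every step
        if k in fixes:                                 # sparse GNSS update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.asarray(fixes[k]) - H @ x)
            P = (np.eye(4) - K @ H) @ P
        path.append(x[:2].copy())
    return np.array(path)

# Hypothetical usage: one minute at 100 Hz with fixes at the start and midway.
path = fuse_gnss(0.01, 6000, {0: (0.0, 0.0), 3000: (5.0, 2.0)})
```

The retrospective methods studied in the paper go further by also smoothing backwards over the whole trajectory, which this forward-only sketch does not show.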
{ "cite_N": [ "@cite_13" ], "mid": [ "1920612112", "2029841124", "1989137771", "2162136131" ], "abstract": [ "The majority of Intelligent Transportation System (ITS) applications require an estimate of position, often generated through the fusion of satellite based positioning (such as GPS) with on-board inertial systems. To make the position estimates consistent it is necessary to understand the noise distribution of the information used in the estimation algorithm. For GNSS position information the noise distribution is commonly approximated as zero mean with Gaussian distribution, with the standard deviation used as an algorithm tuning parameter. A major issue with satellite based positioning is the well known problem of multipath which can introduce a non-linear and non-Gaussian error distribution for the position estimate. This paper introduces a novel algorithm that compares the noise distribution of the GNSS information with the more consistent noise distribution of the local egocentric sensors to effectively reject GNSS data that is inconsistent. The results presented in this paper show how the gating of the GNSS information in a strong multipath environment can maintain consistency in the position filter and dramatically improve the position estimate. This is particularly important when sharing information from different vehicles as in the case of cooperative perception due to the requirement to align information from various sources.", "Next generation driver assistance systems require precise self localization. Common approaches using global navigation satellite systems (GNSSs) suffer from multipath and shadowing effects often rendering this solution insufficient. In urban environments this problem becomes even more pronounced. Herein we present a system for six degrees of freedom (DOF) ego localization using a mono camera and an inertial measurement unit (IMU). The camera image is processed to yield a rough position estimate using a previously computed landmark map. Thereafter IMU measurements are fused with the position estimate for a refined localization update. Moreover, we present the mapping pipeline required for the creation of landmark maps. Finally, we present experiments on real world data. The accuracy of the system is evaluated by computing two independent ego positions of the same trajectory from two distinct cameras and investigating these estimates for consistency. A mean localization accuracy of 10 cm is achieved on a 10 km sequence in an inner city scenario.", "Global Navigation Satellite Systems (GNSS) can be used for navigation purposes in vehicular environments. However, the limited accuracy of GNSS makes it unsuitable for applications such as vehicle collision avoidance. Improving the positioning accuracy in vehicular networks, Cooperative Positioning (CP) algorithms have emerged. CP algorithms are based on data communication among vehicles and estimation of the distance between the nodes of the network. Among the variety of radio ranging techniques, Received Signal Strength (RSS) is very popular due to its simplicity and lower cost compared to other methods like Time of Arrival (TOA), and Time Difference of Arrival (TDOA). The main drawback of RSS- based ranging is its inaccuracy, which mostly originates from the uncertainty of the path loss exponent. Without knowing the environment path loss exponent, which is a time-varying parameter in the mobile networks, RSS is effectively useless for distance estimation. 
There are many approaches and techniques proposed in the literature for dynamic estimation of the path loss exponent within a certain environment. Most of these methods are not functional for mobile applications or their efficiency decreases dramatically with increasing mobility of the nodes. In this paper, we propose a method for dynamic estimation of the path loss exponent and distance based on the Doppler Effect and RSS. Since this method is fundamentally based on the Doppler Effect, it can be implemented within networks with mobile nodes. The higher the mobility of the nodes, the better performance of the proposed technique. This contribution is important because vehicles will be equipped with Dedicated Short Range Communication (DSRC) in the near future.", "A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies—INS and LiDAR SLAM—into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform—NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
The most fundamental choice in the design of both oversampling and undersampling algorithms for handling data imbalance is the definition of the regions of interest: the areas in which new instances are to be placed, in the case of oversampling, or from which existing instances are to be removed, in the case of undersampling. Besides the random approaches, probably the most prevalent paradigm for oversampling is the family of neighborhood-based methods originating from the Synthetic Minority Over-sampling Technique (SMOTE) @cite_42 . The regions of interest of SMOTE are located between any given minority observation and its closest minority neighbors: SMOTE synthesizes new instances by interpolating between the observation and one of its randomly selected nearest neighbors.
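The interpolation rule is compact enough to state directly in code. Below is a minimal numpy sketch of the core idea (brute-force neighbor search, illustrative parameter names); a real implementation would use a spatial index and per-class sampling ratios.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples in the style of SMOTE.

    Each synthetic point lies on the segment between a randomly chosen
    minority sample and one of its k nearest minority-class neighbors.
    Assumes len(X_min) > k.
    """
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]     # k nearest-neighbor indices
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                # interpolation coefficient in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(out)
```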
{ "cite_N": [ "@cite_42" ], "mid": [ "1991181258", "2087240369", "2132791018", "2756182389" ], "abstract": [ "Classification using class-imbalanced data is biased in favor of the majority class. The bias is even larger for high-dimensional data, where the number of variables greatly exceeds the number of samples. The problem can be attenuated by undersampling or oversampling, which produce class-balanced data. Generally undersampling is helpful, while random oversampling is not. Synthetic Minority Oversampling TEchnique (SMOTE) is a very popular oversampling method that was proposed to improve random oversampling but its behavior on high-dimensional data has not been thoroughly investigated. In this paper we investigate the properties of SMOTE from a theoretical and empirical point of view, using simulated and real high-dimensional data. While in most cases SMOTE seems beneficial with low-dimensional data, it does not attenuate the bias towards the classification in the majority class for most classifiers when data are high-dimensional, and it is less effective than random undersampling. SMOTE is beneficial for k-NN classifiers for high-dimensional data if the number of variables is reduced performing some type of variable selection; we explain why, otherwise, the k-NN classification is biased towards the minority class. Furthermore, we show that on high-dimensional data SMOTE does not change the class-specific mean values while it decreases the data variability and it introduces correlation between samples. We explain how our findings impact the class-prediction for high-dimensional data. In practice, in the high-dimensional setting only k-NN classifiers based on the Euclidean distance seem to benefit substantially from the use of SMOTE, provided that variable selection is performed before using SMOTE; the benefit is larger if more neighbors are used. SMOTE for k-NN without variable selection should not be used, because it strongly biases the classification towards the minority class.", "Imbalanced learning problems contain an unequal distribution of data samples among different classes and pose a challenge to any classifier as it becomes hard to learn the minority class samples. Synthetic oversampling methods address this problem by generating the synthetic minority class samples to balance the distribution between the samples of the majority and minority classes. This paper identifies that most of the existing oversampling methods may generate the wrong synthetic minority samples in some scenarios and make learning tasks harder. To this end, a new method, called Majority Weighted Minority Oversampling TEchnique (MWMOTE), is presented for efficiently handling imbalanced learning problems. MWMOTE first identifies the hard-to-learn informative minority class samples and assigns them weights according to their euclidean distance from the nearest majority class samples. It then generates the synthetic samples from the weighted informative minority class samples using a clustering approach. This is done in such a way that all the generated samples lie inside some minority class cluster. MWMOTE has been evaluated extensively on four artificial and 20 real-world data sets. 
The simulation results show that our method is better than or comparable with some other existing methods in terms of various assessment metrics, such as geometric mean (G-mean) and area under the receiver operating curve (ROC), usually known as area under curve (AUC).", "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority over-sampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.", "Application of conditional Generative Adversarial Networks as oversampling method.Generates minority class samples by recovering the training data distribution.Outperforms various standard oversampling algorithms.Performance advantage of the proposed method remains stable with higher imbalance ratios. Learning from imbalanced datasets is a frequent but challenging task for standard classification algorithms. Although there are different strategies to address this problem, methods that generate artificial data for the minority class constitute a more general approach compared to algorithmic modifications. Standard oversampling methods are variations of the SMOTE algorithm, which generates synthetic samples along the line segment that joins minority class samples. Therefore, these approaches are based on local information, rather on the overall minority class distribution. Contrary to these algorithms, in this paper the conditional version of Generative Adversarial Networks (cGAN) is used to approximate the true data distribution and generate data for the minority class of various imbalanced datasets. The performance of cGAN is compared against multiple standard oversampling algorithms. We present empirical results that show a significant improvement in the quality of the generated data when cGAN is used as an oversampling algorithm." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
Another family of methods that can be distinguished are the cluster-based undersampling algorithms, notably the methods proposed by Yen and Lee @cite_36 , which use clustering to select the most representative subset of the data. Finally, as originally demonstrated by @cite_11 , undersampling algorithms are well suited for forming classifier ensembles, an idea that was further extended in the form of evolutionary undersampling @cite_26 and boosting @cite_0 .
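To illustrate the cluster-based strategy, here is a hedged scikit-learn sketch of one simple variant: cluster the majority class and keep the point closest to each center, so the reduced set still covers the class's regions instead of being a random subset. This illustrates the general idea only, not necessarily the exact algorithm of Yen and Lee.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_maj, n_keep, random_state=0):
    """Keep the majority-class points closest to n_keep cluster centers."""
    km = KMeans(n_clusters=n_keep, n_init=10,
                random_state=random_state).fit(X_maj)
    keep = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        # Retain one representative per cluster: the member nearest its center.
        dist = np.linalg.norm(X_maj[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dist)])
    return X_maj[np.array(keep)]
```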
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_0", "@cite_11" ], "mid": [ "2735835382", "2462401346", "2103346566", "1981081578" ], "abstract": [ "Abstract As one of the most challenging and attractive problems in the pattern recognition and machine intelligence field, imbalanced classification has received a large amount of research attention for many years. In binary classification tasks, one class usually tends to be underrepresented when it consists of far fewer patterns than the other class, which results in undesirable classification results, especially for the minority class. Several techniques, including resampling, boosting and cost-sensitive methods have been proposed to alleviate this problem. Recently, some ensemble methods that focus on combining individual techniques to obtain better performance have been observed to present better classification performance on the minority class. In this paper, we propose a novel ensemble framework called Adaptive Ensemble Undersampling-Boost for imbalanced learning. Our proposal combines the Ensemble of Undersampling (EUS) technique, Real Adaboost, cost-sensitive weight modification, and adaptive boundary decision strategy to build a hybrid algorithm. The superiority of our method over other state-of-the-art ensemble methods is demonstrated by experiments on 18 real world data sets with various data distributions and different imbalance ratios. Given the experimental results and further analysis, our proposal is proven to be a promising alternative that can be applied to various imbalanced classification domains.", "Class imbalance problems, where the number of samples in each class is unequal, is prevalent in numerous real world machine learning applications. Traditional methods which are biased toward the majority class are ineffective due to the relative severity of misclassifying rare events. This paper proposes a novel evolutionary cluster-based oversampling ensemble framework, which combines a novel cluster-based synthetic data generation method with an evolutionary algorithm (EA) to create an ensemble. The proposed synthetic data generation method is based on contemporary ideas of identifying oversampling regions using clusters. The novel use of EA serves a twofold purpose of optimizing the parameters of the data generation method while generating diverse examples leveraging on the characteristics of EAs, reducing overall computational cost. The proposed method is evaluated on a set of 40 imbalance datasets obtained from the University of California, Irvine, database, and outperforms current state-of-the-art ensemble algorithms tackling class imbalance problems.", "Several pruning strategies that can be used to reduce the size and increase the accuracy of bagging ensembles are analyzed. These heuristics select subsets of complementary classifiers that, when combined, can perform better than the whole ensemble. The pruning methods investigated are based on modifying the order of aggregation of classifiers in the ensemble. In the original bagging algorithm, the order of aggregation is left unspecified. When this order is random, the generalization error typically decreases as the number of classifiers in the ensemble increases. If an appropriate ordering for the aggregation process is devised, the generalization error reaches a minimum at intermediate numbers of classifiers. This minimum lies below the asymptotic error of bagging. Pruned ensembles are obtained by retaining a fraction of the classifiers in the ordered ensemble. 
The performance of these pruned ensembles is evaluated in several benchmark classification tasks under different training conditions. The results of this empirical investigation show that ordered aggregation can be used for the efficient generation of pruned ensembles that are competitive, in terms of performance and robustness of classification, with computationally more costly methods that directly select optimal or near-optimal subensembles.", "This paper presents a detailed empirical study of 12 generative approaches to text clustering, obtained by applying four types of document-to-cluster assignment strategies (hard, stochastic, soft and deterministic annealing (DA) based assignments) to each of three base models, namely mixtures of multivariate Bernoulli, multinomial, and von Mises-Fisher (vMF) distributions. A large variety of text collections, both with and without feature selection, are used for the study, which yields several insights, including (a) showing situations wherein the vMF-centric approaches, which are based on directional statistics, fare better than multinomial model-based methods, and (b) quantifying the trade-off between increased performance of the soft and DA assignments and their increased computational demands. We also compare all the model-based algorithms with two state-of-the-art discriminative approaches to document clustering based, respectively, on graph partitioning (CLUTO) and a spectral coclustering method. Overall, DA and CLUTO perform the best but are also the most computationally expensive. The vMF models provide good performance at low cost while the spectral coclustering algorithm fares worse than vMF-based methods for a majority of the datasets." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
Despite the abundance of strategies for dealing with data imbalance, it often remains unclear under what conditions a given method can be expected to deliver satisfactory performance. Furthermore, taking into account the no free lunch theorem @cite_39 , it is unreasonable to expect that any single method will achieve state-of-the-art performance on every dataset. Identifying the areas of applicability, that is, the conditions under which a method is more likely to perform well, is therefore desirable both from the point of view of the practitioner, who can use that information to narrow down the range of methods appropriate for the problem at hand, and of the theoretician, who can use that insight when developing novel methods.
{ "cite_N": [ "@cite_39" ], "mid": [ "1897981530", "2247194987", "2787223504", "2127508398" ], "abstract": [ "The No Free Lunch theorems are often used to argue that domain specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias allowing an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing misclassification rates.", "In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). 
We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty we show that even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are Ω(1)-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up Ω(1)-fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties." ] }
1906.00452
2947132063
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Said difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
In the context of imbalanced data classification, one of the criteria that can influence the applicability of different resampling strategies is the characteristics of the minority class distribution. Napierała and Stefanowski @cite_10 proposed a method of categorizing minority objects that captures these characteristics. Their approach uses a 5-neighborhood to identify the nearest neighbors of a given object, and then assigns it a category based on the proportion of neighbors from the same class: safe in the case of 4 or 5 neighbors from the same class, borderline in the case of 2 to 3 neighbors, rare in the case of 1 neighbor, and outlier when there are no neighbors from the same class. The percentages of minority objects from the different categories can then be used to describe the character of the entire dataset: an example of datasets with a large proportion of different minority object types was presented in Figure . Note that the imbalance ratio of a dataset does not determine the type of minority objects it consists of, as demonstrated in the above example.
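A minimal sketch of this 5-neighborhood categorization follows, as one plausible reading of the procedure rather than the authors' reference implementation; it assumes Euclidean distance, NumPy arrays, and that each point is its own nearest neighbor in the fitted index.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def minority_types(X, y, minority_label):
    """Assign safe/borderline/rare/outlier labels to minority objects
    based on their 5 nearest neighbors (Napierala & Stefanowski style)."""
    nn = NearestNeighbors(n_neighbors=6).fit(X)  # 6 = the point itself + 5 neighbors
    _, idx = nn.kneighbors(X[y == minority_label])
    types = []
    for row in idx:
        # row[0] is assumed to be the query point itself, so skip it.
        same = np.sum(y[row[1:]] == minority_label)
        if same >= 4:
            types.append("safe")
        elif same >= 2:
            types.append("borderline")
        elif same == 1:
            types.append("rare")
        else:
            types.append("outlier")
    return types
```

Counting the fraction of each returned label then gives the dataset-level characterization described above.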
{ "cite_N": [ "@cite_10" ], "mid": [ "1581587400", "752888290", "2087240369", "2148143831" ], "abstract": [ "In classification, when the distribution of the training data among classes is uneven, the learning algorithm is generally dominated by the feature of the majority classes. The features in the minority classes are normally difficult to be fully recognized. In this paper, a method is proposed to enhance the classification accuracy for the minority classes. The proposed method combines Synthetic Minority Over-sampling Technique (SMOTE) and Complementary Neural Network (CMTNN) to handle the problem of classifying imbalanced data. In order to demonstrate that the proposed technique can assist classification of imbalanced data, several classification algorithms have been used. They are Artificial Neural Network (ANN), k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). The benchmark data sets with various ratios between the minority class and the majority class are obtained from the University of California Irvine (UCI) machine learning repository. The results show that the proposed combination techniques can improve the performance for the class imbalance problem.", "Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, the identification of conditions for the efficient use of the particular method is still an open research problem. It is also worth to study the nature of imbalanced data, characteristics of the minority class distribution and their influence on classification performance. However, current studies on imbalanced data difficulty factors have been mainly done with artificial datasets and their conclusions are not easily applicable to the real-world problems, also because the methods for their identification are not sufficiently developed. In our paper, we capture difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for an identification of these types of examples, which is based on analyzing a class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover simulated types of examples. Next contributions of this paper include carrying out a comprehensive experimental study with 26 real world imbalanced datasets, where (1) we identify new data characteristics basing on the analysis of types of minority examples; (2) we demonstrate that considering the results of this analysis allow to differentiate classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. Finally, we highlight directions of exploiting the results of our analysis for developing new algorithms for learning classifiers and pre-processing methods.", "Imbalanced learning problems contain an unequal distribution of data samples among different classes and pose a challenge to any classifier as it becomes hard to learn the minority class samples. 
Synthetic oversampling methods address this problem by generating the synthetic minority class samples to balance the distribution between the samples of the majority and minority classes. This paper identifies that most of the existing oversampling methods may generate the wrong synthetic minority samples in some scenarios and make learning tasks harder. To this end, a new method, called Majority Weighted Minority Oversampling TEchnique (MWMOTE), is presented for efficiently handling imbalanced learning problems. MWMOTE first identifies the hard-to-learn informative minority class samples and assigns them weights according to their euclidean distance from the nearest majority class samples. It then generates the synthetic samples from the weighted informative minority class samples using a clustering approach. This is done in such a way that all the generated samples lie inside some minority class cluster. MWMOTE has been evaluated extensively on four artificial and 20 real-world data sets. The simulation results show that our method is better than or comparable with some other existing methods in terms of various assessment metrics, such as geometric mean (G-mean) and area under the receiver operating curve (ROC), usually known as area under curve (AUC).", "An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of oversampling the minority (abnormal)cla ss and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space)tha n only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space)t han varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC)and the ROC convex hull strategy." ] }
1906.00535
2947458343
There is a high demand for high-quality Non-Player Characters (NPCs) in video games. Hand-crafting their behavior is a labor intensive and error prone engineering process with limited controls exposed to the game designers. We propose to create such NPC behaviors interactively by training an agent in the target environment using imitation learning with a human in the loop. While traditional behavior cloning may fall short of achieving the desired performance, we show that interactivity can substantially improve it with a modest amount of human efforts. The model we train is a multi-resolution ensemble of Markov models, which can be used as is or can be further "compressed" into a more compact model for inference on consumer devices. We illustrate our approach on an example in OpenAI Gym, where a human can help to quickly train an agent with only a handful of interactive demonstrations. We also outline our experiments with NPC training for a first-person shooter game currently in development.
Using human demonstrations helps to train artificial agents in many applications, and in particular in video games @cite_18 , @cite_13 , @cite_19 . Off-policy human demonstrations are easier to use and are abundant in player telemetry data. Supervised behavior cloning, imitation learning (IL), apprenticeship learning (e.g., @cite_7 ) and generative adversarial imitation learning (GAIL) @cite_15 allow the reproduction of a teacher's style and the achievement of a reasonable level of performance in the game environment. Unfortunately, an agent trained using IL is usually unable to generalize effectively to previously underexplored states or to extrapolate stylistic elements of the human player to new states.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_19", "@cite_15", "@cite_13" ], "mid": [ "158183001", "2802726207", "2604382266", "2174786457" ], "abstract": [ "Imitation Learning (IL) is a popular approach for teaching behavior policies to agents by demonstrating the desired target policy. While the approach has lead to many successes, IL often requires a large set of demonstrations to achieve robust learning, which can be expensive for the teacher. In this paper, we consider a novel approach to improve the learning efficiency of IL by providing a shaping reward function in addition to the usual demonstrations. Shaping rewards are numeric functions of states (and possibly actions) that are generally easily specified, and capture general principles of desired behavior, without necessarily completely specifying the behavior. Shaping rewards have been used extensively in reinforcement learning, but have been seldom considered for IL, though they are often easy to specify. Our main contribution is to propose an IL approach that learns from both shaping rewards and demonstrations. We demonstrate the effectiveness of the approach across several IL problems, even when the shaping reward is not fully consistent with the demonstrations.", "Humans often learn how to perform tasks via imitation: they observe others perform a task, and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a well-studied problem in general, there are two particular aspects that have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model which is then utilized to learn a particular task by observing an expert perform that task without the knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art, generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available.", "Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. 
This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this article, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.", "The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed \"Actor-Mimic\", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods." ] }
1906.00535
2947458343
There is a high demand for high-quality Non-Player Characters (NPCs) in video games. Hand-crafting their behavior is a labor intensive and error prone engineering process with limited controls exposed to the game designers. We propose to create such NPC behaviors interactively by training an agent in the target environment using imitation learning with a human in the loop. While traditional behavior cloning may fall short of achieving the desired performance, we show that interactivity can substantially improve it with a modest amount of human efforts. The model we train is a multi-resolution ensemble of Markov models, which can be used as is or can be further "compressed" into a more compact model for inference on consumer devices. We illustrate our approach on an example in OpenAI Gym, where a human can help to quickly train an agent with only a handful of interactive demonstrations. We also outline our experiments with NPC training for a first-person shooter game currently in development.
Directly including a human in the control loop can potentially alleviate the problem of limited generalization. Dataset Aggregation, DAGGER @cite_8 , offers an effective way of doing so when the human provides consistent optimal input, which may not be realistic in many environments. Another way of incorporating online human input is shared autonomy, an active research area with multiple applications, e.g., @cite_21 , @cite_11 . The shared autonomy approach @cite_10 naturally extends to policy blending @cite_16 and makes it possible to effectively train DQN agents that cooperate with a human in complex environments. Applications of human-in-the-loop training in robotics and self-driving cars are too numerous to cover here, but they mostly address the optimality of the target policy, whereas here we also aim to preserve stylistic elements of organic human gameplay.
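As a concrete reference for the DAGGER procedure mentioned above, a schematic training loop follows. This is illustrative Python pseudocode, not any cited implementation: `rollout`, `expert_action`, and `train` are placeholder functions, and the policy-mixing schedule of the original algorithm is omitted for brevity.

```python
def dagger(env, expert_action, train, rollout, n_iters=10):
    """Dataset Aggregation: repeatedly collect expert labels on the
    states actually visited by the current policy, then retrain."""
    dataset = []                 # accumulated (state, expert action) pairs
    policy = train(dataset)      # initial (possibly random) policy
    for _ in range(n_iters):
        states = rollout(env, policy)                       # run current policy
        dataset += [(s, expert_action(s)) for s in states]  # expert/human relabels
        policy = train(dataset)                             # retrain on the union
    return policy
```

The key point the loop illustrates is that labels are requested on the learner's own state distribution, which is what distinguishes DAGGER from plain behavior cloning.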
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2786110872", "2964342357", "1993466996", "2558518165" ], "abstract": [ "In shared autonomy, user input is combined with semi-autonomous control to achieve a common goal. The goal is often unknown ex-ante, so prior work enables agents to infer the goal from user input and assist with the task. Such methods tend to assume some combination of knowledge of the dynamics of the environment, the user's policy given their goal, and the set of possible goals the user might target, which limits their application to real-world scenarios. We propose a deep reinforcement learning framework for model-free shared autonomy that lifts these assumptions. We use human-in-the-loop reinforcement learning with neural network function approximation to learn an end-to-end mapping from environmental observation and user input to agent action, with task reward as the only form of supervision. Controlled studies with users (n = 16) and synthetic pilots playing a video game and flying a real quadrotor demonstrate the ability of our algorithm to assist users with real-time control tasks in which the agent cannot directly access the user's private information through observations, but receives a reward signal and user input that both depend on the user's intent. The agent learns to assist the user without access to this private information, implicitly inferring it from the user's input. This allows the assisted user to complete the task more effectively than the user or an autonomous agent could on their own. This paper is a proof of concept that illustrates the potential for deep reinforcement learning to enable flexible and practical assistive systems.", "For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised \"practice\" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals in a real-world physical system, and substantially outperforms prior techniques.", "I l l u S t r a t I o n b y a l I C I a k u b I S t a a n D r I J b o r y S a S S o C I a t e S in This arTiCle we consider the question: How should autonomous systems be analyzed? in particular, we describe how the confluence of developments in two areas—autonomous systems architectures and formal verification for rational agents—can provide the basis for the formal verification of autonomous systems behaviors. We discuss an approach to this question that involves: 1. Modeling the behavior and describing the interface (input output) to an agent in charge of making decisions within the system; 2. 
Model checking the agent within an unrestricted environment representing the “real world” and those parts of the systems external to the agent, in order to establish some property, j; 3. Utilizing theorems or analysis of the environment, in the form of logical statements (where necessary), to derive properties of the larger system; and 4. if the agent is refined, modify (1), but if environmental properties are clarified, modify (3). Autonomous systems are now being deployed in safety, mission, or business critical scenarios, which means a thorough analysis of the choices the core software might make becomes crucial. But, should the analysis and verification of autonomous software be treated any differently than traditional software used in critical situations? Or is there something new going on here? Autonomous systems are systems that decide for themselves what to do and when to do it. Such systems might seem futuristic, but they are closer than we might think. Modern household, business, and industrial systems increasingly incorporate autonomy. There are many examples, all varying in the degree of autonomy used, from almost pure human control to fully autonomous activities with minimal human interaction. Application areas are broad, ranging from healthcare monitoring to autonomous vehicles. But what are the reasons for this increase in autonomy? Typically, autonomy is used in systems that: 1. must be deployed in remote environments where direct human control is infeasible; 2. must be deployed in hostile environments where it is dangerous for humans to be nearby, and so difficult for humans to assess the possibilities; 3. involve activity that is too lengthy Verifying autonomous systems Doi:10.1145 2494558", "A number of recent approaches to policy learning in 2D game domains have been successful going directly from raw input images to actions. However when employed in complex 3D environments, they typically suffer from challenges related to partial observability, combinatorial exploration spaces, path planning, and a scarcity of rewarding scenarios. Inspired from prior work in human cognition that indicates how humans employ a variety of semantic concepts and abstractions (object categories, localisation, etc.) to reason about the world, we build an agent-model that incorporates such abstractions into its policy-learning framework. We augment the raw image input to a Deep Q-Learning Network (DQN), by adding details of objects and structural elements encountered, along with the agent's localisation. The different components are automatically extracted and composed into a topological representation using on-the-fly object detection and 3D-scene reconstruction.We evaluate the efficacy of our approach in Doom, a 3D first-person combat game that exhibits a number of challenges discussed, and show that our augmented framework consistently learns better, more effective policies." ] }
1906.00580
2947218620
Language style transfer has attracted more and more attention in the past few years. Recent researches focus on improving neural models targeting at transferring from one style to the other with labeled data. However, transferring across multiple styles is often very useful in real-life applications. Previous researches of language style transfer have two main deficiencies: dependency on massive labeled data and neglect of mutual influence among different style transfer tasks. In this paper, we propose a multi-agent style transfer system (MAST) for addressing multiple style transfer tasks with limited labeled data, by leveraging abundant unlabeled data and the mutual benefit among the multiple styles. A style transfer agent in our system not only learns from unlabeled data by using techniques like denoising auto-encoder and back-translation, but also learns to cooperate with other style transfer agents in a self-organization manner. We conduct our experiments by simulating a set of real-world style transfer tasks with multiple versions of the Bible. Our model significantly outperforms the other competitive methods. Extensive results and analysis further verify the efficacy of our proposed system.
The need to leverage unlabeled data has drawn considerable interest from NMT researchers. Works such as @cite_26 @cite_20 , @cite_17 , and @cite_24 propose methods for building semi-supervised or unsupervised models. However, these techniques are designed mainly for NMT tasks and have not been widely used for style transfer. Some unsupervised approaches @cite_4 @cite_22 try to address style transfer problems by using GANs @cite_8 , but their architectures show drawbacks in content preservation @cite_6 . In this paper, we follow the ideas of Sennrich's work and propose a semi-supervised method that leverages unlabeled data on both the source side and the target side.
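The back-translation idea borrowed from Sennrich-style semi-supervised NMT can be sketched as follows. This is illustrative Python pseudocode under our own naming, not the paper's implementation: `src_to_tgt` and `tgt_to_src` stand for the two directional transfer models and `train_step` for one supervised optimizer step.

```python
def back_translation_epoch(unlabeled_tgt, src_to_tgt, tgt_to_src, train_step):
    """Turn monolingual target-side data into synthetic parallel pairs."""
    for tgt_sentence in unlabeled_tgt:
        # Translate target-style text back into (noisy) source style.
        synthetic_src = tgt_to_src(tgt_sentence)
        # Train the forward model on the synthetic pair; the real target
        # sentence serves as the supervision signal.
        train_step(src_to_tgt, source=synthetic_src, target=tgt_sentence)
```

Running the symmetric loop with the roles of the two models swapped exploits unlabeled data on the source side as well, which is the setting described above.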
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_8", "@cite_6", "@cite_24", "@cite_20", "@cite_17" ], "mid": [ "2949257576", "2891844856", "2585635281", "2151575489" ], "abstract": [ "The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL", "Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks - Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks.", "The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. 
We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market- 1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at https: github.com layumi Person-reID_GAN.", "Category models for objects or activities typically rely on supervised learning requiring sufficiently large training sets. Transferring knowledge from known categories to novel classes with no or only a few labels is far less researched even though it is a common scenario. In this work, we extend transfer learning with semi-supervised learning to exploit unlabeled instances of (novel) categories with no or only a few labeled instances. Our proposed approach Propagated Semantic Transfer combines three techniques. First, we transfer information from known to novel categories by incorporating external knowledge, such as linguistic or expert-specified information, e.g., by a mid-level layer of semantic attributes. Second, we exploit the manifold structure of novel classes. More specifically we adapt a graph-based learning algorithm - so far only used for semi-supervised learning -to zero-shot and few-shot learning. Third, we improve the local neighborhood in such graph structures by replacing the raw feature-based representation with a mid-level object- or attribute-based representation. We evaluate our approach on three challenging datasets in two different applications, namely on Animals with Attributes and ImageNet for image classification and on MPII Composites for activity recognition. Our approach consistently outperforms state-of-the-art transfer and semi-supervised approaches on all datasets." ] }
1906.00580
2947218620
Language style transfer has attracted more and more attention in the past few years. Recent researches focus on improving neural models targeting at transferring from one style to the other with labeled data. However, transferring across multiple styles is often very useful in real-life applications. Previous researches of language style transfer have two main deficiencies: dependency on massive labeled data and neglect of mutual influence among different style transfer tasks. In this paper, we propose a multi-agent style transfer system (MAST) for addressing multiple style transfer tasks with limited labeled data, by leveraging abundant unlabeled data and the mutual benefit among the multiple styles. A style transfer agent in our system not only learns from unlabeled data by using techniques like denoising auto-encoder and back-translation, but also learns to cooperate with other style transfer agents in a self-organization manner. We conduct our experiments by simulating a set of real-world style transfer tasks with multiple versions of the Bible. Our model significantly outperforms the other competitive methods. Extensive results and analysis further verify the efficacy of our proposed system.
The core inspiration for our proposed system comes from multi-agent system design. A P2P self-organization system @cite_11 has been successfully applied in practical security systems; its agents follow policies for choosing useful neighbors in order to produce better predictions, which inspired the design of our style transfer system. Research on reinforcement learning for text generation tasks @cite_3 also shows that it is practical to regard text generation models as agents with a large action space.
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2782698114", "2096145798", "103885025", "1978708265" ], "abstract": [ "The ability to learn optimal control policies in systems where action space is defined by sentences in natural language would allow many interesting real-world applications such as automatic optimisation of dialogue systems. Text-based games with multiple endings and rewards are a promising platform for this task, since their feedback allows us to employ reinforcement learning techniques to jointly learn text representations and control policies. We present a general text game playing agent, testing its generalisation and transfer learning performance and showing its ability to play multiple games at once. We also present pyfiction, an open-source library for universal access to different text games that could, together with our agent that implements its interface, serve as a baseline for future research.", "In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predators pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms' strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications.", "Reinforcement Learning was originally developed for Markov Decision Processes (MDPs). It allows a single agent to learn a policy that maximizes a possibly delayed reward signal in a stochastic stationary environment. It guarantees convergence to the optimal policy, provided that the agent can sufficiently experiment and the environment in which it is operating is Markovian. However, when multiple agents apply reinforcement learning in a shared environment, this might be beyond the MDP model. In such systems, the optimal policy of an agent depends not only on the environment, but on the policies of the other agents as well. These situations arise naturally in a variety of domains, such as: robotics, telecommunications, economics, distributed control, auctions, traffic light control, etc. In these domains multi-agent learning is used, either because of the complexity of the domain or because control is inherently decentralized. In such systems it is important that agents are capable of discovering good solutions to the problem at hand either by coordinating with other learners or by competing with them. This chapter focuses on the application reinforcement learning techniques in multi-agent systems. 
We describe a basic learning framework based on the economic research into game theory, and illustrate the additional complexity that arises in such systems. We also described a representative selection of algorithms for the different areas of multi-agent reinforcement learning research.", "In this paper, we study the problem of reaching a consensus among all the agents in the networked control systems (NCS) in the presence of misbehaving agents. A reputation-based resilient distributed control algorithm is first proposed for the leader-follower consensus network. The proposed algorithm embeds a resilience mechanism that includes four phases (detection, mitigation, identification, and update), into the control process in a distributed manner. At each phase, every agent only uses local and one-hop neighbors' information to identify and isolate the misbehaving agents, and even compensate their effect on the system. We then extend the proposed algorithm to the leaderless consensus network by introducing and adding two recovery schemes (rollback and excitation recovery) into the current framework to guarantee the accurate convergence of the well-behaving agents in NCS. The effectiveness of the proposed method is demonstrated through case studies in multirobot formation control and wireless sensor networks." ] }
1906.00628
2946911432
We present an efficient technique, which allows to train classification networks which are verifiably robust against norm-bounded adversarial attacks. This framework is built upon the work of , who applies the interval arithmetic to bound the activations at each layer and keeps the prediction invariant to the input perturbation. While that method is faster than competitive approaches, it requires careful tuning of hyper-parameters and a large number of epochs to converge. To speed up and stabilize training, we supply the cost function with an additional term, which encourages the model to keep the interval bounds at hidden layers small. Experimental results demonstrate that we can achieve comparable (or even better) results using a smaller number of training iterations, in a more stable fashion. Moreover, the proposed model is not so sensitive to the exact specification of the training process, which makes it easier to use by practitioners.
To speed up the training of verifiably robust models, one can bound the set of activations reachable through a norm-bounded perturbation @cite_14 @cite_35 . In @cite_24 , linear programming was used to find a convex outer bound for ReLU networks; this approach was later extended to general non-ReLU neurons @cite_33 . As an alternative, @cite_18 @cite_20 @cite_3 adapted the framework of "abstract transformers" to compute an approximation to the adversarial polytope during SGD training, which made it possible to train networks on entire regions of the input space at once. Interval bound propagation @cite_17 applied interval arithmetic to propagate an axis-aligned bounding box from layer to layer. An analogous idea was used in @cite_26 , in which the predictor and verifier networks were trained simultaneously. While these methods are computationally appealing, they require careful tuning of hyper-parameters to provide tight bounds on the verification network. Finally, there are also hybrid methods, which combine exact and relaxed verifiers @cite_28 @cite_38 .
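For reference, the interval bound propagation step described above amounts to standard interval arithmetic. Below is a minimal NumPy sketch for one affine layer followed by ReLU; it is an illustration of the technique, not the implementation of @cite_17, and the example weights are arbitrary.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate interval bounds through x -> W @ x + b."""
    mid = (upper + lower) / 2.0
    rad = (upper - lower) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad   # the radius grows by |W|
    return mid_out - rad_out, mid_out + rad_out

def ibp_relu(lower, upper):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Usage: bounds for a single layer under an L-infinity perturbation eps.
x, eps = np.array([0.5, -0.2]), 0.1
W, b = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
l, u = ibp_affine(x - eps, x + eps, W, b)
l, u = ibp_relu(l, u)
```

Chaining these two functions across all layers yields bounds on the logits, and keeping the prediction invariant over the resulting box is what the training objective enforces.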
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_26", "@cite_33", "@cite_38", "@cite_28", "@cite_3", "@cite_24", "@cite_20", "@cite_17" ], "mid": [ "2803392236", "2950499086", "2917875722", "2898963688" ], "abstract": [ "This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties. The key idea is to simultaneously train two networks: a predictor network that performs the task at hand,e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the predictor satisfies the properties being verified. Both networks can be trained simultaneously to optimize a weighted combination of the standard data-fitting loss and a term that bounds the maximum violation of the property. Experiments show that not only is the predictor-verifier architecture able to train networks to achieve state of the art verified robustness to adversarial examples with much shorter training times (outperforming previous algorithms on small datasets like MNIST and SVHN), but it can also be scaled to produce the first known (to the best of our knowledge) verifiably robust networks for CIFAR-10.", "Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, networks trained only to optimize for training accuracy can often be fooled by adversarial examples - slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded @math norm @math : for this classifier, we find an adversarial example for 4.38 of samples, and a certificate of robustness (to perturbations with bounded norm) for the remainder. Across all robust training procedures and network architectures considered, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.", "Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification. We further prove strong duality between the primal and dual problems under very mild conditions. 
Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find the exact solution does not significantly improve upon the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it.", "Recent work has shown that it is possible to train deep neural networks that are verifiably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they remain hard to scale to larger networks. Through a comprehensive analysis, we show how a careful implementation of a simple bounding technique, interval bound propagation (IBP), can be exploited to train verifiably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and choice of hyper-parameters allows the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to obtain the first verifiably robust model on a downscaled version of ImageNet." ] }
1906.00423
2946912408
Consider a two-player zero-sum stochastic game where the transition function can be embedded in a given feature space. We propose a two-player Q-learning algorithm for approximating the Nash equilibrium strategy via sampling. The algorithm is shown to find an @math -optimal strategy using a sample size linear in the number of features. To further improve its sample efficiency, we develop an accelerated algorithm by adopting techniques such as variance reduction, monotonicity preservation and two-sided strategy approximation. We prove that the algorithm is guaranteed to find an @math -optimal strategy using no more than @math samples with high probability, where @math is the number of features and @math is a discount factor. The sample, time and space complexities of the algorithm are independent of the original dimensions of the game.
In the special case of MDPs, there exists a large body of work on their sample complexity and sampling-based algorithms. For the tabular setting (finitely many states and actions), the sample complexity of an MDP with a sampling oracle has been studied in @cite_8 @cite_2 @cite_7 @cite_13 @cite_34 @cite_14 @cite_33 . Lower bounds on the sample complexity have been studied in @cite_24 @cite_44 @cite_45 , where the first tight lower bound @math was obtained in @cite_24 . The first sample-optimal algorithm for finding an @math -optimal value was proposed in @cite_24 . @cite_12 gives the first algorithm that finds an @math -optimal policy with the optimal sample complexity @math for all values of @math . For solving an MDP with @math linearly additive features, @cite_41 proved a sample complexity lower bound of @math . It also provided an algorithm that achieves this lower bound up to log factors; however, the analysis of the algorithm relies heavily on an extra "anchor state" assumption. In @cite_0 , a primal-dual method with linear and bilinear representations of value functions and transition models is proposed for the undiscounted MDP. In @cite_18 , the sample complexity of contextual decision processes is studied.
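For orientation, the tight tabular bound that the @math placeholders above refer to is commonly stated in the following form (cf. the generative-model abstract quoted in this record); treat this as an illustrative reconstruction, not a substitute for the elided expressions:

```latex
\tilde{\Theta}\!\left(\frac{|\mathcal{S}|\,|\mathcal{A}|}{(1-\gamma)^{3}\,\varepsilon^{2}}\right)
\quad \text{samples to find an } \varepsilon\text{-optimal policy with a sampling oracle.}
```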
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_7", "@cite_8", "@cite_41", "@cite_24", "@cite_44", "@cite_0", "@cite_45", "@cite_2", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "2911793117", "2890347272", "2765892966", "2170400507" ], "abstract": [ "Consider a Markov decision process (MDP) that admits a set of state-action features, which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximate-optimal policy using a sample size proportional to the feature dimension @math and invariant with respect to the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided the existence of anchor state-actions that imply implicit non-negativity in the feature space. We augment the algorithm using techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy which is @math -optimal from any initial state with high probability using @math sample transitions for arbitrarily large-scale MDP with a discount factor @math . A matching information-theoretical lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors).", "In this paper we consider the problem of computing an ϵ-optimal policy of a discounted Markov Decision Process (DMDP) provided we can only access its transition function through a generative sampling model that given any state-action pair samples from the transition function in O(1) time. Given such a DMDP with states , actions , discount factor γ∈(0,1), and rewards in range [0,1] we provide an algorithm which computes an ϵ-optimal policy with probability 1−δ where both the run time spent and number of sample taken is upper bounded by O[ | || | (1−γ)3ϵ2 log( | || | (1−γ)δϵ )log( 1 (1−γ)ϵ )] . For fixed values of ϵ, this improves upon the previous best known bounds by a factor of (1−γ)−1 and matches the sample complexity lower bounds proved in azar2013minimax up to logarithmic factors. We also extend our method to computing ϵ-optimal policies for finite-horizon MDP with a generative model and provide a nearly matching sample complexity lower bound.", "Consider the problem of approximating the optimal policy of a Markov decision process (MDP) by sampling state transitions. In contrast to existing reinforcement learning methods that are based on successive approximations to the nonlinear Bellman equation, we propose a Primal-Dual @math Learning method in light of the linear duality between the value and policy. The @math learning method is model-free and makes primal-dual updates to the policy and value vectors as new data are revealed. For infinite-horizon undiscounted Markov decision process with finite state space @math and finite action space @math , the @math learning method finds an @math -optimal policy using the following number of sample transitions @math where @math is an upper bound of mixing times across all policies and @math is a parameter characterizing the range of stationary distributions across policies. The @math learning method also applies to the computational problem of MDP where the transition probabilities and rewards are explicitly given as the input. 
In the case where each state transition can be sampled in @math time, the @math learning method gives a sublinear-time algorithm for solving the averaged-reward MDP.", "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 1040 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time." ] }
1906.00423
2946912408
Consider a two-player zero-sum stochastic game where the transition function can be embedded in a given feature space. We propose a two-player Q-learning algorithm for approximating the Nash equilibrium strategy via sampling. The algorithm is shown to find an @math -optimal strategy using a sample size linear in the number of features. To further improve its sample efficiency, we develop an accelerated algorithm by adopting techniques such as variance reduction, monotonicity preservation and two-sided strategy approximation. We prove that the algorithm is guaranteed to find an @math -optimal strategy using no more than @math samples with high probability, where @math is the number of features and @math is a discount factor. The sample, time and space complexities of the algorithm are independent of the original dimensions of the game.
As for general stochastic games, the minimax Q-learning algorithm and the friend-or-foe Q-learning algorithm were introduced in @cite_37 and @cite_15 , respectively. The Nash Q-learning algorithm was proposed for zero-sum games in @cite_4 and for general-sum games in @cite_40 @cite_23 . In @cite_21 , the error of approximate Q-learning is estimated, and in @cite_9 , a finite-sample analysis of multi-agent reinforcement learning is provided. To the best of our knowledge, there is no known algorithm that solves 2-TBSG using features together with a sample complexity analysis.
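As a sketch of the minimax Q-learning idea mentioned above, the classical update for a two-player zero-sum stochastic game replaces the usual max over actions by the value of the stage matrix game; the notation here (maximizer actions a, minimizer actions o, learning rate α) is assumed for illustration:

```latex
Q(s,a,o) \;\leftarrow\; (1-\alpha)\,Q(s,a,o)
  \;+\; \alpha\Big( r(s,a,o)
  \;+\; \gamma \max_{\pi \in \Delta(\mathcal{A})} \min_{o'} \sum_{a'} \pi(a')\, Q(s',a',o') \Big)
```

The inner max-min is the value of a matrix game and can be computed with a small linear program at every update.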
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_9", "@cite_21", "@cite_40", "@cite_23", "@cite_15" ], "mid": [ "1967250398", "1788877992", "2120846115", "2548493877" ], "abstract": [ "The single-agent multi-armed bandit problem can be solved by an agent that learns the values of each action using reinforcement learning. However, the multi-agent version of the problem, the iterated normal form game, presents a more complex challenge, since the rewards available to each agent depend on the strategies of the others. We consider the behavior of value-based learning agents in this situation, and show that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached. We introduce a particular value-based learning algorithm, which we call individual Q-learning, and use stochastic approximation to study the asymptotic behavior, showing that strategies will converge to Nash distribution almost surely in 2-player zero-sum games and 2-player partnership games. Player-dependent learning rates are then considered, and it is shown that this extension converges in some games for which many algorithms, including the basic algorithm initially considered, fail to converge.", "This paper provides an analysis of error propagation in Approximate Dynamic Programming applied to zero-sum two-player Stochastic Games. We provide a novel and unified error propagation analysis in Lp-norm of three well-known algorithms adapted to Stochastic Games (namely Approximate Value Iteration, Approximate Policy Iteration and Approximate Generalized Policy Iteration). We show that we can achieve a stationary policy which is 2γe+e′ (1-γ)2 -optimal, where e is the value function approximation error and e′ is the approximate greedy operator error. In addition, we provide a practical algorithm (AGPI-Q) to solve infinite horizon γ-discounted two-player zero-sum Stochastic Games in a batch setting. It is an extension of the Fitted-Q algorithm (which solves Markov Decisions Processes from data) and can be non-parametric. Finally, we demonstrate experimentally the performance of AGPI-Q on a simultaneous two-player game, namely Alesia.", "We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. 
We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance.", "We consider in this chapter a class of two-player nonzero-sum stochastic games with incomplete information, which is inspired by recent applications of game theory in network security. We develop fully distributed reinforcement learning algorithms, which require for each player a minimal amount of information regarding the other player. At each time, each player can be in an active mode or in a sleep mode. If a player is in an active mode, she updates her strategy and estimates of unknown quantities using a specific pure or hybrid learning pattern. The players’ intelligence and rationality are captured by the weighted linear combination of different learning patterns. We use stochastic approximation techniques to show that, under appropriate conditions, the pure or hybrid learning schemes with random updates can be studied using their deterministic ordinary differential equation (ODE) counterparts. Convergence to state-independent equilibria is analyzed for special classes of games, namely, games with two actions, and potential games. Results are applied to network security games between an intruder and an administrator, where the noncooperative behaviors are well characterized by the features of distributed hybrid learning." ] }
1906.00377
2912317488
High-accuracy video label prediction (classification) models are attributed to large-scale data. These data can be frame feature sequences extracted by a pre-trained convolutional neural network, which improves the efficiency of creating models. Unsupervised solutions such as feature average pooling, a simple label-independent and parameter-free method, have limited ability to represent the video, whereas supervised methods such as RNNs can greatly improve recognition accuracy. However, videos are usually long, and there are hierarchical relationships between frames across events in the video, so the performance of RNN-based models is decreased. In this paper, we propose a novel video classification method based on a deep convolutional graph neural network (DCGN). The proposed method utilizes the hierarchical structure of the video and performs multi-level feature extraction on the video frame sequence through the graph network, obtaining a video representation that reflects the event semantics hierarchically. We test our model on the YouTube-8M Large-Scale Video Understanding dataset, and the result outperforms RNN-based benchmarks.
Video feature sequence classification is essentially the task of aggregating video features, that is, aggregating @math @math -dimensional features into one @math -dimensional feature by mining the statistical relationships among these @math features. The aggregated @math -dimensional feature is a highly concentrated embedding, making it easy for the classifier to map the visual embedding space into the label semantic space. It is common to use recurrent neural networks such as LSTMs (Long Short-Term Memory networks) @cite_0 @cite_6 @cite_1 and GRUs (Gated Recurrent Units) @cite_9 @cite_4 , both of which are state-of-the-art approaches for many sequence modeling tasks. However, the hidden state of an RNN depends on previous steps, which prevents parallel computation. Moreover, LSTMs and GRUs use gates to address the RNN vanishing-gradient problem, but the sigmoids in the gates still cause gradients to decay across layers in depth. It has been shown that LSTMs have difficulty converging as the sequence length increases @cite_7 . There also exist end-to-end trainable order-less aggregation methods, such as DBoF (Deep Bag of Frames pooling) @cite_2 .
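A minimal sketch contrasting the two aggregation styles discussed above, assuming PyTorch and a hypothetical batch of N frame features of dimension D; average pooling is label-independent and parameter-free, while the GRU's final hidden state is a learned, order-aware summary:

```python
import torch
import torch.nn as nn

N, D = 300, 1024                    # e.g., 300 frame features of dimension 1024
frames = torch.randn(1, N, D)       # (batch, time, feature)

# Parameter-free aggregation: average pooling over the temporal axis.
avg_embedding = frames.mean(dim=1)  # shape (1, D)

# Supervised aggregation: the final GRU hidden state summarizes the sequence;
# each step depends on the previous one, so it cannot be parallelized over time.
gru = nn.GRU(input_size=D, hidden_size=D, batch_first=True)
_, h_n = gru(frames)
gru_embedding = h_n[-1]             # shape (1, D)
```

Either embedding can then be fed to a classifier that maps the visual embedding space into the label space.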
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2808203533", "2890018557", "2580899942", "2951971882" ], "abstract": [ "As characterizing videos simultaneously from spatial and temporal cues has been shown crucial for the video analysis, the combination of convolutional neural networks and recurrent neural networks, i.e., recurrent convolution networks (RCNs), should be a native framework for learning the spatio-temporal video features. In this paper, we develop a novel sequential vector of locally aggregated descriptor (VLAD) layer, named SeqVLAD, to combine a trainable VLAD encoding process and the RCNs architecture into a whole framework. In particular, sequential convolutional feature maps extracted from successive video frames are fed into the RCNs to learn soft spatio-temporal assignment parameters, so as to aggregate not only detailed spatial information in separate video frames but also fine motion information in successive video frames. Moreover, we improve the gated recurrent unit (GRU) of RCNs by sharing the input-to-hidden parameters and propose an improved GRU-RCN architecture named shared GRU-RCN (SGRU-RCN). Thus, our SGRU-RCN has a fewer parameters and a less possibility of overfitting. In experiments, we evaluate SeqVLAD with the tasks of video captioning and video action recognition. Experimental results on Microsoft Research Video Description Corpus, Montreal Video Annotation Dataset, UCF101, and HMDB51 demonstrate the effectiveness and good performance of our method.", "Learning 3D global features by aggregating multiple views has been introduced as a successful strategy for 3D shape analysis. In recent deep learning models with end-to-end training, pooling is a widely adopted procedure for view aggregation. However, pooling merely retains the max or mean value over all views, which disregards the content information of almost all views and also the spatial information among the views. To resolve these issues, we propose Sequential Views To Sequential Labels (SeqViews2SeqLabels) as a novel deep learning model with an encoder–decoder structure based on recurrent neural networks (RNNs) with attention. SeqViews2SeqLabels consists of two connected parts, an encoder-RNN followed by a decoder-RNN, that aim to learn the global features by aggregating sequential views and then performing shape classification from the learned global features, respectively. Specifically, the encoder-RNN learns the global features by simultaneously encoding the spatial and content information of sequential views, which captures the semantics of the view sequence. With the proposed prediction of sequential labels, the decoder-RNN performs more accurate classification using the learned global features by predicting sequential labels step by step. Learning to predict sequential labels provides more and finer discriminative information among shape classes to learn, which alleviates the overfitting problem inherent in training using a limited number of 3D shapes. Moreover, we introduce an attention mechanism to further improve the discriminative ability of SeqViews2SeqLabels. This mechanism increases the weight of views that are distinctive to each shape class, and it dramatically reduces the effect of selecting the first view position. 
Shape classification and retrieval results under three large-scale benchmarks verify that SeqViews2SeqLabels learns more discriminative global features by more effectively aggregating sequential views than state-of-the-art methods.", "We investigate the problem of representing an entire video using CNN features for human action recognition. End-to-end learning of CNN RNNs is currently not possible for whole videos due to GPU memory limitations and so a common practice is to use sampled frames as inputs along with the video labels as supervision. However, the global video labels might not be suitable for all of the temporally local samples as the videos often contain content besides the action of interest. We therefore propose to instead treat the deep networks trained on local inputs as local feature extractors. The local features are then aggregated to form global features which are used to assign video-level labels through a second classification stage. We investigate a number of design choices for this local feature approach. Experimental results on the HMDB51 and UCF101 datasets show that a simple maximum pooling on the sparsely sampled local features leads to significant performance improvement.", "Video classification methods often divide the video into short clips, do inference on these clips independently, and then aggregate these predictions to generate the final classification result. Treating these highly-correlated clips as independent both ignores the temporal structure of the signal and carries a large computational cost: the model must process each clip from scratch. To reduce this cost, recent efforts have focused on designing more efficient clip-level network architectures. Less attention, however, has been paid to the overall framework, including how to benefit from correlations between neighboring clips and improving the aggregation strategy itself. In this paper we leverage the correlation between adjacent video clips to address the problem of computational cost efficiency in video classification at the aggregation stage. More specifically, given a clip feature representation, the problem of computing next clip's representation becomes much easier. We propose a novel recurrent architecture called FASTER for video-level classification, that combines high quality, expensive representations of clips, that capture the action in detail, and lightweight representations, which capture scene changes in the video and avoid redundant computation. We also propose a novel processing unit to learn integration of clip-level representations, as well as their temporal structure. We call this unit FAST-GRU, as it is based on the Gated Recurrent Unit (GRU). The proposed framework achieves significantly better FLOPs vs. accuracy trade-off at inference time. Compared to existing approaches, our proposed framework reduces the FLOPs by more than 10x while maintaining similar accuracy across popular datasets, such as Kinetics, UCF101 and HMDB51." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
Identifying emotion categories in text is one of the key tasks in NLP @cite_29 . Going one step further, emotion cause extraction can reveal important information about what causes a certain emotion and why there is an emotion change. In this section, we introduce related work on emotion analysis, including emotion cause extraction.
{ "cite_N": [ "@cite_29" ], "mid": [ "1992605069", "1604245705", "2766095568", "2162555959" ], "abstract": [ "We develop a rule-based system that trigger emotions based on the emotional model.We extract the corresponding cause events in fine-grained emotions.We get the proportions of different cause components under different emotions.The language features and Bayesian probability are used in this paper. Emotion analysis and emotion cause extraction are key research tasks in natural language processing and public opinion mining. This paper presents a rule-based approach to emotion cause component detection for Chinese micro-blogs. Our research has important scientific values on social network knowledge discovery and data mining. It also has a great potential in analyzing the psychological processes of consumers. Firstly, this paper proposes a rule-based system underlying the conditions that trigger emotions based on an emotional model. Secondly, this paper extracts the corresponding cause events in fine-grained emotions from the results of events, actions of agents and aspects of objects. Meanwhile, it is reasonable to get the proportions of different cause components under different emotions by constructing the emotional lexicon and identifying different linguistic features, and the proposed approach is based on Bayesian probability. Finally, this paper presents the experiments on an emotion corpus of Chinese micro-blogs. The experimental results validate the feasibility of the approach. The existing problems and the further works are also present at the end.", "Emotion processing has always been a great challenge. Given the fact that an emotion is triggered by cause events and that cause events are an integral part of emotion, this paper constructs a Chinese emotion cause corpus as a first step towards automatic inference of cause-emotion correlation. The corpus focuses on five primary emotions, namely happiness, sadness, fear, anger, and surprise. It is annotated with emotion cause events based on our proposed annotation scheme. Corpus data shows that most emotions are expressed with causes, and that causes mostly occur before the corresponding emotion verbs. We also examine the correlations between emotions and cause events in terms of linguistic cues: causative verbs, perception verbs, epistemic markers, conjunctions, prepositions, and others. Results show that each group of linguistic cues serves as an indicator marking the cause events in different structures of emotional constructions. We believe that the emotion cause corpus will be the useful resource for automatic emotion cause detection as well as emotion detection and classification.", "A notably challenging problem in emotion analysis is recognizing the cause of an emotion. Although there have been a few studies on emotion cause detection, most of them work on news reports or a few of them focus on microblogs using a single-user structure (i.e., all texts in a microblog are written by the same user). In this article, we focus on emotion cause detection for Chinese microblogs using a multiple-user structure (i.e., texts in a microblog are successively written by several users). First, based on the fact that the causes of an emotion of a focused user may be provided by other users in a microblog with the multiple-user structure, we design an emotion cause annotation scheme which can deal with such a complicated case, and then provide an emotion cause corpus using the annotation scheme. 
Second, based on the analysis of the emotion cause corpus, we formalize two emotion cause detection tasks for microblogs (current-subtweet-based emotion cause detection and original-subtweet-based emotion cause detection). Furthermore, in order to examine the difficulty of the two emotion cause detection tasks and the contributions of texts written by different users in a microblog with the multiple-user structure, we choose two popular classification methods (SVM and LSTM) to do emotion cause detection. Our experiments show that the current-subtweet-based emotion cause detection is much more difficult than the original-subtweet-based emotion cause detection, and texts written by different users are very helpful for both emotion cause detection tasks. This study presents a pilot study of emotion cause detection which deals with Chinese microblogs using a complicated structure.", "Though data-driven in nature, emotion analysis based on latent semantic analysis still relies on some measure of expert knowledge in order to isolate the emotional keywords or keysets necessary to the construction of affective categories. This makes it vulnerable to any discrepancy between the ensuing taxonomy of affective states and the underlying domain of discourse. This paper proposes a more general strategy which leverages two distincts semantic levels, one that encapsulates the foundations of the domain considered, and one that specifically accounts for the overall affective fabric of the language. Exposing the emergent relationship between these two levels advantageously informs the emotion classification process. Empirical evidence suggests that this is a promising solution for automatic emotion detection in text." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
Existing work in emotion analysis mostly focuses on emotion classification @cite_6 @cite_23 and emotion information extraction @cite_1 . Prior studies have used a coarse-to-fine method to classify emotions in Chinese blogs, proposed a joint model to co-train a polarity classifier and an emotion classifier, proposed a multi-task Gaussian-process-based method for emotion classification, used linguistic templates to predict readers' emotions, and used an unsupervised method to extract emotion feelers from Bengali blogs. There are other studies focusing on the joint learning of sentiments @cite_14 @cite_18 or emotions in tweets or blogs @cite_13 @cite_28 @cite_2 @cite_17 @cite_36 , and on emotion lexicon construction @cite_31 @cite_10 @cite_30 . However, the aforementioned work all focused on the analysis of emotion expressions rather than emotion causes.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_28", "@cite_36", "@cite_10", "@cite_1", "@cite_6", "@cite_23", "@cite_2", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2090987251", "1992605069", "2226884328", "2161624371" ], "abstract": [ "In this paper, we propose a data-oriented method for inferring the emotion of a speaker conversing with a dialog system from the semantic content of an utterance. We first fully automatically obtain a huge collection of emotion-provoking event instances from the Web. With Japanese chosen as a target language, about 1.3 million emotion provoking event instances are extracted using an emotion lexicon and lexical patterns. We then decompose the emotion classification task into two sub-steps: sentiment polarity classification (coarsegrained emotion classification), and emotion classification (fine-grained emotion classification). For each subtask, the collection of emotion-proviking event instances is used as labelled examples to train a classifier. The results of our experiments indicate that our method significantly outperforms the baseline method. We also find that compared with the single-step model, which applies the emotion classifier directly to inputs, our two-step model significantly reduces sentiment polarity errors, which are considered fatal errors in real dialog applications.", "We develop a rule-based system that trigger emotions based on the emotional model.We extract the corresponding cause events in fine-grained emotions.We get the proportions of different cause components under different emotions.The language features and Bayesian probability are used in this paper. Emotion analysis and emotion cause extraction are key research tasks in natural language processing and public opinion mining. This paper presents a rule-based approach to emotion cause component detection for Chinese micro-blogs. Our research has important scientific values on social network knowledge discovery and data mining. It also has a great potential in analyzing the psychological processes of consumers. Firstly, this paper proposes a rule-based system underlying the conditions that trigger emotions based on an emotional model. Secondly, this paper extracts the corresponding cause events in fine-grained emotions from the results of events, actions of agents and aspects of objects. Meanwhile, it is reasonable to get the proportions of different cause components under different emotions by constructing the emotional lexicon and identifying different linguistic features, and the proposed approach is based on Bayesian probability. Finally, this paper presents the experiments on an emotion corpus of Chinese micro-blogs. The experimental results validate the feasibility of the approach. The existing problems and the further works are also present at the end.", "Department of Computer Science and Engineering, Anna University Regional Centre, Coimbatore, India [email protected] J. Preethi Department of Computer Science and Engineering Anna University Regional Centre, Coimbatore, India [email protected] Emotions are very important in human decision handling, interaction and cognitive process. In this paper describes that recognize the human emotions from DEAP EEG dataset with different kind of methods. Audio – video based stimuli is used to extract the emotions. EEG signal is divided into different bands using discrete wavelet transformation with db8 wavelet function for further process. 
Statistical and energy based features are extracted from the bands, based on the features emotions are classified with feed forward neural network with weight optimized algorithm like PSO. Before that the particular band has to be selected based on the training performance of neural networks and then the emotions are classified. In this experimental result describes that the gamma and alpha bands are provides the accurate classification result with average classification rate of 90.3 of using NNRBF, 90.325 of using PNN, 96.3 of using PSO trained NN, 98.1 of using Cuckoo trained NN. At last the emotions are classified into two different groups like valence and arousal. Based on that identifies the person normal and abnormal behavioral using classified emotion.", "To identify the cause of emotion is a new challenge for researchers in nature language processing. Currently, there is no existing works on emotion cause detection from Chinese micro-blogging (Weibo) text. In this study, an emotion cause annotated corpus is firstly designed and developed through anno- tating the emotion cause expressions in Chinese Weibo Text. Up to now, an emotion cause annotated corpus which consists of the annotations for 1,333 Chinese Weibo is constructed. Based on the observations on this corpus, the characteristics of emotion cause expression are identified. Accordingly, a rule- based emotion cause detection method is developed which uses 25 manually complied rules. Furthermore, two machine learning based cause detection me- thods are developed including a classification-based method using support vec- tor machines and a sequence labeling based method using conditional random fields model. It is the largest available resources in this research area. The expe- rimental results show that the rule-based method achieves 68.30 accuracy rate. Furthermore, the method based on conditional random fields model achieved 77.57 accuracy which is 37.45 higher than the reference baseline method. These results show the effectiveness of our proposed emotion cause detection method." ] }
1708.05482
2748618075
Emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text. It is a much more difficult task compared to emotion classification. Inspired by recent advances in using deep memory networks for question answering (QA), we propose a new approach which considers emotion cause identification as a reading comprehension task in QA. Inspired by convolutional neural networks, we propose a new mechanism to store relevant context in different memory slots to model context information. Our proposed approach can extract both word level sequence features and lexical features. Performance evaluation shows that our method achieves the state-of-the-art performance on a recently released emotion cause dataset, outperforming a number of competitive baselines by at least 3.01 in F-measure.
The task of emotion cause extraction was first proposed together with a manually constructed corpus drawn from the Academia Sinica Balanced Chinese Corpus. Based on this corpus, a rule-based method was proposed to detect emotion causes using manually defined linguistic rules. Some studies @cite_3 @cite_15 @cite_0 extended the rule-based approach to informal Weibo text (Chinese tweets).
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_3" ], "mid": [ "2161624371", "1992605069", "2766095568", "1604245705" ], "abstract": [ "To identify the cause of emotion is a new challenge for researchers in nature language processing. Currently, there is no existing works on emotion cause detection from Chinese micro-blogging (Weibo) text. In this study, an emotion cause annotated corpus is firstly designed and developed through anno- tating the emotion cause expressions in Chinese Weibo Text. Up to now, an emotion cause annotated corpus which consists of the annotations for 1,333 Chinese Weibo is constructed. Based on the observations on this corpus, the characteristics of emotion cause expression are identified. Accordingly, a rule- based emotion cause detection method is developed which uses 25 manually complied rules. Furthermore, two machine learning based cause detection me- thods are developed including a classification-based method using support vec- tor machines and a sequence labeling based method using conditional random fields model. It is the largest available resources in this research area. The expe- rimental results show that the rule-based method achieves 68.30 accuracy rate. Furthermore, the method based on conditional random fields model achieved 77.57 accuracy which is 37.45 higher than the reference baseline method. These results show the effectiveness of our proposed emotion cause detection method.", "We develop a rule-based system that trigger emotions based on the emotional model.We extract the corresponding cause events in fine-grained emotions.We get the proportions of different cause components under different emotions.The language features and Bayesian probability are used in this paper. Emotion analysis and emotion cause extraction are key research tasks in natural language processing and public opinion mining. This paper presents a rule-based approach to emotion cause component detection for Chinese micro-blogs. Our research has important scientific values on social network knowledge discovery and data mining. It also has a great potential in analyzing the psychological processes of consumers. Firstly, this paper proposes a rule-based system underlying the conditions that trigger emotions based on an emotional model. Secondly, this paper extracts the corresponding cause events in fine-grained emotions from the results of events, actions of agents and aspects of objects. Meanwhile, it is reasonable to get the proportions of different cause components under different emotions by constructing the emotional lexicon and identifying different linguistic features, and the proposed approach is based on Bayesian probability. Finally, this paper presents the experiments on an emotion corpus of Chinese micro-blogs. The experimental results validate the feasibility of the approach. The existing problems and the further works are also present at the end.", "A notably challenging problem in emotion analysis is recognizing the cause of an emotion. Although there have been a few studies on emotion cause detection, most of them work on news reports or a few of them focus on microblogs using a single-user structure (i.e., all texts in a microblog are written by the same user). In this article, we focus on emotion cause detection for Chinese microblogs using a multiple-user structure (i.e., texts in a microblog are successively written by several users). 
First, based on the fact that the causes of an emotion of a focused user may be provided by other users in a microblog with the multiple-user structure, we design an emotion cause annotation scheme which can deal with such a complicated case, and then provide an emotion cause corpus using the annotation scheme. Second, based on the analysis of the emotion cause corpus, we formalize two emotion cause detection tasks for microblogs (current-subtweet-based emotion cause detection and original-subtweet-based emotion cause detection). Furthermore, in order to examine the difficulty of the two emotion cause detection tasks and the contributions of texts written by different users in a microblog with the multiple-user structure, we choose two popular classification methods (SVM and LSTM) to do emotion cause detection. Our experiments show that the current-subtweet-based emotion cause detection is much more difficult than the original-subtweet-based emotion cause detection, and texts written by different users are very helpful for both emotion cause detection tasks. This study presents a pilot study of emotion cause detection which deals with Chinese microblogs using a complicated structure.", "Emotion processing has always been a great challenge. Given the fact that an emotion is triggered by cause events and that cause events are an integral part of emotion, this paper constructs a Chinese emotion cause corpus as a first step towards automatic inference of cause-emotion correlation. The corpus focuses on five primary emotions, namely happiness, sadness, fear, anger, and surprise. It is annotated with emotion cause events based on our proposed annotation scheme. Corpus data shows that most emotions are expressed with causes, and that causes mostly occur before the corresponding emotion verbs. We also examine the correlations between emotions and cause events in terms of linguistic cues: causative verbs, perception verbs, epistemic markers, conjunctions, prepositions, and others. Results show that each group of linguistic cues serves as an indicator marking the cause events in different structures of emotional constructions. We believe that the emotion cause corpus will be the useful resource for automatic emotion cause detection as well as emotion detection and classification." ] }
1708.05509
2747543643
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts to apply GAN models to the problem of generating facial images of anime characters, but none of the existing work gives promising results. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspects, by collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies, we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Generative Adversarial Networks (GANs) @cite_12 show impressive results in image generation @cite_25 , image transfer @cite_8 , super-resolution @cite_26 , and many other generation tasks. The essence of GAN training can be summarized as training a generator model and a discriminator model simultaneously, where the discriminator tries to distinguish real examples, sampled from the ground-truth images, from the samples produced by the generator, while the generator tries to produce realistic samples that the discriminator cannot distinguish from the ground-truth samples. This idea can be described as an adversarial loss applied to both the generator and the discriminator during training, which effectively encourages the generator's outputs to resemble the original data distribution.
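The adversarial loss described above is conventionally written as the minimax objective of @cite_12 :

```latex
\min_{G}\,\max_{D}\;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator D ascends this objective while the generator G descends it, which is exactly the two-player game the surrounding text describes.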
{ "cite_N": [ "@cite_26", "@cite_25", "@cite_12", "@cite_8" ], "mid": [ "2787223504", "2607448608", "2964268978", "2737057113" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "This paper describes an intuitive generalization to the Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high probability modes. Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator specific GAN objective function with a diversity enforcing term that encourage different generators to generate diverse samples using a user-defined similarity based function. (2) We modify the discriminator objective function where along with finding the real and fake samples, the discriminator has to predict the generator which generated the given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally we show that our framework is able to produce high quality diverse samples for the challenging tasks such as image face generation and image-to-image translation. 
We also show that it is capable of learning a better feature representation in an unsupervised setting.", "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "Generative Adversarial Networks (GANs) have been shown to be able to sample impressively realistic images. GAN training consists of a saddle point optimization problem that can be thought of as an adversarial game between a generator which produces the images, and a discriminator, which judges if the images are real. Both the generator and the discriminator are commonly parametrized as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of the optimization procedure and the network parametrization to the success of GANs. To this end we introduce and study Generative Latent Optimization (GLO), a framework to train a generator without the need to learn a discriminator, thus avoiding challenging adversarial optimization problems. We show experimentally that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors." ] }
1708.05509
2747543643
Automatic generation of facial images has been well studied after the Generative Adversarial Network (GAN) came out. There exists some attempts applying the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a more clean, well-suited dataset and leverage proper, empirical application of DRAGAN. With quantitative analysis and case studies we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http: make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to general public.
Although the training process is quite simple, optimizing such models often leads to mode collapse, in which the generator always produces the same image. To train GANs stably, @cite_9 suggests rendering the discriminator omniscient whenever necessary. By learning a loss function to separate generated samples from real examples, LS-GAN @cite_14 focuses on improving poor generation results and thus avoids mode collapse. A more detailed discussion of the difficulty of training GANs is given in Section .
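Since the model described in this record relies on DRAGAN for stable training, below is a minimal PyTorch-style sketch of a DRAGAN-type gradient penalty applied around perturbed real samples; the perturbation scale `c` and weight `lambda_` follow common open-source implementations and are assumptions here, not the authors' exact settings:

```python
import torch

def dragan_penalty(discriminator, real_x, c=0.5, lambda_=10.0):
    # Perturb real samples within a local neighborhood (DRAGAN-style).
    noise = c * real_x.std() * torch.rand_like(real_x)
    x_hat = (real_x + noise).detach().requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    # Penalize gradient norms that stray from 1 near the data manifold,
    # discouraging the sharp discriminator surfaces linked to mode collapse.
    return lambda_ * ((grad_norm - 1.0) ** 2).mean()
```

The penalty is added to the discriminator loss at every step, regularizing it without changing the generator objective.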
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2785967511", "2787223504", "2687693326", "2963981733" ], "abstract": [ "Despite of the success of Generative Adversarial Networks (GANs) for image generation tasks, the trade-off between image diversity and visual quality are an well-known issue. Conventional techniques achieve either visual quality or image diversity; the improvement in one side is often the result of sacrificing the degradation in the other side. In this paper, we aim to achieve both simultaneously by improving the stability of training GANs. A key idea of the proposed approach is to implicitly regularizing the discriminator using a representative feature. For that, this representative feature is extracted from the data distribution, and then transferred to the discriminator for enforcing slow updates of the gradient. Consequently, the entire training process is stabilized because the learning curve of discriminator varies slowly. Based on extensive evaluation, we demonstrate that our approach improves the visual quality and diversity of state-of-the art GANs.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. 
Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the \"Frechet Inception Distance\" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.", "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the Frechet Inception Distance'' (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark." ] }
1708.05509
2747543643
Automatic generation of facial images has been well studied since the Generative Adversarial Network (GAN) came out. There exist some attempts to apply the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN. With quantitative analysis and case studies, we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.
Many variants of GAN have been proposed for generating images. @cite_25 applied convolutional neural networks in GAN to generate images from latent vector inputs. Instead of generating images from latent vectors, several methods use the same adversarial idea to generate images from more meaningful inputs. Mirza & Osindero introduced Conditional Generative Adversarial Nets @cite_22, using the image class label as a conditional input to generate MNIST digits of a particular class. @cite_16 further employed encoded text as input to produce images that match the text description. Instead of only feeding the conditional information as input, ACGAN @cite_17 also trains the discriminator as an auxiliary classifier that predicts the conditional input.
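The conditioning idea common to @cite_22 and ACGAN @cite_17 can be sketched as follows. This is a schematic PyTorch fragment, not the architectures of the cited papers; the layer sizes, class count, and flat 784-dimensional image representation are assumptions for illustration. The generator consumes a label embedding alongside the latent vector, and the ACGAN-style discriminator adds an auxiliary classification head that predicts the conditional input.

```python
import torch
import torch.nn as nn

NUM_CLASSES, Z_DIM = 10, 64

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, 16)             # label -> dense code
        self.net = nn.Sequential(nn.Linear(Z_DIM + 16, 128), nn.ReLU(),
                                 nn.Linear(128, 784), nn.Tanh())
    def forward(self, z, y):
        # Condition by concatenating the label embedding to the latent vector.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class ACDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.adv = nn.Linear(128, 1)             # real/fake logit
        self.cls = nn.Linear(128, NUM_CLASSES)   # auxiliary classifier head
    def forward(self, x):
        h = self.body(x)
        return self.adv(h), self.cls(h)

z = torch.randn(8, Z_DIM)
y = torch.randint(0, NUM_CLASSES, (8,))
fake = CondGenerator()(z, y)
validity, class_logits = ACDiscriminator()(fake)
# Training would combine an adversarial loss on `validity` with a
# cross-entropy loss between `class_logits` and the condition `y`.
```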
{ "cite_N": [ "@cite_22", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2596763562", "2964218010", "2963373786", "2787223504" ], "abstract": [ "Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.", "Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . 
We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators." ] }
1708.05096
2750197405
For many, this is no longer a valid question, and the case is considered settled with SDN/NFV (Software Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers solving many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey is different from the previous surveys on SDN-based RAN as it focuses on the salient problems and discusses solutions proposed within and outside SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, latency of general purpose processors (GPP), and the additional security vulnerabilities that softwarization brings along to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey on the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
In a recent and most comprehensive survey of SDN and virtualization research for LTE mobile networks @cite_51, the authors have provided a general overview of SDN and virtualization technologies and their respective benefits. They have developed a taxonomy to survey the research space based on the elements of modern cellular systems, e.g., access network, core network, and backhaul. Within each class, the authors have further classified the material in terms of relevant topics, such as resource virtualization, resource abstraction, and mobility management. They have also looked at the use cases in each class. It is the most comprehensive survey one can find in radio access network research relevant to SDN. The thrust of the survey is complementary to the present paper. Readers who want a better understanding of the material covered in Section are recommended to read @cite_51. On the other hand, the open challenges briefly discussed at the end of @cite_51, and the relevant work under each challenge, are discussed in detail in the present paper.
{ "cite_N": [ "@cite_51" ], "mid": [ "1796714434", "1609627895", "2518348455", "2075944151" ], "abstract": [ "Software-defined networking (SDN) features the decoupling of the control plane and data plane, a programmable network and virtualization, which enables network infrastructure sharing and the \"softwarization\" of the network functions. Recently, many research works have tried to redesign the traditional mobile network using two of these concepts in order to deal with the challenges faced by mobile operators, such as the rapid growth of mobile traffic and new services. In this paper, we first provide an overview of SDN, network virtualization, and network function virtualization, and then describe the current LTE mobile network architecture as well as its challenges and issues. By analyzing and categorizing a wide range of the latest research works on SDN and virtualization in LTE mobile networks, we present a general architecture for SDN and virtualization in mobile networks (called SDVMN) and then propose a hierarchical taxonomy based on the different levels of the carrier network. We also present an in-depth analysis about changes related to protocol operation and architecture when adopting SDN and virtualization in mobile networks. In addition, we list specific use cases and applications that benefit from SDVMN. Last but not least, we discuss the open issues and future research directions of SDVMN.", "Small cell networks have been broadly regarded as an imperative evolution path for the next-generation cellular networks. Dense small cell deployments will be connected to the core network by heterogeneous backhaul technologies such as fiber, microwave, high frequency wireless solutions, etc., which have their inherent limitations and impose big challenges on the operation of radio access network to meet the increasing rate demands in future networks. To address these challenges, this paper presents an efficient design considered in the iJOIN (Interworking and JOINt Design of an Open Access and Backhaul Network Architecture for Small Cells based on Cloud Networks) project with the objective of jointly optimizing backhaul and radio access network operations through the adoption of SDN (Software Defined Networking). Furthermore, based on this framework, the implementation of several intelligent management functions, including mobility management, network-wide energy optimization and data center placement, is demonstrated.", "The growing data traffic demand is forcing network operators to deploy more base stations, culminating in dense heterogeneous networks that require a high-connectivity backhaul. This scenario imposes significant challenges for current and future cellular networks, and Software Defined Networking (SDN) has been pointed as an enabling technology to overcome existing limitations. This paper shows how the OpenFlow protocol can be integrated into existing Long Term Evolution (LTE) networks to provide the required Quality of Service (QoS) in the network infrastructure. Three OpenFlow-based mechanisms are proposed: a traffic routing, an admission control function, and a traffic coexistence mechanism. Together, they can effectively control the bandwidth usage in the backhaul infrastructure, improving the QoS and ensuring a better user experience. 
Simulations were performed to validate the proposed mechanisms and highlight the benefits that can be achieved with the flexibility offered by the SDN technology.", "Cellular networks are currently experiencing a tremendous growth of data traffic. To cope with this demand, a close cooperation between academic researchers and industry standardization experts is necessary, which hardly exists in practice. In this paper, we try to bridge this gap between researchers and engineers by providing a review of current standard-related research efforts in wireless communication systems. Furthermore, we give an overview about our attempt in facilitating the exchange of information and results between researchers and engineers, via a common simulation platform for 3GPP long term evolution (LTE) and a corresponding webforum for discussion. Often, especially in signal processing, reproducing results of other researcher is a tedious task, because assumptions and parameters are not clearly specified, which hamper the consideration of the state-of-the-art research in the standardization process. Also, practical constraints, impairments imposed by technological restrictions and well-known physical phenomena, e.g., signaling overhead, synchronization issues, channel fading, are often disregarded by researchers, because of simplicity and mathematical tractability. Hence, evaluating the relevance of research results under practical conditions is often difficult. To circumvent these problems, we developed a standard-compliant opensource simulation platform for LTE that enables reproducible research in a well-defined environment. We demonstrate that innovative research under the confined framework of a real-world standard is possible, sometimes even encouraged. With examples of our research work, we investigate on the potential of several important research areas under typical practical conditions, and highlight consistencies as well as differences between theory and practice." ] }
1708.05096
2750197405
For many, this is no longer a valid question, and the case is considered settled with SDN/NFV (Software Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers solving many outstanding management issues regarding 5G. However, given the monumental task of softwarization of the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, the concern is very realistic that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey is different from the previous surveys on SDN-based RAN as it focuses on the salient problems and discusses solutions proposed within and outside SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, latency of general purpose processors (GPP), and the additional security vulnerabilities that softwarization brings along to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey on the state of the art of SDN-based RAN and clearly points out the gaps in the technology.
Another recent survey @cite_120 briefly covers all technologies and applications associated with 5G. It also touches upon SDN but only superficially covers some research work under the theme. A more in-depth analysis of some SDN-based mobile network architectures, i.e., @cite_34 @cite_91 @cite_146 @cite_137 @cite_152 @cite_111 @cite_118 @cite_28 @cite_188, is presented in @cite_49 in terms of the ideas presented in the proposals and their limitations. The survey in @cite_115 looks at the proposals for softwarization and cloudification of cellular networks in terms of optimization and provisions for energy harvesting for a sustainable future. The gaps in the technologies are also identified. All of the above-mentioned surveys, however, have a broader scope than just SDN-based mobile network architecture, and they have only looked at some SDN papers appropriate for the major themes of their survey papers.
{ "cite_N": [ "@cite_188", "@cite_118", "@cite_115", "@cite_91", "@cite_152", "@cite_28", "@cite_120", "@cite_137", "@cite_111", "@cite_146", "@cite_49", "@cite_34" ], "mid": [ "2610494282", "2896316144", "2256188983", "2343448572" ], "abstract": [ "The tremendous growth in communication technology is shaping a hyper-connected network where billions or connected devices are producing a huge volume of data. Cellular and mobile network is a major contributor towards this technology shift and require new architectural paradigm to provide low latency, high performance in a resource constrained environment. 5G technology deployment with fully IP-based connectivity is anticipated by 2020. However, there is no standard established for 5G technology and many efforts are being made to establish a unified 5G stander. In this context, variant technology such as Software Defined Network (SDN) and Network Function virtualization (NFV) are the best candidate. SDN dissociate control plane from data plane and network management is done on the centralized control plane. In this paper, a survey on state of the art on the 5G integration with the SDN is presented. A comprehensive review is presented for the different integrated architectures of 5G wireless network and the generalized solutions over the period 2010–2016. This comparative analysis of the existing solutions of SDN-based cellular network (5G) implementations provides an easy and concise view of the emerging trends by 2020.", "As a crucial step moving towards the next generation of super-fast wireless networks, recently the fifth-generation (5G) mobile wireless networks have received a plethora of research attention and efforts from both the academia and industry. The 5G mobile wireless networks are expected to provision distinct delay-bounded quality of service (QoS) guarantees for a wide range of multimedia services, applications, and users with extremely diverse requirements. However, how to efficiently support multimedia services over 5G wireless networks has imposed many new challenging issues not encountered before in the fourth-generation wireless networks. To overcome these new challenges, we propose a novel network-function virtualization and mobile-traffic offloading based software-defined network (SDN) architecture for heterogeneous statistical QoS provisioning over 5G multimedia mobile wireless networks. Specifically, we develop the novel SDN architecture to scalably virtualize wireless resources and physical infrastructures, based on user’s locations and requests, into three types of virtual wireless networks: virtual networks without offloading, virtual networks with WiFi offloading, and virtual networks with device-to-device offloading. We derive the optimal transmit power allocation schemes to maximize the aggregate effective capacity, overall spectrum efficiency, and other related performances for these three types of virtual wireless networks. We also derive the scalability improvements of our proposed three integrated virtual networks. Finally, we validate and evaluate our developed schemes through numerical analyses, showing significant performance improvements as compared with other existing schemes.", "The fifth generation (5G) mobile networks are envisioned to support the deluge of data traffic with reduced energy consumption and improved quality of service (QoS) provision. 
To this end, key enabling technologies, such as heterogeneous networks (HetNets), massive multiple-input multiple-output (MIMO), and millimeter wave (mmWave) techniques, have been identified to bring 5G to fruition. Regardless of the technology adopted, a user association mechanism is needed to determine whether a user is associated with a particular base station (BS) before data transmission commences. User association plays a pivotal role in enhancing the load balancing, the spectrum efficiency, and the energy efficiency of networks. The emerging 5G networks introduce numerous challenges and opportunities for the design of sophisticated user association mechanisms. Hence, substantial research efforts are dedicated to the issues of user association in HetNets, massive MIMO networks, mmWave networks, and energy harvesting networks. We introduce a taxonomy as a framework for systematically studying the existing user association algorithms. Based on the proposed taxonomy, we then proceed to present an extensive overview of the state-of-the-art in user association algorithms conceived for HetNets, massive MIMO, mmWave, and energy harvesting networks. Finally, we summarize the challenges as well as opportunities of user association in 5G and provide design guidelines and potential solutions for sophisticated user association mechanisms.", "The vision of next generation 5G wireless communications lies in providing very high data rates (typically of Gbps order), extremely low latency, manifold increase in base station capacity, and significant improvement in users’ perceived quality of service (QoS), compared to current 4G LTE networks. Ever increasing proliferation of smart devices, introduction of new emerging multimedia applications, together with an exponential rise in wireless data (multimedia) demand and usage is already creating a significant burden on existing cellular networks. 5G wireless systems, with improved data rates, capacity, latency, and QoS are expected to be the panacea of most of the current cellular networks’ problems. In this survey, we make an exhaustive review of wireless evolution toward 5G networks. We first discuss the new architectural changes associated with the radio access network (RAN) design, including air interfaces, smart antennas, cloud and heterogeneous RAN. Subsequently, we make an in-depth survey of underlying novel mm-wave physical layer technologies, encompassing new channel model estimation, directional antenna design, beamforming algorithms, and massive MIMO technologies. Next, the details of MAC layer protocols and multiplexing schemes needed to efficiently support this new physical layer are discussed. We also look into the killer applications, considered as the major driving force behind 5G. In order to understand the improved user experience, we provide highlights of new QoS, QoE, and SON features associated with the 5G evolution. For alleviating the increased network energy consumption and operating expenditure, we make a detail review on energy awareness and cost efficiency. As understanding the current status of 5G implementation is important for its eventual commercialization, we also discuss relevant field trials, drive tests, and simulation experiments. Finally, we point out major existing research issues and identify possible future research directions." ] }
1708.05137
2747668150
We propose a novel video object segmentation algorithm based on pixel-level matching using Convolutional Neural Networks (CNN). Our network aims to distinguish the target area from the background on the basis of the pixel-level similarity between two object units. The proposed network represents a target object using features from different depth layers in order to take advantage of both the spatial details and the category-level semantic information. Furthermore, we propose a feature compression technique that drastically reduces the memory requirements while maintaining the capability of feature representation. Two-stage training (pre-training and fine-tuning) allows our network to handle any target object regardless of its category (even if the object's type does not belong to the pre-training data) or of variations in its appearance through a video sequence. Experiments on large datasets demonstrate the effectiveness of our model - against related methods - in terms of accuracy, speed, and stability. Finally, we introduce the transferability of our network to different domains, such as the infrared data domain.
Most recent approaches @cite_16 @cite_13 @cite_10 @cite_34 @cite_26 @cite_17 separate discriminative objects from the background by optimizing an energy function under various pixel-graph relationships. For instance, fully connected graphs were proposed in @cite_6 to construct a long-range spatio-temporal graph structure robust to challenging situations such as occlusion. In another study @cite_19, a higher-order potential term over supervoxel cluster units was used to enforce the steadiness of the graph structure. More recently, non-local graph connections were effectively approximated in the bilateral space @cite_24, which drastically improved segmentation accuracy. However, many recent methods are too computationally expensive to deal with long video sequences. They are also strongly affected by cluttered backgrounds, resulting in a drifting effect. Furthermore, many challenges remain partly unsolved, such as large scale variations and dynamic appearance changes. The main reason behind these failure cases is likely poor target appearance representations that do not encompass any semantic-level information.
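As a deliberately generic instance of the energy formulations above, the NumPy sketch below scores a binary labeling of a 4-connected pixel grid with unary terms plus contrast-weighted Potts pairwise terms. The cited methods use far richer graphs (fully connected, supervoxel, bilateral), so this only illustrates what "optimizing an energy over a pixel graph" means; the weighting scheme and parameters are assumptions.

```python
import numpy as np

def segmentation_energy(labels, unary, image, lam=1.0, sigma=0.1):
    """E(y) = sum_i U_i(y_i) + lam * sum_{(i,j) in 4-neighborhood} w_ij * [y_i != y_j].

    labels: (H, W) binary labeling (0/1 ints), unary: (H, W, 2) per-pixel label costs,
    image:  (H, W) intensities used for contrast-sensitive edge weights w_ij.
    """
    H, W = labels.shape
    # Unary term: cost of the chosen label at every pixel.
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    for dy, dx in [(0, 1), (1, 0)]:                      # right and down neighbors
        a, b = labels[:H - dy, :W - dx], labels[dy:, dx:]
        ia, ib = image[:H - dy, :W - dx], image[dy:, dx:]
        w = np.exp(-(ia - ib) ** 2 / (2 * sigma ** 2))   # weaker penalty across image edges
        e += lam * (w * (a != b)).sum()
    return e
```

A segmentation method would then minimize this energy over all labelings, e.g., via graph cuts; the cost of that minimization on dense, long-range graphs is exactly the computational burden the paragraph above points out.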
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_34", "@cite_6", "@cite_24", "@cite_19", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2740060125", "2765667535", "1963851726", "88469699" ], "abstract": [ "In this paper, we investigate a weakly-supervised object detection framework. Most existing frameworks focus on using static images to learn object detectors. However, these detectors often fail to generalize to videos because of the existing domain shift. Therefore, we investigate learning these detectors directly from boring videos of daily activities. Instead of using bounding boxes, we explore the use of action descriptions as supervision since they are relatively easy to gather. A common issue, however, is that objects of interest that are not involved in human actions are often absent in global action descriptions known as \"missing label\". To tackle this problem, we propose a novel temporal dynamic graph Long Short-Term Memory network (TD-Graph LSTM). TD-Graph LSTM enables global temporal reasoning by constructing a dynamic graph that is based on temporal correlations of object proposals and spans the entire video. The missing label issue for each individual frame can thus be significantly alleviated by transferring knowledge across correlated objects proposals in the whole video. Extensive evaluations on a large-scale daily-life action dataset (i.e., Charades) demonstrates the superiority of our proposed method. We also release object bounding-box annotations for more than 5,000 frames in Charades. We believe this annotated data can also benefit other research on video-based object recognition in the future.", "In this paper, we propose a novel graph model, called weighted sparse representation regularized graph, to learn a robust object representation using multispectral (RGB and thermal) data for visual tracking. In particular, the tracked object is represented with a graph with image patches as nodes. This graph is dynamically learned from two aspects. First, the graph affinity (i.e., graph structure and edge weights) that indicates the appearance compatibility of two neighboring nodes is optimized based on the weighted sparse representation, in which the modality weight is introduced to leverage RGB and thermal information adaptively. Second, each node weight that indicates how likely it belongs to the foreground is propagated from others along with graph affinity. The optimized patch weights are then imposed on the extracted RGB and thermal features, and the target object is finally located by adopting the structured SVM algorithm. Moreover, we also contribute a comprehensive dataset for RGB-T tracking purpose. Comparing with existing ones, the new dataset has the following advantages: 1) Its size is sufficiently large for large-scale performance evaluation (total frame number: 210K, maximum frames per video pair: 8K). 2) The alignment between RGB-T video pairs is highly accurate, which does not need pre- and post-processing. 3) The occlusion levels are annotated for analyzing the occlusion-sensitive performance of different methods. Extensive experiments on both public and newly created datasets demonstrate the effectiveness of the proposed tracker against several state-of-the-art tracking methods.", "We propose an interactive video segmentation system built on the basis of occlusion and long term spatio-temporal structure cues. 
User supervision is incorporated in a superpixel graph clustering framework that differs crucially from prior art in that it modifies the graph according to the output of an occlusion boundary detector. Working with long temporal intervals (up to 100 frames) enables our system to significantly reduce annotation effort with respect to state of the art systems. Even though the segmentation results are less than perfect, they are obtained efficiently and can be used in weakly supervised learning from video or for video content description. We do not rely on a discriminative object appearance model and allow extracting multiple foreground objects together, saving user time if more than one object is present. Additional experiments with unsupervised clustering based on occlusion boundaries demonstrate the importance of this cue for video segmentation and thus validate our system design.", "We present a spatio-temporal energy minimization formulation for simultaneous video object discovery and co-segmentation across multiple videos containing irrelevant frames. Our approach overcomes a limitation that most existing video co-segmentation methods possess, i.e., they perform poorly when dealing with practical videos in which the target objects are not present in many frames. Our formulation incorporates a spatio-temporal auto-context model, which is combined with appearance modeling for superpixel labeling. The superpixel-level labels are propagated to the frame level through a multiple instance boosting algorithm with spatial reasoning, based on which frames containing the target object are identified. Our method only needs to be bootstrapped with the frame-level labels for a few video frames (e.g., usually 1 to 3) to indicate if they contain the target objects or not. Extensive experiments on four datasets validate the efficacy of our proposed method: 1) object segmentation from a single video on the SegTrack dataset, 2) object co-segmentation from multiple videos on a video co-segmentation dataset, and 3) joint object discovery and co-segmentation from multiple videos containing irrelevant frames on the MOViCS dataset and XJTU-Stevens, a new dataset that we introduce in this paper. The proposed method compares favorably with the state-of-the-art in all of these experiments." ] }
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
An alternative approach to more general distortion constraints is considered in @cite_8 and referred to as @math -separable distortion measures. (Footnote: we have changed their notation from @math -separable to @math -separable, in order to avoid confusion with our notation.) In @cite_8, a multi-letter distortion measure @math is defined as @math -separable if @math for an increasing function @math. The distortion cost constraints that we consider are more general, in the sense that our notion of a cost function @math applied to the distortion measure @math covers a broader class of distortion constraints than an average bound on @math -separable distortion measures as studied in @cite_8. Specifically, the average constraint on an @math -separable distortion measure is clearly a special case of our formulation, resulting from choosing @math and @math such that @math. Moreover, we allow for non-decreasing functions @math, which means that @math does not have to be strictly increasing. We also note that our focus is on privacy rather than source coding.
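Since the explicit equations were lost to the @math placeholders, the following LaTeX block is a hedged reconstruction of the comparison, writing the cited paper's separable measure with a function g and the cost function of this paper as f; the exact notational choices are assumptions.

```latex
% Hedged reconstruction of the g-separable definition (g increasing):
\[
  d_n^{(g)}(x^n, y^n)
    \;=\; g^{-1}\!\Big( \tfrac{1}{n} \textstyle\sum_{i=1}^{n} g\big( d(x_i, y_i) \big) \Big),
\]
% with the average constraint studied in the cited work:
\[
  \mathbb{E}\big[ d_n^{(g)}(X^n, Y^n) \big] \;\le\; D .
\]
% In a cost-function formulation  E[ f( (1/n) \sum_i d'(X_i, Y_i) ) ] <= D,
% choosing the per-letter distortion d'(x, y) = g(d(x, y)) and the cost
% f = g^{-1} recovers the g-separable average constraint as a special case.
```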
{ "cite_N": [ "@cite_8" ], "mid": [ "2789706212", "2001085501", "1605194072", "1964207488" ], "abstract": [ "In this work we relax the usual separability assumption made in rate-distortion literature and propose f -separable distortion measures, which are well suited to model non-linear penalties. The main insight behind f -separable distortion measures is to define an n-letter distortion measure to be an f -mean of single-letter distortions. We prove a rate-distortion coding theorem for stationary ergodic sources with f -separable distortion measures, and provide some illustrative examples of the resulting rate-distortion functions. Finally, we discuss connections between f -separable distortion measures, and the subadditive distortion measure previously proposed in literature.", "Given a set of @math points in @math , how many dimensions are needed to represent all pair wise distances within a specific distortion? This dimension-distortion tradeoff question is well understood for the @math norm, where @math dimensions suffice to achieve @math distortion. In sharp contrast, there is a significant gap between upper and lower bounds for dimension reduction in @math . A recent result shows that distortion @math can be achieved with @math dimensions. On the other hand, the only lower bounds known are that distortion @math requires @math dimensions and that distortion @math requires @math dimensions. In this work, we show the first near linear lower bounds for dimension reduction in @math . In particular, we show that @math distortion requires at least @math dimensions. Our proofs are combinatorial, but inspired by linear programming. In fact, our techniques lead to a simple combinatorial argument that is equivalent to the LP based proof of Brinkman-Charikar for lower bounds on dimension reduction in @math .", "Consider the recovery of an unknown signal @math from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that @math is sparse, and that the measurements are of the form @math . Since such measurements give no information on the norm of @math , recovery methods typically assume that @math . We show that if one allows more generally for quantized affine measurements of the form @math , and if the vectors @math are random, an appropriate choice of the affine shifts @math allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. In addition, we show that for arbitrary fixed @math in the annulus @math , one may estimate the norm @math up to additive error @math from @math such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors in the sense that with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors @math in a Euclidean ball of known radius.", "We investigate nonparametric multiproduct pricing problems, in which we want to find revenue maximizing prices for products @math based on a set of customer samples @math . We mostly focus on the unit-demand case, in which products constitute strict substitutes and each customer aims to purchase a single product. In this setting a customer sample consists of a number of nonzero values for different products and possibly an additional product ranking. Once prices are fixed, each customer chooses to buy one of the products she can afford based on some predefined selection rule. 
We distinguish between the min-buying, max-buying, and rank-buying models. Some of our results also extend to single-minded pricing, in which case products are strict complements and every customer seeks to buy a single set of products, which she purchases if the sum of prices is below her valuation for that set. For the min-buying model we show that the revenue maximization problem is not approximable within factor @math for some constant @math , unless @math , thereby almost closing the gap between the known algorithmic results and previous lower bounds. We also prove inapproximability within @math , @math being an upper bound on the number of nonzero values per customer, and @math under slightly stronger assumptions and provide matching upper bounds. Surprisingly, these hardness results hold even if a price ladder constraint, i.e., a predefined order on the prices of all products, is given. Without the price ladder constraint we obtain similar hardness results for the special case of uniform valuations, i.e., the case that every customer has identical values for all the products she is interested in, assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas. Introducing a slightly more general problem definition in which customers are given as an explicit probability distribution, we obtain inapproximability within @math assuming @math . These results apply to single-minded pricing as well. For the max-buying model a polynomial-time approximation scheme exists if a price ladder is given. We give a matching lower bound by proving strong NP-hardness. Assuming limited product supply, we analyze a generic local search algorithm and prove that it is 2-approximate. Finally, we discuss implications for the rank-buying model." ] }
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
In the context of privacy, the privacy-utility tradeoff with distinct @math and @math is studied in @cite_23 and more extensively in @cite_6, but the utility metric is restricted to identity cost functions, i.e., @math. Generalizing this to the excess distortion constraint was considered in @cite_20, where we also differentiated between the explicit availability or unavailability of the private data @math to the privacy mechanism. Information-theoretic approaches to privacy that are agnostic to the length of the dataset are considered in @cite_25 @cite_16 @cite_19.
{ "cite_N": [ "@cite_6", "@cite_19", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2587813977", "2564029303", "2951448804", "1622686296" ], "abstract": [ "The tradeoff between privacy and utility is studied for small datasets using tools from fixed error asymptotics in information theory. The problem is formulated as determining the privacy mechanism (random mapping) which minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version, subject to a distortion constraint between the public features and the released version. An excess probability bound is used to constrain the distortion, thus limiting the random variation in distortion due to the finite length. Bounds are derived for the following variants of the problem: (1) whether the mechanism is memoryless (local privacy) or not (global privacy), (2) whether the privacy mechanism has direct access to the private data or not. It is shown that these settings yield different performance in the first order: for global privacy, the first-order leakage decreases with the excess probability, whereas for local privacy it remains constant. The derived bounds also provide tight performance results up to second order for local privacy, as well as bounds on the second order term for global privacy.", "This paper investigates the relation between three different notions of privacy: identifiability, differential privacy, and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the expected Hamming distance between the input and output databases, we establish some fundamental connections between these three privacy notions. Given a maximum allowable distortion @math , we define the privacy-distortion functions @math , @math , and @math to be the smallest (most private best) identifiability level, differential privacy level, and mutual information between the input and the output, respectively. We characterize @math and @math , and prove that @math for @math within certain range, where @math is a constant determined by the prior distribution of the original database @math , and diminishes to zero when @math is uniformly distributed. Furthermore, we show that @math and @math can be achieved by the same mechanism for @math within certain range, i.e., there is a mechanism that simultaneously minimizes the identifiability level and achieves the best mutual-information privacy. Based on these two connections, we prove that this mutual-information optimal mechanism satisfies @math -differential privacy with @math . The results in this paper reveal some consistency between two worst case notions of privacy, namely, identifiability and differential privacy, and an average notion of privacy, mutual-information privacy.", "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). 
Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .", "We investigate the problem of intentionally disclosing information about a set of measurement points X (useful information), while guaranteeing that little or no information is revealed about a private variable S (private information). Given that S and X are drawn from a finite set with joint distribution pS,X, we prove that a non-trivial amount of useful information can be disclosed while not disclosing any private information if and only if the smallest principal inertia component of the joint distribution of S and X is 0. This fundamental result characterizes when useful information can be privately disclosed for any privacy metric based on statistical dependence. We derive sharp bounds for the tradeoff between disclosure of useful and private information, and provide explicit constructions of privacy-assuring mappings that achieve these bounds." ] }
1708.05468
2747901247
The privacy-utility tradeoff problem is formulated as determining the privacy mechanism (random mapping) that minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version. The minimization is studied with two types of constraints on the distortion between the public features and the released version of the dataset: (i) subject to a constraint on the expected value of a cost function @math applied to the distortion, and (ii) subject to bounding the complementary CDF of the distortion by a non-increasing function @math . The first scenario captures various practical cost functions for distorted released data, while the second scenario covers large deviation constraints on utility. The asymptotic optimal leakage is derived in both scenarios. For the distortion cost constraint, it is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. For the complementary CDF bound on distortion, the asymptotic leakage is derived for general mechanisms and shown to be the integral of the single letter leakage function with respect to the Lebesgue measure defined based on the refined bound on distortion. However, it is shown that memoryless mechanisms are generally suboptimal in both cases.
In @cite_20, we also allow the mechanisms to be either memoryless (also referred to as local) or general. This approach has also been considered in the context of differential privacy (DP) (see, for example, @cite_21 @cite_24 @cite_18 @cite_4 @cite_10). In the information-theoretic context, it is useful to understand how memoryless mechanisms behave under the more general distortion constraints considered here. Furthermore, even less is known about how general mechanisms behave, and that is what this paper aims to address.
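For intuition on the memoryless case, the NumPy sketch below evaluates a candidate single-letter mechanism W(y|x) on a toy joint distribution, computing the leakage I(S;Y) and the expected distortion cost E[f(d(X,Y))] that appear in the problem formulation. The randomized-response mechanism, the Hamming-style distortion, and the quadratic cost f are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def leakage_and_cost(P_sx, W, d, f):
    """Leakage I(S;Y) in bits and expected cost E[f(d(X,Y))] for a
    memoryless mechanism W[x, y] = P(Y=y | X=x) applied letter by letter."""
    p_x = P_sx.sum(axis=0)                       # marginal of the public feature X
    P_sy = P_sx @ W                              # joint distribution of (S, Y)
    prod = P_sy.sum(axis=1, keepdims=True) * P_sy.sum(axis=0, keepdims=True)
    m = P_sy > 0                                 # avoid log(0) terms
    leakage = (P_sy[m] * np.log2(P_sy[m] / prod[m])).sum()
    cost = (p_x[:, None] * W * f(d)).sum()       # E[f(d(X,Y))]
    return leakage, cost

# Toy example: binary, correlated (S, X); Y produced by randomized response.
P_sx = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                    # P(S=s, X=x)
eps = 0.2
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])                   # flip X with probability eps
d = np.abs(np.arange(2)[:, None] - np.arange(2)[None, :]).astype(float)
print(leakage_and_cost(P_sx, W, d, f=np.square))
```

Sweeping eps in such a sketch traces out a simple privacy-utility tradeoff curve for memoryless mechanisms; the point of the paper is that general (non-memoryless) mechanisms can do better under the constraints considered.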
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_24", "@cite_10", "@cite_20" ], "mid": [ "2134568086", "2587813977", "2152137554", "2154086287" ], "abstract": [ "The rate distortion behavior of sparse memoryless sources is studied. These serve as models of sparse signal representations and facilitate the performance analysis of “sparsifying” transforms like the wavelet transform and nonlinear approximation schemes. For strictly sparse binary sources with Hamming distortion, R(D) is shown to be almost linear. For nonstrictly sparse continuous-valued sources, termed compressible, two measures of compressibility are introduced: incomplete moments and geometric mean. The former lead to low- and high-rate upper bounds on mean squared error D(R), while the latter yields lower and upper bounds on source entropy, thereby characterizing asymptotic R(D) behavior. Thus, the notion of compressibility is quantitatively connected with actual lossy compression. These bounding techniques are applied to two source models: Gaussian mixtures and power laws matching the approximately scale-invariant decay of wavelet coefficients. The former are versatile models for sparse data, which in particular allow to bound high-rate compression performance of a scalar mixture compared to a corresponding unmixed transform coding system. Such a comparison is interesting for transforms with known coefficient decay, but unknown coefficient ordering, e.g., when positions of highest-variance coefficients are unknown. The use of these models and results in distributed coding and compressed sensing scenarios are also discussed.", "The tradeoff between privacy and utility is studied for small datasets using tools from fixed error asymptotics in information theory. The problem is formulated as determining the privacy mechanism (random mapping) which minimizes the mutual information (a metric for privacy leakage) between the private features of the original dataset and a released version, subject to a distortion constraint between the public features and the released version. An excess probability bound is used to constrain the distortion, thus limiting the random variation in distortion due to the finite length. Bounds are derived for the following variants of the problem: (1) whether the mechanism is memoryless (local privacy) or not (global privacy), (2) whether the privacy mechanism has direct access to the private data or not. It is shown that these settings yield different performance in the first order: for global privacy, the first-order leakage decreases with the excess probability, whereas for local privacy it remains constant. The derived bounds also provide tight performance results up to second order for local privacy, as well as bounds on the second order term for global privacy.", "In this paper, we consider a discrete memoryless state-dependent relay channel with non-causal Channel State Information (CSI). We investigate three different cases in which perfect channel states can be known non-causally: i) only to the source, ii) only to the relay or iii) both to the source and to the relay node. For these three cases we establish lower bounds on the channel capacity (achievable rates) based on using Gel'fand-Pinsker coding at the nodes where the CSI is available and using Compress-and-Forward (CF) strategy at the relay. 
Furthermore, for the general Gaussian relay channel with additive independent and identically distributed (i.i.d) states and noise, we obtain lower bounds on the capacity for the cases in which CSI is available at the source or at the relay. We also compare our derived bounds with the previously obtained results which were based on Decode-and-Forward (DF) strategy, and we show the cases in which our derived lower bounds outperform DF based bounds, and can achieve the rates close to the upper bound.", "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism M* -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user u, no matter what its side information and preferences, derives as much utility from M* as from interacting with a differentially private mechanism Mu that is optimally tailored to u. More precisely, for every user u there is an optimal mechanism Mu for it that factors into a user-independent part (the geometric mechanism M*) followed by user-specific post-processing that can be delegated to the user itself. The first part of our proof of this result characterizes the optimal differentially private mechanism for a fixed but arbitrary user in terms of a certain basic feasible solution to a linear program with constraints that encode differential privacy. The second part shows that all of the relevant vertices of this polytope (ranging over all possible users) are derivable from the geometric mechanism via suitable remappings of its range." ] }
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem, and (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
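The second stage described above can be illustrated with a toy per-pixel matcher. This NumPy sketch is only a schematic reading of the pipeline: the actual method matches CNN features with spatial context and supports multiple controllable outputs, all of which is omitted here, and the array shapes and random data are assumptions.

```python
import numpy as np

def pixelwise_nn(query_feats, db_feats, db_pixels):
    """Compose a high-frequency output by copying, for each query pixel, the
    exemplar pixel whose (smoothed) feature is closest in Euclidean distance.

    query_feats: (H, W, C) per-pixel features of the smoothed CNN output
    db_feats:    (N, C)    features of exemplar pixels from training pairs
    db_pixels:   (N, 3)    corresponding high-frequency RGB values
    """
    H, W, C = query_feats.shape
    q = query_feats.reshape(-1, C)
    # Squared Euclidean distance from every query pixel to every exemplar.
    d2 = (q ** 2).sum(1, keepdims=True) - 2.0 * q @ db_feats.T + (db_feats ** 2).sum(1)
    nearest = d2.argmin(axis=1)
    return db_pixels[nearest].reshape(H, W, 3)

out = pixelwise_nn(np.random.rand(4, 4, 8), np.random.rand(100, 8), np.random.rand(100, 3))
```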
Synthesis with CNNs: Convolutional Neural Networks (CNNs) have enjoyed great success for various discriminative pixel-level tasks such as segmentation @cite_9 @cite_45 , depth and surface normal estimation @cite_25 @cite_9 @cite_1 @cite_37 , and semantic boundary detection @cite_9 @cite_34 . Such networks are usually trained using standard losses (such as softmax or @math regression) on image-label data pairs. However, such networks do not typically perform well for the inverse problem of image synthesis from an (incomplete) label, though exceptions do exist @cite_33 . A major innovation was the introduction of adversarially-trained generative networks (GANs) @cite_10 . This formulation was hugely influential in computer vision, having been applied to various image generation tasks that condition on a low-resolution image @cite_44 @cite_43 , a segmentation mask @cite_11 , a surface normal map @cite_32 , and other inputs @cite_4 @cite_6 @cite_31 @cite_23 @cite_8 @cite_14 . Most related to our work is @cite_11 , who propose a general loss function for adversarial learning and apply it to a diverse set of image synthesis tasks (a minimal sketch of such a conditional adversarial objective follows this record).
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_4", "@cite_33", "@cite_8", "@cite_9", "@cite_1", "@cite_32", "@cite_6", "@cite_44", "@cite_43", "@cite_45", "@cite_23", "@cite_31", "@cite_34", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2526782364", "2563705555", "2951402970", "2786129249" ], "abstract": [ "Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. 
The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.", "Recently, the convolutional neural network (CNN) has been successfully applied to the task of brain tumor segmentation. However, the effectiveness of a CNN-based method is limited by the small receptive field, and the segmentation results don’t perform well in the spatial contiguity. Therefore, many attempts have been made to strengthen the spatial contiguity of the network output. In this paper, we proposed an adversarial training approach to train the CNN network. A discriminator network is trained along with a generator network which produces the synthetic segmentation results. The discriminator network is encouraged to discriminate the synthetic labels from the ground truth labels. Adversarial adjustments provided by the discriminator network are fed back to the generator network to help reduce the differences between the synthetic labels and the ground truth labels and reinforce the spatial contiguity with high-order loss terms. The presented method is evaluated on the Brats2017 training dataset. The experiment results demonstrate that the presented method could enhance the spatial contiguity of the segmentation results and improve the segmentation accuracy." ] }
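To make the conditional adversarial formulation discussed above concrete, the following is a minimal sketch of one training step, assuming PyTorch; `G`, `D`, the optimizers, and the pix2pix-style L1 weight are illustrative assumptions, not the architecture or loss of any cited paper.

```python
# Minimal conditional-GAN training step (sketch; names are illustrative).
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_g, opt_d, cond, real):
    """cond: conditioning input (e.g., a low-res image or label map); real: target image."""
    # Discriminator: push real pairs toward 1, fake pairs toward 0.
    fake = G(cond).detach()
    d_real, d_fake = D(cond, real), D(cond, fake)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D, plus an L1 reconstruction term (pix2pix-style weighting assumed).
    fake = G(cond)
    d_fake = D(cond, fake)
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + 100.0 * F.l1_loss(fake, real))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```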
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem; (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Interpretability and user-control: Interpreting and explaining the outputs of generative deep networks is an open problem. As a community, we do not have a clear understanding of what, where, and how outputs are generated. Our work is fundamentally based on nearest neighbors, which explicitly reveal how each pixel-level output is generated (by revealing where it was copied from). This makes our synthesized outputs quite interpretable. One important consequence is the ability to intuitively edit and control the process of synthesis. @cite_38 provide a user with controls for editing an image, such as color and outline. Instead of using a predefined set of editing operations, however, we allow a user an arbitrarily fine level of control through on-the-fly editing of the exemplar set (e.g., "resynthesize an image using the eye from this image and the nose from that one"; see the sketch after this record).
{ "cite_N": [ "@cite_38" ], "mid": [ "2797046819", "2560481159", "2530372461", "2962963674" ], "abstract": [ "Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO, DeepFashion, shoes, Market-1501 and handbags, the approach demonstrates significant improvements over the state-of-the-art.", "Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.", "Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.", "Deep generative models have demonstrated great performance in image synthesis. 
However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net [30] for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO [20], DeepFashion [21, 23], shoes [43], Market-1501 [47] and handbags [49] the approach demonstrates significant improvements over the state-of-the-art." ] }
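The exemplar-level control described above can be illustrated as restricting the pixel-wise nearest-neighbor search to a user-chosen subset of exemplar pixels; everything below (`pixelwise_nn`, `allowed_mask`) is a hypothetical sketch of the idea, not the authors' implementation.

```python
# Sketch: user control by editing which exemplar pixels the NN stage may copy from.
import numpy as np

def pixelwise_nn(query_feats, exemplar_feats, allowed_mask):
    """query_feats: (H*W, D); exemplar_feats: (N, D) features of exemplar pixels;
    allowed_mask: (N,) bool chosen by the user (e.g., eyes only from image 3)."""
    idx_pool = np.flatnonzero(allowed_mask)        # restrict the copy source
    pool = exemplar_feats[idx_pool]
    # Cosine-similarity nearest neighbor for each query pixel.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    p = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    nn = (q @ p.T).argmax(axis=1)
    return idx_pool[nn]  # exemplar pixel to copy appearance from, per query pixel
```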
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem; (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Correspondence: An important byproduct of pixelwise NN is the generation of pixelwise correspondences between the synthesized output and training examples. Establishing such pixel-level correspondence has been one of the core challenges in computer vision @cite_29 @cite_17 @cite_51 @cite_20 @cite_50 @cite_30 @cite_12 . @cite_18 use SIFT flow @cite_51 to hallucinate details for image super-resolution. @cite_12 propose a CNN to predict appearance flow that can be used to transfer information from input views to synthesize a new view. @cite_17 generate 3D reconstructions by training a CNN to learn correspondence between object instances. Our work follows from the crucial observation of @cite_20 , who suggest that features from pre-trained convnets can also be used for pixel-level correspondences. In this work, we make an additional empirical observation: hypercolumn features trained for semantic segmentation capture nuances and details better than those trained for image classification. This finding helped us establish semantic correspondences between the pixels in query and training images, and enabled us to extract high-frequency information from the training examples to synthesize a new image from a given input (a hypercolumn sketch follows this record).
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_50", "@cite_51", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2558625610", "2950181906", "2348664362", "2785325870" ], "abstract": [ "Robust estimation of correspondences between image pixels is an important problem in robotics, with applications in tracking, mapping, and recognition of objects, environments, and other agents. Correspondence estimation has long been the domain of hand-engineered features, but more recently deep learning techniques have provided powerful tools for learning features from raw data. The drawback of the latter approach is that a vast amount of (labeled, typically) training data are required for learning. This paper advocates a new approach to learning visual descriptors for dense correspondence estimation in which we harness the power of a strong three-dimensional generative model to automatically label correspondences in RGB-D video data. A fully convolutional network is trained using a contrastive loss to produce viewpoint- and lighting-invariant descriptors. As a proof of concept, we collected two datasets: The first depicts the upper torso and head of the same person in widely varied settings, and the second depicts an office as seen on multiple days with objects rearranged within. Our datasets focus on revisitation of the same objects and environments, and we show that by training the CNN only from local tracking data, our learned visual descriptor generalizes toward identifying nonlabeled correspondences across videos. We furthermore show that our approach to descriptor learning can be used to achieve state-of-the-art single-frame localization results on the MSR 7-scenes dataset without using any labels identifying correspondences between separate videos of the same scenes at training time.", "We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. We approach this as a learning task but, critically, instead of learning to synthesize pixels from scratch, we learn to copy them from the input image. Our approach exploits the observation that the visual appearance of different views of the same instance is highly correlated, and such correlation could be explicitly learned by training a convolutional neural network (CNN) to predict appearance flows -- 2-D coordinate vectors specifying which pixels in the input view could be used to reconstruct the target view. Furthermore, the proposed framework easily generalizes to multiple input views by learning how to optimally combine single-view predictions. We show that for both objects and scenes, our approach is able to synthesize novel views of higher perceptual quality than previous CNN-based techniques.", "We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. We approach this as a learning task but, critically, instead of learning to synthesize pixels from scratch, we learn to copy them from the input image. Our approach exploits the observation that the visual appearance of different views of the same instance is highly correlated, and such correlation could be explicitly learned by training a convolutional neural network (CNN) to predict appearance flows – 2-D coordinate vectors specifying which pixels in the input view could be used to reconstruct the target view. 
Furthermore, the proposed framework easily generalizes to multiple input views by learning how to optimally combine single-view predictions. We show that for both objects and scenes, our approach is able to synthesize novel views of higher perceptual quality than previous CNN-based techniques.", "Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL ." ] }
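As an illustration of hypercolumn descriptors for pixel-level correspondence, the sketch below stacks upsampled activations from several layers of a pretrained network; the torchvision VGG16 backbone and the chosen tap layers are assumptions for demonstration, since the cited works each use their own networks.

```python
# Hypercolumn sketch: every pixel gets a multi-scale descriptor built from
# several intermediate activations, all upsampled to the input resolution.
import torch
import torch.nn.functional as F
import torchvision

vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
TAPS = {3, 8, 15, 22}  # relu1_2, relu2_2, relu3_3, relu4_3

@torch.no_grad()
def hypercolumns(img):                      # img: (1, 3, H, W), ImageNet-normalized
    h, w = img.shape[-2:]
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAPS:
            feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                       align_corners=False))
    return torch.cat(feats, dim=1)          # (1, sum of tapped channels, H, W)
```

Pixel correspondence then reduces to nearest-neighbor matching between the per-pixel columns of two such feature maps.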
1708.05349
2746073525
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem; (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest-neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.
Nonparametrics: Our work closely follows data-driven approaches that make use of nearest neighbors @cite_7 @cite_49 @cite_40 @cite_27 @cite_36 @cite_19 . Hays and Efros @cite_49 match a query image to 2 million training images for tasks such as image completion. We make use of dramatically smaller training sets by allowing for compositional matches. @cite_48 propose a two-step pipeline for face hallucination where global constraints capture overall structure and local constraints produce photorealistic local features. While they focus on the task of facial super-resolution, we address a variety of synthesis applications. Finally, our compositional approach is inspired by Boiman and Irani @cite_39 @cite_21 , who reconstruct a query image via compositions of training examples (a toy version follows this record).
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_48", "@cite_21", "@cite_39", "@cite_19", "@cite_27", "@cite_40", "@cite_49" ], "mid": [ "1962739028", "2953278262", "2507235960", "2951461827" ], "abstract": [ "Both parametric and non-parametric approaches have demonstrated encouraging performances in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN [12] in that the tailored cross image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images well demonstrate the significant performance gain from the quasi-parametric model over the state-of-the-arts [29, 30], for the human parsing task.", "Both parametric and non-parametric approaches have demonstrated encouraging performances in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN in that the tailored cross image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. 
Comprehensive evaluations over a large dataset with 7,700 annotated human images well demonstrate the significant performance gain from the quasi-parametric model over the state-of-the-arts, for the human parsing task.", "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.", "We present a novel approach for synthesizing photo-realistic images of people in arbitrary poses using generative adversarial learning. Given an input image of a person and a desired pose represented by a 2D skeleton, our model renders the image of the same person under the new pose, synthesizing novel views of the parts visible in the input image and hallucinating those that are not seen. This problem has recently been addressed in a supervised manner, i.e., during training the ground truth images under the new poses are given to the network. We go beyond these approaches by proposing a fully unsupervised strategy. We tackle this challenging scenario by splitting the problem into two principal subtasks. First, we consider a pose conditioned bidirectional generator that maps back the initially rendered image to the original pose, hence being directly comparable to the input image without the need to resort to any training image. Second, we devise a novel loss function that incorporates content and style terms, and aims at producing images of high perceptual quality. Extensive experiments conducted on the DeepFashion dataset demonstrate that the images rendered by our model are very close in appearance to those obtained by fully supervised approaches." ] }
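As a toy version of compositional reconstruction in the spirit of Boiman and Irani, the sketch below rebuilds a query image patch-by-patch from whichever exemplar patch matches best, which is how a small exemplar set can still explain a novel image; all names are illustrative.

```python
# Toy compositional NN: reconstruct the query from non-overlapping exemplar patches.
import numpy as np

def compose_from_exemplars(query, exemplars, patch=8):
    H, W = query.shape[:2]
    out = np.zeros_like(query)
    # Flatten all exemplar patches into one candidate pool.
    cands, pixels = [], []
    for ex in exemplars:
        for y in range(0, ex.shape[0] - patch + 1, patch):
            for x in range(0, ex.shape[1] - patch + 1, patch):
                p = ex[y:y+patch, x:x+patch]
                cands.append(p.reshape(-1)); pixels.append(p)
    cands = np.stack(cands).astype(np.float32)
    # Copy the best-matching exemplar patch into each query location.
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            q = query[y:y+patch, x:x+patch].reshape(-1).astype(np.float32)
            best = int(((cands - q) ** 2).sum(axis=1).argmin())
            out[y:y+patch, x:x+patch] = pixels[best]
    return out
```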
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Visual Conversational Agents. Our AI agents are visual conversational models, which have recently emerged as a popular research area in visually-grounded language modeling @cite_25 @cite_4 @cite_6 @cite_26 . @cite_25 introduced the task of Visual Dialog and collected the VisDial dataset by pairing subjects on Amazon Mechanical Turk (AMT) to chat about an image (with assigned roles of questioner and answerer). @cite_4 pre-trained questioner and answerer agents on this VisDial dataset via supervised learning and fine-tuned them via self-talk (reinforcement learning), observing that the RL-fine-tuned questioner-answerer pairs are better at image-guessing after interacting with each other. However, as described in the introduction, they do not evaluate whether this change in agent-agent performance translates to human-AI teams (a policy-gradient sketch of such self-talk fine-tuning follows this record).
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_25", "@cite_6" ], "mid": [ "2953119472", "2741373182", "2754573465", "2795571593" ], "abstract": [ "We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end -- from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a 'sanity check' demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol and start using certain symbols to ask answer about certain visual attributes (shape color style). Thus, we demonstrate the emergence of grounded language and communication among 'visual' dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset, where we pretrain with supervised dialog data and show that the RL 'fine-tuned' agents significantly outperform SL agents. Interestingly, the RL Qbot learns to ask questions that Abot is good at, ultimately resulting in more informative dialog and a better team.", "We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Reinforcement Learning (RL), must be able to handle natural conversations with human users and achieve good learning performance (accuracy) while minimising human effort in the learning process. We train and evaluate this system in interaction with a simulated human tutor, which is built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual learning task. The results show that: 1) The learned policy can coherently interact with the simulated user to achieve the goal of the task (i.e. learning visual attributes of objects, e.g. colour and shape); and 2) it finds a better trade-off between classifier accuracy and tutoring costs than hand-crafted rule-based policies, including ones with dynamic policies.", "Building dialog agents that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human responses in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialog. Our model represents the first attempt to integrating a large commonsense knowledge base into end-to-end conversational models. In the retrieval-based scenario, we propose the Tri-LSTM model to jointly take into account message and commonsense for selecting an appropriate response. Our experiments suggest that the knowledge-augmented models are superior to their knowledge-free counterparts in automatic evaluation.", "Computer-based conversational agents are becoming ubiquitous. 
However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics." ] }
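For intuition, the self-talk (reinforcement learning) fine-tuning mentioned above can be sketched as a REINFORCE update whose reward reflects image-guessing progress; the function below is a generic policy-gradient step under that assumption, not the cited papers' exact objective or reward.

```python
# Generic REINFORCE step for a dialog agent (sketch; reward definition assumed).
import torch

def reinforce_step(logprobs, rewards, optimizer, baseline=0.0):
    """logprobs: (T,) log-probabilities of the sampled utterances; rewards: (T,)
    per-round reward, e.g., reduction in distance to the true image embedding."""
    advantage = rewards - baseline
    loss = -(logprobs * advantage).sum()   # gradient ascent on expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```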
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Human Computation Games. Human computation games have been shown to be time- and cost-efficient, reliable, intrinsically engaging for participants @cite_23 @cite_29 , and hence an effective method to collect data annotations. There is a long line of work on designing such Games with a Purpose (GWAP) @cite_11 for data labeling purposes across various domains including images @cite_28 @cite_20 @cite_7 @cite_27 , audio @cite_17 @cite_18 , language @cite_15 @cite_1 , movies @cite_24 . While such games have traditionally focused on human-human collaboration, we extend these ideas to human-AI teams. Rather than collecting labeled data, our game is designed to measure the effectiveness of the AI in the context of human-AI teams.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_28", "@cite_29", "@cite_1", "@cite_17", "@cite_24", "@cite_27", "@cite_23", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "2077805339", "2152729775", "1600300810", "2119249434" ], "abstract": [ "Human computation can address complex computational problems by tapping into large resource pools for relatively little cost. Two prominent human-computation techniques - games with a purpose (GWAP) and microtask crowdsourcing - can help resolve semantic-technology-related tasks, including knowledge representation, ontology alignment, and semantic annotation. To evaluate which approach is better with respect to costs and benefits, the authors employ categorization challenges in Wikipedia to ultimately create a large, general-purpose ontology. They first use the OntoPronto GWAP, then replicate its problem-solving setting in Amazon Mechanical Turk, using a similar task-design structure, evaluation mechanisms, and input data.", "Developing computer-controlled groups to engage in combat, control the use of limited resources, and create units and buildings in real-time strategy (RTS) games is a novel application in game AI. However, tightly controlled online commercial game pose challenges to researchers interested in observing player activities, constructing player strategy models, and developing practical AI technology in them. Instead of setting up new programming environments or building a large amount of agentpsilas decision rules by playerpsilas experience for conducting real-time AI research, the authors use replays of the commercial RTS game StarCraft to evaluate human player behaviors and to construct an intelligent system to learn human-like decisions and behaviors. A case-based reasoning approach was applied for the purpose of training our system to learn and predict player strategies. Our analysis indicates that the proposed system is capable of learning and predicting individual player strategies, and that players provide evidence of their personal characteristics through their building construction order.", "Motivation has been one of the central challenges of human computation. A promising approach is the integration of human computation tasks into digital games. Different human computation games have been successfully deployed, but tend to provide relatively narrow gaming experiences. This survey discusses various approaches of digital games for human computation and aims to explore the ties to signal processing and possible generalizations.", "How do we build multiagent algorithms for agent interactions with human adversaries? Stackelberg games are natural models for many important applications that involve human interaction, such as oligopolistic markets and security domains. In Stackelberg games, one player, the leader, commits to a strategy and the follower makes their decision with knowledge of the leader's commitment. Existing algorithms for Stackelberg games efficiently find optimal solutions (leader strategy), but they critically assume that the follower plays optimally. Unfortunately, in real-world applications, agents face human followers (adversaries) who --- because of their bounded rationality and limited observation of the leader strategy --- may deviate from their expected optimal response. Not taking into account these likely deviations when dealing with human adversaries can cause an unacceptable degradation in the leader's reward, particularly in security applications where these algorithms have seen real-world deployment. 
To address this crucial problem, this paper introduces three new mixed-integer linear programs (MILPs) for Stackelberg games to consider human adversaries, incorporating: (i) novel anchoring theories on human perception of probability distributions and (ii) robustness approaches for MILPs to address human imprecision. Since these new approaches consider human adversaries, traditional proofs of correctness or optimality are insufficient; instead, it is necessary to rely on empirical validation. To that end, this paper considers two settings based on real deployed security systems, and compares 6 different approaches (three new with three previous approaches), in 4 different observability conditions, involving 98 human subjects playing 1360 games in total. The final conclusion was that a model which incorporates both the ideas of robustness and anchoring achieves statistically significant better rewards and also maintains equivalent or faster solution speeds compared to existing approaches." ] }
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Evaluating Conversational Agents. Goal-driven (non-visual) conversational models have typically been evaluated on task-completion rate or time-to-task-completion @cite_5 , so shorter conversations are better. At the other end of the spectrum, free-form conversation models are often evaluated by metrics that rely on n-gram overlaps, such as BLEU, METEOR, and ROUGE, but these have been shown to correlate poorly with human judgment @cite_13 (a brief illustration follows this record). Human evaluation of conversations typically has humans rate the quality of machine utterances given the context, without actually taking part in the conversation, as in @cite_4 and @cite_21 . To the best of our knowledge, we are the first to evaluate conversational models via team performance, where humans continuously interact with agents to succeed at a downstream task.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_13", "@cite_4" ], "mid": [ "2795571593", "1975798038", "2754573465", "1591706642" ], "abstract": [ "Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.", "Conversational agents provide powerful opportunities to interact and engage with the users. The challenge is how to create naturalistic behaviors that replicate the complex gestures observed during human interactions. Previous studies have used rule-based frameworks or data-driven models to generate appropriate gestures, which are properly synchronized with the underlying discourse functions. Among these methods, speech-driven approaches are especially appealing given the rich information conveyed on speech. It captures emotional cues and prosodic patterns that are important to synthesize behaviors (i.e., modeling the variability and complexity of the timings of the behaviors). The main limitation of these models is that they fail to capture the underlying semantic and discourse functions of the message (e.g., nodding). This study proposes a speech-driven framework that explicitly model discourse functions, bridging the gap between speech-driven and rule-based models. The approach is based on dynamic Bayesian Network (DBN), where an additional node is introduced to constrain the models by specific discourse functions. We implement the approach by synthesizing head and eyebrow motion. We conduct perceptual evaluations to compare the animations generated using the constrained and unconstrained models.", "Building dialog agents that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human responses in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialog. Our model represents the first attempt to integrating a large commonsense knowledge base into end-to-end conversational models. In the retrieval-based scenario, we propose the Tri-LSTM model to jointly take into account message and commonsense for selecting an appropriate response. Our experiments suggest that the knowledge-augmented models are superior to their knowledge-free counterparts in automatic evaluation.", "Conversational modeling is an important task in natural language understanding and machine intelligence. 
Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model." ] }
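A small demonstration of why n-gram overlap metrics can disagree with human judgment of dialog: two adequate answers to the same question receive very different BLEU scores. This uses NLTK's sentence-level BLEU; the token lists are made-up examples.

```python
# Two semantically adequate replies, very different BLEU against one reference.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["yes", ",", "there", "is", "a", "dog", "in", "the", "image"]]
cand_a = ["yes", ",", "there", "is", "a", "dog", "in", "the", "image"]
cand_b = ["yep", ",", "i", "can", "see", "one", "dog"]  # same meaning, little overlap

smooth = SmoothingFunction().method1
print(sentence_bleu(reference, cand_a, smoothing_function=smooth))  # 1.0
print(sentence_bleu(reference, cand_b, smoothing_function=smooth))  # close to 0
```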
1708.05122
2747206248
As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams.
Turing Test. Finally, our game is in line with the ideas of @cite_19 , which re-imagine the traditional Turing Test for state-of-the-art AI systems. We take the pragmatic view that an effective AI teammate need not look or act human, or be mistaken for one, provided its behavior does not feel jarring or baffle its teammates, leaving them wondering not about what it is thinking but whether it is thinking at all.
{ "cite_N": [ "@cite_19" ], "mid": [ "300525892", "2129824148", "2604175534", "2786377825" ], "abstract": [ "As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope at achieving the old AI dream of building machines that could pass a turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on this open tasks? In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented dataset of question-answering task based on real-world indoor images that establishes a visual turing challenge. Finally, we argue despite the success of unique ground-truth annotation, we likely have to step away from carefully curated dataset and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area.", "We exploit the gap in ability between human and machine vision systems to craft a family of automatic challenges that tell human and machine users apart via graphical interfaces including Internet browsers. Turing proposed (1950) a method whereby human judges might validate \"artificial intelligence\" by failing to distinguish between human and machine interlocutors. Stimulated by the \"chat room problem\", and influenced by the CAPTCHA project of (2000), we propose a variant of the Turing test using pessimal print: that is, low-quality images of machine-printed text synthesized pseudo-randomly over certain ranges of words, typefaces, and image degradations. We show experimentally that judicious choice of these ranges can ensure that the images are legible to human readers but illegible to several of the best present-day optical character recognition (OCR) machines. Our approach is motivated by a decade of research on performance evaluation of OCR machines and on quantitative stochastic models of document image quality. The slow pace of evolution of OCR and other species of machine vision over many decades suggests that pessimal print will defy automated attack for many years. Applications include 'bot' barriers and database rationing.", "Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. 
We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.", "The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. Similarly, performance on multi-disciplinary tasks such as Visual Question Answering (VQA) is considered a marker for gauging progress in Computer Vision. In our work, we bring games and VQA together. Specifically, we introduce the first computational model aimed at Pictionary, the popular word-guessing social game. We first introduce Sketch-QA, an elementary version of Visual Question Answering task. Styled after Pictionary, Sketch-QA uses incrementally accumulated sketch stroke sequences as visual data. Notably, Sketch-QA involves asking a fixed question (\"What object is being drawn?\") and gathering open-ended guess-words from human guessers. We analyze the resulting dataset and present many interesting findings therein. To mimic Pictionary-style guessing, we subsequently propose a deep neural model which generates guess-words in response to temporally evolving human-drawn sketches. Our model even makes human-like mistakes while guessing, thus amplifying the human mimicry factor. We evaluate our model on the large-scale guess-word dataset generated via Sketch-QA task and compare with various baselines. We also conduct a Visual Turing Test to obtain human impressions of the guess-words generated by humans and our model. Experimental results demonstrate the promise of our approach for Pictionary and similarly themed games." ] }
1708.05133
2749330229
A growing demand for natural-scene text detection has been witnessed by the computer vision community, since text information plays a significant role in scene understanding and image indexing. Deep neural networks are widely used for their strong capabilities in pixel-wise classification and word localization, as in other common vision problems. In this paper, we present a novel two-task network that integrates bottom and top cues. The first task predicts a pixel-by-pixel labeling, based on which word proposals are generated via canonical connected-component analysis. The second task outputs a bundle of character candidates that are later used to verify the word proposals. The two sub-networks share base convolutional features and, moreover, we present a new loss to strengthen the interaction between them. We evaluate the proposed network on public benchmark datasets and show it can detect arbitrarily oriented scene text with a finer output boundary. In the ICDAR 2013 text localization task, we achieve state-of-the-art performance with an F-score of 0.919 and a much better recall of 0.915.
In this paper, we focus on the use of convolutional neural networks (CNNs) in scene-text detection. This line of work dates back to 2012, when Wang et al. @cite_7 presented a sliding-window approach to detect individual characters, with a convolutional network used as a 62-category classifier. With the emergence of dedicated networks for common object detection, applying those models to the text problem seems straightforward. In DeepText, Zhong et al. @cite_16 follow Faster R-CNN @cite_1 to detect words in images; the Region Proposal Network is redesigned with the introduction of multiple sets of convolution and pooling layers. The work of @cite_15 follows another recent network, SSD @cite_18 , with implicit proposals; the authors also improve the model's adaptation to text by adjusting the network parameters. The major challenge for word detection networks is the great variation of words in aspect ratio and orientation, both of which can significantly reduce the efficiency of word proposals. In the work of Shi et al. @cite_5 , a Spatial Transformer Network @cite_22 is introduced; by projecting selected landmark points, the problems of rotation and perspective distortion can be partly solved (a sliding-window sketch follows this record).
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_1", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "2472159136", "2604243686", "2289772031", "2774989306" ], "abstract": [ "We propose a system that finds text in natural scenes using a variety of cues. Our novel data-driven method incorporates coarse-to-fine detection of character pixels using convolutional features (Text-Conv), followed by extracting connected components (CCs) from characters using edge and color features, and finally performing a graph-based segmentation of CCs into words (Word-Graph). For Text-Conv, the initial detection is based on convolutional feature maps similar to those used in Convolutional Neural Networks (CNNs), but learned using Convolutional k-means. Convolution masks defined by local and neighboring patch features are used to improve detection accuracy. The Word-Graph algorithm uses contextual information to both improve word segmentation and prune false character word detections. Different definitions for foreground (text) regions are used to train the detection stages, some based on bounding box intersection, and others on bounding box and pixel intersection. Our system obtains pixel, character, and word detection f-measures of 93.14 , 90.26 , and 86.77 respectively for the ICDAR 2015 Robust Reading Focused Scene Text dataset, out-performing state-of-the-art systems. This approach may work for other detection targets with homogenous color in natural scenes.", "Detecting incidental scene text is a challenging task because of multi-orientation, perspective distortion, and variation of text size, color and scale. Retrospective research has only focused on using rectangular bounding box or horizontal sliding window to localize text, which may result in redundant background noise, unnecessary overlap or even information loss. To address these issues, we propose a new Convolutional Neural Networks (CNNs) based method, named Deep Matching Prior Network (DMPNet), to detect text with tighter quadrangle. First, we use quadrilateral sliding windows in several specific intermediate convolutional layers to roughly recall the text with higher overlapping area and then a shared Monte-Carlo method is proposed for fast and accurate computing of the polygonal areas. After that, we designed a sequential protocol for relative regression which can exactly predict text with compact quadrangle. Moreover, a auxiliary smooth Ln loss is also proposed for further regressing the position of text, which has better overall performance than L2 loss and smooth L1 loss in terms of robustness and stability. The effectiveness of our approach is evaluated on a public word-level, multi-oriented scene text database, ICDAR 2015 Robust Reading Competition Challenge 4 Incidental scene text localization. The performance of our method is evaluated by using F-measure and found to be 70.64 , outperforming the existing state-of-the-art method with F-measure 63.76 .", "Convolutional neural networks (CNNs) have recently achieved remarkable successes in various image classification and understanding tasks. The deep features obtained at the top fully connected layer of the CNN (FC-features) exhibit rich global semantic information and are extremely effective in image classification. On the other hand, the convolutional features in the middle layers of the CNN also contain meaningful local information, but are not fully explored for image representation. 
In this paper, we propose a novel locally supervised deep hybrid model (LS-DHM) that effectively enhances and explores the convolutional features for scene recognition. First, we notice that the convolutional features capture local objects and fine structures of scene images, which yield important cues for discriminating ambiguous scenes, whereas these features are significantly eliminated in the highly compressed FC representation. Second, we propose a new local convolutional supervision layer to enhance the local structure of the image by directly propagating the label information to the convolutional layers. Third, we propose an efficient Fisher convolutional vector (FCV) that successfully rescues the orderless mid-level semantic information (e.g., objects and textures) of scene image. The FCV encodes the large-sized convolutional maps into a fixed-length mid-level representation, and is demonstrated to be strongly complementary to the high-level FC-features. Finally, both the FCV and FC-features are collaboratively employed in the LS-DHM representation, which achieves outstanding performance in our experiments. It obtains 83.75 and 67.56 accuracies, respectively, on the heavily benchmarked MIT Indoor67 and SUN397 data sets, advancing the state-of-the-art substantially.", "Convolutional neural networks (CNNs) have demonstrated their ability object detection of very high resolution remote sensing images. However, CNNs have obvious limitations for modeling geometric variations in remote sensing targets. In this paper, we introduced a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, feature mapping of CNN can be applied to unfixed locations, enhancing CNNs’ visual appearance understanding. In our work, a deformable region-based fully convolutional networks (R-FCN) was constructed by substituting the regular convolution layer with a deformable convolution layer. To efficiently use this deformable convolutional neural network (ConvNet), a training mechanism is developed in our work. We first set the pre-trained R-FCN natural image model as the default network parameters in deformable R-FCN. Then, this deformable ConvNet was fine-tuned on very high resolution (VHR) remote sensing images. To remedy the increase in lines like false region proposals, we developed aspect ratio constrained non maximum suppression (arcNMS). The precision of deformable ConvNet for detecting objects was then improved. An end-to-end approach was then developed by combining deformable R-FCN, a smart fine-tuning strategy and aspect ratio constrained NMS. The developed method was better than a state-of-the-art benchmark in object detection without data augmentation." ] }
1708.05133
2749330229
A growing demand for natural-scene text detection has been witnessed by the computer vision community, since text information plays a significant role in scene understanding and image indexing. Deep neural networks are being used for their strong capabilities in pixel-wise classification and word localization, as in common vision problems. In this paper, we present a novel two-task network integrating bottom and top cues. The first task predicts a pixel-by-pixel labeling, based on which word proposals are generated with a canonical connected component analysis. The second task outputs a bundle of character candidates used later to verify the word proposals. The two sub-networks share base convolutional features and, moreover, we present a new loss to strengthen the interaction between them. We evaluate the proposed network on public benchmark datasets and show it can detect arbitrary-orientation scene text with a finer output boundary. In the ICDAR 2013 text localization task, we achieve state-of-the-art performance with an F-score of 0.919 and a much better recall of 0.915.
Another group of methods is based on image segmentation networks. Zhang et al. @cite_23 use the Fully Convolutional Network (FCN) @cite_19 to obtain salient maps whose foreground regions serve as candidates for text lines. The trouble is that the candidates may stick to each other, and their boundaries are often blurry. To make the final predictions of quadrilateral shape, the authors have to set up hard constraints regarding intensity and geometry.
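A minimal sketch of this segmentation-then-grouping pipeline, assuming the FCN has already produced a per-pixel text-saliency map in [0, 1]; the threshold and the area/aspect-ratio constraints are illustrative stand-ins for the hard intensity and geometry rules mentioned above, not the exact rules of @cite_23 .

import numpy as np
from scipy import ndimage

def text_line_candidates(saliency, thresh=0.5, min_area=100, max_aspect=20.0):
    """Binarize an FCN text-saliency map (H, W, values in [0, 1]) and keep
    connected components that pass simple intensity/geometry constraints,
    returning axis-aligned bounding boxes (x1, y1, x2, y2)."""
    labels, n = ndimage.label(saliency > thresh)
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        if len(ys) < min_area:                    # too small for a text line
            continue
        if max(h, w) / min(h, w) > max_aspect:    # implausibly elongated
            continue
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return boxes

The sketch also illustrates the weakness noted in the text: if two lines touch in the saliency map, ndimage.label fuses them into one component, and no threshold on area or aspect ratio can cleanly split them again.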
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2777686015", "2963342403", "2317851288", "2951120635" ], "abstract": [ "Fully convolutional network (FCN) has been successfully applied in semantic segmentation of scenes represented with RGB images. Images augmented with depth channel provide more understanding of the geometric information of the scene in the image. The question is how to best exploit this additional information to improve the segmentation performance.,,In this paper, we present a neural network with multiple branches for segmenting RGB-D images. Our approach is to use the available depth to split the image into layers with common visual characteristic of objects scenes, or common “scene-resolution”. We introduce context-aware receptive field (CaRF) which provides a better control on the relevant contextual information of the learned features. Equipped with CaRF, each branch of the network semantically segments relevant similar scene-resolution, leading to a more focused domain which is easier to learn. Furthermore, our network is cascaded with features from one branch augmenting the features of adjacent branch. We show that such cascading of features enriches the contextual information of each branch and enhances the overall performance. The accuracy that our network achieves outperforms the stateof-the-art methods on two public datasets.", "Most current semantic segmentation methods rely on fully convolutional networks (FCNs). However, their use of large receptive fields and many pooling layers cause low spatial resolution inside the deep layers. This leads to predictions with poor localization around the boundaries. Prior work has attempted to address this issue by post-processing predictions with CRFs or MRFs. But such models often fail to capture semantic relationships between objects, which causes spatially disjoint predictions. To overcome these problems, recent methods integrated CRFs or MRFs into an FCN framework. The downside of these new models is that they have much higher complexity than traditional FCNs, which renders training and testing more challenging. In this work we introduce a simple, yet effective Convolutional Random Walk Network (RWN) that addresses the issues of poor boundary localization and spatially fragmented predictions with very little increase in model complexity. Our proposed RWN jointly optimizes the objectives of pixelwise affinity and semantic segmentation. It combines these two objectives via a novel random walk layer that enforces consistent spatial grouping in the deep layers of the network. Our RWN is implemented using standard convolution and matrix multiplication. This allows an easy integration into existing FCN frameworks and it enables end-to-end training of the whole network via standard back-propagation. Our implementation of RWN requires just 131 additional parameters compared to the traditional FCNs, and yet it consistently produces an improvement over the FCNs on semantic segmentation and scene labeling.", "Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. 
On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.", "Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
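As a rough reading of the anchor densification strategy described in this abstract (our interpretation, not the exact FaceBoxes configuration): an anchor type whose scale is large relative to its feature-map stride is sparse on the image, so its single center per cell is replaced by an n x n grid of evenly offset centers to match the density of the other anchor types.

def densify_anchor_centers(cx, cy, stride, n):
    """Replace the single anchor center of a feature-map cell with an
    n x n grid of evenly offset centers, giving n**2 anchors of that
    type per cell. n=1 returns the original center unchanged."""
    offsets = [(i + 0.5) / n - 0.5 for i in range(n)]   # fractions of a cell
    return [(cx + dx * stride, cy + dy * stride)
            for dx in offsets for dy in offsets]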
Previous face detection systems are mostly based on hand-crafted features. Since the seminal Viola-Jones face detector @cite_23 , which combines Haar features, AdaBoost learning and cascade inference for face detection, many subsequent works have been proposed for real-time face detection, such as new local features @cite_33 @cite_40 , new boosting algorithms @cite_16 @cite_28 and new cascade structures @cite_27 @cite_46 .
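The cascade inference mentioned here reduces to a few lines of code. The stage interface below is a hypothetical simplification: each stage is a boosted scorer ordered from cheap to expensive, and a window is rejected as soon as any stage's score falls below its threshold.

def cascade_classify(window, stages):
    """Viola-Jones-style cascade. 'stages' is a list of (scorer, threshold)
    pairs; since almost all windows are non-faces, most are rejected by the
    first cheap stages, which is what makes the detector real-time."""
    for scorer, threshold in stages:
        if scorer(window) < threshold:
            return False   # early rejection: never pay for later stages
    return True            # survived every stage -> face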
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_40", "@cite_27", "@cite_23", "@cite_46", "@cite_16" ], "mid": [ "1994215930", "2041497292", "2125277152", "2495387757" ], "abstract": [ "We present a novel boosting cascade based face detection framework using SURF features. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by two key contributions. First, the proposed framework deals with only several hundreds of multidimensional local SURF patches instead of hundreds of thousands of single dimensional haar features in the VJ framework. Second, it takes AUC as a single criterion for the convergence test of each cascade stage rather than the two conflicting criteria (false-positive-rate and detection-rate) in the VJ framework. These modifications yield much faster training convergence and much fewer stages in the final cascade. We made experiments on training face detector from large scale database. Results shows that the proposed method is able to train face detectors within one hour through scanning billions of negative samples on current personal computers. Furthermore, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed.", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS", "Face detection has been one of the most studied topics in the computer vision literature. In this technical report, we survey the recent advances in face detection for the past decade. The seminal Viola-Jones face detector is first reviewed. We then survey the various techniques according to how they extract features and what learning algorithms are adopted. It is our hope that by reviewing the many existing algorithms, we will see even better algorithms developed to solve this fundamental computer vision problem. 1", "Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. 
We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
Besides the cascade framework, methods based on structural models have progressively achieved better performance and become more and more efficient. Several studies @cite_10 @cite_26 @cite_47 introduce the deformable part model (DPM) into face detection. These works use supervised parts, finer pose partitions, better training or more efficient inference to achieve remarkable detection performance.
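As a rough sketch of DPM scoring (the generic formulation, not the exact variants of @cite_10 @cite_26 @cite_47 ): the score of a root location is the root filter response plus, for each part, the best part response over candidate displacements minus a quadratic deformation cost.

def dpm_score(root_score, parts):
    """Toy DPM score. 'parts' is a list of (responses, (wx, wy)) where
    responses maps a displacement (dx, dy) from the part's anchor to the
    part-filter response at that location, and (wx, wy) weight the
    quadratic deformation penalty."""
    score = root_score
    for responses, (wx, wy) in parts:
        score += max(r - (wx * dx * dx + wy * dy * dy)
                     for (dx, dy), r in responses.items())
    return score

In the full model the inner max over displacements is computed for all root locations at once with a generalized distance transform, which is what keeps inference tractable.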
{ "cite_N": [ "@cite_47", "@cite_26", "@cite_10" ], "mid": [ "2295689258", "2056025798", "2963479408", "1818102884" ], "abstract": [ "Face detection using part based model becomes a new trend in Computer Vision. Following this trend, we propose an extension of Deformable Part Models to detect faces which increases not only precision but also speed compared with current versions of DPM. First, to reduce computation cost, we create a lookup table instead of repeatedly calculating scores in each processing step by approximating inner product between HOG features and weight vectors. Furthermore, early cascading method is also introduced to boost up speed. Second, we propose new integrated model for face representation and its score of detection. Besides, the intuitive non-maximum suppression is also proposed to get more accuracy in detecting result. We evaluate the merit of our method on the public dataset Face Detection Data Set and Benchmark (FDDB). Experimental results shows that our proposed method can significantly boost 5.5 times in speed of DPM method for face detection while achieve up to 94.64 the accuracy of the state-of-the-art technique. This leads to a promising way to combine DPM with other techniques to solve difficulties of face detection in the wild.", "This paper solves the speed bottleneck of deformable part model (DPM), while maintaining the accuracy in detection on challenging datasets. Three prohibitive steps in cascade version of DPM are accelerated, including 2D correlation between root filter and feature map, cascade part pruning and HOG feature extraction. For 2D correlation, the root filter is constrained to be low rank, so that 2D correlation can be calculated by more efficient linear combination of 1D correlations. A proximal gradient algorithm is adopted to progressively learn the low rank filter in a discriminative manner. For cascade part pruning, neighborhood aware cascade is proposed to capture the dependence in neighborhood regions for aggressive pruning. Instead of explicit computation of part scores, hypotheses can be pruned by scores of neighborhoods under the first order approximation. For HOG feature extraction, look-up tables are constructed to replace expensive calculations of orientation partition and magnitude with simpler matrix index operations. Extensive experiments show that (a) the proposed method is 4 times faster than the current fastest DPM method with similar accuracy on Pascal VOC, (b) the proposed method achieves state-of-the-art accuracy on pedestrian and face detection task with frame-rate speed.", "Cascade regression framework has been shown to be effective for facial landmark detection. It starts from an initial face shape and gradually predicts the face shape update from the local appearance features to generate the facial landmark locations in the next iteration until convergence. In this paper, we improve upon the cascade regression framework and propose the Constrained Joint Cascade Regression Framework (CJCRF) for simultaneous facial action unit recognition and facial landmark detection, which are two related face analysis tasks, but are seldomly exploited together. In particular, we first learn the relationships among facial action units and face shapes as a constraint. Then, in the proposed constrained joint cascade regression framework, with the help from the constraint, we iteratively update the facial landmark locations and the action unit activation probabilities until convergence. 
Experimental results demonstrate that the intertwined relationships of facial action units and face shapes boost the performances of both facial action unit recognition and facial landmark detection. The experimental results also demonstrate the effectiveness of the proposed method comparing to the state-of-the-art works.", "We present a face detection algorithm based on Deformable Part Models and deep pyramidal features. The proposed method called DP2MFD is able to detect faces of various sizes and poses in unconstrained conditions. It reduces the gap in training and testing of DPM on deep features by adding a normalization layer to the deep convolutional neural network (CNN). Extensive experiments on four publicly available unconstrained face detection datasets show that our method is able to capture the meaningful structure of faces and performs significantly better than many competitive face detection algorithms." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
The first use of a CNN for face detection can be traced back to 1994, when @cite_19 used a trained CNN in a sliding-window manner to detect faces. @cite_7 @cite_48 introduce a retinally connected neural network for upright frontal face detection, and a "router" network designed to estimate the orientation for rotation-invariant face detection. @cite_3 develop a neural network to detect semi-frontal faces. @cite_31 train a CNN for simultaneous face detection and pose estimation. These earlier methods achieve relatively good performance only on easy datasets.
{ "cite_N": [ "@cite_7", "@cite_48", "@cite_3", "@cite_19", "@cite_31" ], "mid": [ "2952198537", "1934410531", "2963770578", "2732082028" ], "abstract": [ "We present a multi-purpose algorithm for simultaneous face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and face recognition using a single deep convolutional neural network (CNN). The proposed method employs a multi-task learning framework that regularizes the shared parameters of CNN and builds a synergy among different domains and tasks. Extensive experiments show that the network has a better understanding of face and achieves state-of-the-art result for most of these tasks.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detection (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. Actually, more than 99 of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD.", "Convolutional neural network (CNN) based face detectors are inefficient in handling faces of diverse scales. They rely on either fitting a large single model to faces across a large scale range or multi-scale testing. Both are computationally expensive. We propose Scale-aware Face Detector (SAFD) to handle scale explicitly using CNN, and achieve better performance with less computation cost. Prior to detection, an efficient CNN predicts the scale distribution histogram of the faces. Then the scale histogram guides the zoom-in and zoom-out of the image. Since the faces will be approximately in uniform scale after zoom, they can be detected accurately even with much smaller CNN. 
Actually, more than 99 of the faces in AFW can be covered with less than two zooms per image. Extensive experiments on FDDB, MALF and AFW show advantages of SAFD." ] }
1708.05234
2964325361
Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.
Recent years have witnessed the advance of CNN-based face detectors. CCF @cite_6 uses boosting on top of CNN features for face detection. @cite_38 fine-tune a CNN model trained on the 1k-class ImageNet classification task for the face and non-face classification task. Faceness @cite_14 trains a series of CNNs for facial attribute recognition to detect partially occluded faces. CascadeCNN @cite_37 develops a cascade architecture built on CNNs with powerful discriminative capability and high performance, and @cite_0 propose to jointly train CascadeCNN to realize end-to-end optimization. Similar to @cite_45 , MTCNN @cite_20 proposes a multi-task cascaded CNN framework for joint face detection and alignment. UnitBox @cite_32 introduces a new intersection-over-union loss function. CMS-RCNN @cite_24 uses Faster R-CNN for face detection with body contextual information. Convnet @cite_25 integrates a CNN with a 3D face model in an end-to-end multi-task learning framework. STN @cite_9 proposes a new supervised transformer network and an ROI convolution for face detection.
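Of the components listed above, the intersection-over-union loss of UnitBox @cite_32 is easy to make concrete. The sketch below follows the common formulation L = -ln(IoU) for axis-aligned boxes; it is our paraphrase of the idea, not the authors' code.

import torch

def iou_loss(pred, target, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2). Returns -ln(IoU)
    averaged over the batch. Unlike an L2 loss, which treats the four
    coordinates as independent regression targets, this couples them
    through the overlap area."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    return -(iou + eps).log().mean()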
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_9", "@cite_32", "@cite_6", "@cite_0", "@cite_24", "@cite_45", "@cite_25", "@cite_20" ], "mid": [ "2495387757", "1934410531", "2772572186", "2520774990" ], "abstract": [ "Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Recent years have witnessed promising results of face detection using deep learning, especially for the family of region-based convolutional neural networks (R-CNN) methods and their variants. Despite making remarkable progresses, face detection in the wild remains an open research challenge especially when detecting faces at vastly different scales and characteristics. In this paper, we propose a novel framework of \"Feature Agglomeration Networks\" (FAN) to build a new single stage face detector, which not only achieves state-of-the-art performance but also runs efficiently. 
As inspired by the recent success of Feature Pyramid Networks (FPN) lin2016fpn for generic object detection, the core idea of our framework is to exploit inherent multi-scale features of a single convolutional neural network to detect faces of varied scales and characteristics by aggregating higher-level semantic feature maps of different scales as contextual cues to augment lower-level feature maps via a hierarchical agglomeration manner at marginal extra computation cost. Unlike the existing FPN approach, we construct our FAN architecture using a new Agglomerative Connection module and further propose a Hierarchical Loss to effectively train the FAN model. We evaluate the proposed FAN detector on several public face detection benchmarks and achieved new state-of-the-art results with real-time detection speed on GPU.", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
The research on image captioning has proceeded along three different dimensions: template-based methods @cite_28 @cite_26 @cite_16 , search-based approaches @cite_24 @cite_19 @cite_3 , and language-based models @cite_10 @cite_6 @cite_14 @cite_9 @cite_0 @cite_13 @cite_12 .
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_14", "@cite_28", "@cite_9", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_0", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2885822952", "68733909", "2251804196", "2963299217" ], "abstract": [ "Image captioning, which aims to automatically generate a sentence description for an image, has attracted much research attention in cognitive computing. The task is rather challenging, since it requires cognitively combining the techniques from both computer vision and natural language processing domains. Existing CNN-RNN framework-based methods suffer from two main problems: in the training phase, all the words of captions are treated equally without considering the importance of different words; in the caption generation phase, the semantic objects or scenes might be misrecognized. In our paper, we propose a method based on the encoder-decoder framework, named Reference based Long Short Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for the given image by introducing reference information. Specifically, we assign different weights to the words according to the correlation between words and images during the training phase. We additionally maximize the consensus score between the captions generated by the captioning model and the reference information from the neighboring images of the target image, which can reduce the misrecognition problem. We have conducted extensive experiments and comparisons on the benchmark datasets MS COCO and Flickr30k. The results show that the proposed approach can outperform the state-of-the-art approaches on all metrics, especially achieving a 10.37 improvement in terms of CIDEr on MS COCO. By analyzing the quality of the generated captions, we come to a conclusion that through the introduction of reference information, our model can learn the key information of images and generate more trivial and relevant words for images.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. 
Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14 17 cases, we improve over using state-of-the-art bilingual embeddings.", "Mainstream captioning models often follow a sequential structure to generate cap- tions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Template-based methods predefine templates for sentence generation and split each sentence into several parts (e.g., subject, verb, and object). With such sentence fragments, many works align each part with visual content (e.g., CRF in @cite_28 and HMM in @cite_16 ) and then generate the sentence for the image. Obviously, most of them depend heavily on the sentence templates and tend to generate rigidly structured sentences. Search-based approaches @cite_24 @cite_19 @cite_3 "generate" a sentence for an image by selecting the most semantically similar sentences from a sentence pool. This direction can indeed achieve human-level descriptions, as all the output sentences come from existing human-generated ones. The need to collect human-generated sentences, however, makes the sentence pool hard to scale up.
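A minimal sketch of the search-based idea, assuming the query image and the pool sentences have already been embedded into a shared feature space (the embedding vectors are taken as given; how they are produced differs across @cite_24 @cite_19 @cite_3 ):

import numpy as np

def retrieve_caption(image_vec, sentence_vecs, sentences):
    """Return the pooled human-written sentence whose embedding has the
    highest cosine similarity to the query image embedding.
    image_vec: (D,), sentence_vecs: (M, D), sentences: list of M strings."""
    img = image_vec / (np.linalg.norm(image_vec) + 1e-8)
    sents = sentence_vecs / (np.linalg.norm(sentence_vecs, axis=1,
                                            keepdims=True) + 1e-8)
    return sentences[int(np.argmax(sents @ img))]

The sketch makes the scalability problem visible: the method can never say anything outside the fixed list 'sentences', so coverage grows only by collecting more human-written captions.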
{ "cite_N": [ "@cite_28", "@cite_3", "@cite_24", "@cite_19", "@cite_16" ], "mid": [ "2950012948", "68733909", "1858383477", "2149172860" ], "abstract": [ "In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content, and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words to different semantic fragments and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results on benchmark databases of bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and Microsoft COCO databases achieve the state-of-the-art performances.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. 
Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.", "We present a holistic data-driven approach to image description generation, exploiting the vast amount of (noisy) parallel image data and associated natural language descriptions available on the web. More specifically, given a query image, we retrieve existing human-composed phrases used to describe visually similar images, then selectively combine those phrases to generate a novel description for the query image. We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization and discourse structure. Evaluation by human annotators indicates that our final system generates more semantically correct and linguistically appealing descriptions than two nontrivial baselines." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Different from template-based and search-based models, language-based models aim to learn the probability distribution over the common space of visual content and textual sentences, so as to generate novel sentences with more flexible syntactical structures. In this direction, recent works explore such probability distributions mainly with neural networks and have achieved promising results for the image captioning task. Kiros et al. @cite_6 employ neural networks to generate a sentence for an image by proposing a multimodal log-bilinear neural language model. In @cite_14 , Vinyals et al. propose an end-to-end neural network architecture that utilizes an LSTM to generate a sentence for an image; it is further combined with an attention mechanism in @cite_0 to automatically focus on salient objects when generating the corresponding words. More recently, in @cite_9 , high-level concepts/attributes are shown to obtain clear improvements on the image captioning task when injected into an existing state-of-the-art RNN-based model. Such high-level attributes are further utilized as semantic attention in @cite_12 and as representations complementary to visual features in @cite_27 @cite_13 to enhance image/video captioning.
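In code, the word-by-word decoding shared by these language-based models reduces to the loop below: a minimal greedy sketch in PyTorch with hypothetical BOS/EOS token indices, conditioning the LSTM state on the image feature, not any specific paper's architecture.

import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    def __init__(self, vocab_size, img_dim, embed_dim=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(img_dim, hidden)   # image conditions the state
        self.init_c = nn.Linear(img_dim, hidden)
        self.lstm = nn.LSTMCell(embed_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    @torch.no_grad()
    def greedy_decode(self, img_feat, bos=1, eos=2, max_len=20):
        """img_feat: (1, img_dim). Emits word ids until EOS or max_len."""
        h, c = self.init_h(img_feat), self.init_c(img_feat)
        word = torch.tensor([bos])
        caption = []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(word), (h, c))
            word = self.out(h).argmax(dim=1)        # greedy next-word choice
            if word.item() == eos:
                break
            caption.append(word.item())
        return caption

Attention-based variants such as @cite_0 differ mainly in feeding a per-step weighted sum of spatial image features into the LSTM instead of a single global vector.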
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_6", "@cite_0", "@cite_27", "@cite_13", "@cite_12" ], "mid": [ "2159243025", "2481240925", "2556388456", "2951159095" ], "abstract": [ "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks (RNN) over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions outperform retrieval baselines on both full images and on a new dataset of region-level annotations. Finally, we conduct large-scale analysis of our RNN language model on the Visual Genome dataset of 4.1 million captions and highlight the differences between image and region-level caption statistics.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)&#x2014;a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. 
Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD." ] }
1708.05271
2743573407
Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.
Novel object captioning is a new problem that has recently received increasing attention; it leverages additional paired image-sentence data @cite_4 or unpaired image/text data @cite_8 @cite_15 to describe novel objects within existing RNN-based image captioning frameworks. @cite_4 is one of the early works that enlarges the original limited word dictionary to describe novel objects using only a few paired image-sentence examples. In particular, a transposed weight sharing scheme is proposed to avoid extensive retraining. In contrast, exploiting largely available unpaired image/text data (e.g., ImageNet and Wikipedia), Hendricks et al. @cite_8 explicitly transfer the knowledge of semantically related objects to compose descriptions of novel objects in the proposed Deep Compositional Captioner (DCC). The DCC model is further extended to an end-to-end system in @cite_15 by simultaneously optimizing the visual recognition network, the LSTM-based language model, and the image captioning network on the different data sources.
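The copying idea can be rendered schematically: the next-word distribution mixes the decoder's generation softmax with detector confidences placed on novel-object words, weighted by a gate. The numpy sketch below is a minimal illustration under assumed shapes, not LSTM-C's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = 10
gen_logits = np.random.randn(vocab)       # decoder's generation scores
novel_ids = np.array([7, 9])              # vocab slots of detected novel objects
det_scores = softmax(np.random.randn(2))  # object-detector confidences
p_copy = 0.3                              # gate (would be predicted from the state)

p = (1 - p_copy) * softmax(gen_logits)
p[novel_ids] += p_copy * det_scores       # place copying mass on novel words
print(p.sum())                            # still a valid distribution (1.0)
```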
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_8" ], "mid": [ "2952155606", "2173180041", "2743573407", "2463508871" ], "abstract": [ "While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.", "While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired imagesentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-sentence data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.", "Image captioning often requires a large set of training image-sentence pairs. In practice, however, acquiring sufficient training pairs is always expensive, making the recent captioning models limited in their ability to describe objects outside of training corpora (i.e., novel objects). In this paper, we present Long Short-Term Memory with Copying Mechanism (LSTM-C) --- a new architecture that incorporates copying into the Convolutional Neural Networks (CNN) plus Recurrent Neural Networks (RNN) image captioning framework, for describing novel objects in captions. Specifically, freely available object recognition datasets are leveraged to develop classifiers for novel objects. 
Our LSTM-C then nicely integrates the standard word-by-word sentence generation by a decoder RNN with copying mechanism which may instead select words from novel objects at proper places in the output sentence. Extensive experiments are conducted on both MSCOCO image captioning and ImageNet datasets, demonstrating the ability of our proposed LSTM-C architecture to describe novel objects. Furthermore, superior results are reported when compared to state-of-the-art deep models.", "Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits semantic information to generate captions for hundreds of object categories in the ImageNet object recognition dataset that are not observed in MSCOCO image-caption training data, as well as many categories that are observed very rarely. Both automatic evaluations and human judgements show that our model considerably outperforms prior work in being able to describe many more categories of objects." ] }
1708.05340
2748080090
Commercial off the shelf (COTS) 3D scanners are capable of generating point clouds covering visible portions of a face with sub-millimeter accuracy at close range, but lack the coverage and specialized anatomic registration provided by more expensive 3D facial scanners. We demonstrate an effective pipeline for joint alignment of multiple unstructured 3D point clouds and registration to a parameterized 3D model which represents shape variation of the human head. Most algorithms separate the problems of pose estimation and mesh warping, however we propose a new iterative method where these steps are interwoven. Error decreases with each iteration, showing the proposed approach is effective in improving geometry and alignment. The approach described is used to align the NDOff-2007 dataset, which contains 7,358 individual scans at various poses of 396 subjects. The dataset has a number of full profile scans which are correctly aligned and contribute directly to the associated mesh geometry. The dataset in its raw form contains a significant number of mislabeled scans, which are identified and corrected based on alignment error using the proposed algorithm. The average point to surface distance between the aligned scans and the produced geometries is one half millimeter.
The proposed alignment method begins by using sparse localized landmarks, but as dense 3D information is available and real-time performance is not necessary, additional steps are taken to refine the initial alignment by registering each scan to the subject-specific mesh geometry. Mesh geometry is computed by finding a set of 3D offsets that express the local difference between the scans and the base mesh. These offsets are first used to warp a 3DMM using precomputed PCA components. Direct (unconstrained) mesh warping without the PCA model is performed as a final step, by a method similar to the one described by Amberg et al. @cite_6 . The major geometry variations are described by the 3DMM warping, while the direct approach is able to account for smaller, finer details not represented by the 3DMM. The significant warping that occurs using the 3DMM PCA components removes the need for a decreasing stiffness parameter when estimating the direct warping.
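The two-stage warping logic can be sketched on synthetic data: a PCA morphable model is first fit to the scan offsets, and the remaining fine-scale detail is then absorbed by a direct per-vertex warp. The basis, shapes and noise level below are assumptions made for illustration.

```python
import numpy as np

n_verts, n_comp = 100, 5
mu = np.random.randn(n_verts * 3)                           # mean head shape (flattened)
U = np.linalg.qr(np.random.randn(n_verts * 3, n_comp))[0]   # stand-in PCA basis
target = (mu + U @ np.array([2.0, -1.0, 0.5, 0.0, 0.3])
          + 0.01 * np.random.randn(n_verts * 3))            # scan-like target

# Stage 1: the PCA fit explains the major geometry variation.
coef, *_ = np.linalg.lstsq(U, target - mu, rcond=None)
pca_fit = mu + U @ coef

# Stage 2: a direct (unconstrained) warp captures what the model missed.
residual = target - pca_fit               # small, fine-scale detail
final = pca_fit + residual
print(np.abs(residual).max())             # roughly the sensor-noise magnitude
```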
{ "cite_N": [ "@cite_6" ], "mid": [ "2746892480", "2086550580", "2064499898", "2108891011" ], "abstract": [ "While the ready availability of 3D scan data has influenced research throughout computer vision, less attention has focused on 4D data, that is 3D scans of moving non-rigid objects, captured over time. To be useful for vision research, such 4D scans need to be registered, or aligned, to a common topology. Consequently, extending mesh registration methods to 4D is important. Unfortunately, no ground-truth datasets are available for quantitative evaluation and comparison of 4D registration methods. To address this we create a novel dataset of high-resolution 4D scans of human subjects in motion, captured at 60 fps. We propose a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology. The approach exploits consistency in texture over both short and long time intervals and deals with temporal offsets between shape and texture capture. We show how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid. We evaluate the accuracy of our registration and provide a dataset of 40,000 raw and aligned meshes. Dynamic FAUST extends the popular FAUST dataset to dynamic 4D data, and is available for research purposes at http: dfaust.is.tue.mpg.de.", "We present a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds. While previous techniques usually reconstruct surfaces as the zero level-set of a signed distance function, our method uses an unsigned distance function and hence does not require any information about the local surface orientation. Our algorithm estimates local surface confidence values within a dilated crust around the input samples. The surface which maximizes the global confidence is then extracted by computing the minimum cut of a weighted spatial graph structure. We present an algorithm, which efficiently converts this cut into a closed, manifold triangle mesh with a minimal number of vertices. The use of an unsigned distance function avoids the topological noise artifacts caused by misalignment of 3D scans, which are common to most volumetric reconstruction techniques. Due to a hierarchical approach our method efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without loosing fine details in densely sampled regions. We show several examples for different application settings such as model generation from raw laser-scanned data, image-based 3D reconstruction, and mesh repair.", "We introduce 4PCS, a fast and robust alignment scheme for 3D point sets that uses wide bases, which are known to be resilient to noise and outliers. The algorithm allows registering raw noisy data, possibly contaminated with outliers, without pre-filtering or denoising the data. Further, the method significantly reduces the number of trials required to establish a reliable registration between the underlying surfaces in the presence of noise, without any assumptions about starting alignment. Our method is based on a novel technique to extract all coplanar 4-points sets from a 3D point set that are approximately congruent, under rigid transformation, to a given set of coplanar 4-points. This extraction procedure runs in roughly O(n2 + k) time, where n is the number of candidate points and k is the number of reported 4-points sets. 
In practice, when noise level is low and there is sufficient overlap, using local descriptors the time complexity reduces to O(n + k). We also propose an extension to handle similarity and affine transforms. Our technique achieves an order of magnitude asymptotic acceleration compared to common randomized alignment techniques. We demonstrate the robustness of our algorithm on several sets of multiple range scans with varying degree of noise, outliers, and extent of overlap.", "We present a new geometry compression method for animations, which is based on the clustered principal component analysis (CPCA). Instead of analyzing the set of vertices for each frame, our method analyzes the set of paths for all vertices for a certain animation length. Thus, using a data-driven approach, it can identify mesh parts, that are \"coherent\" over time. This usually leads to a very efficient and robust segmentation of the mesh into meaningful clusters, e.g. the wings of a chicken. These parts are then compressed separately using standard principal component analysis (PCA). Each of this clusters can be compressed more efficiently with lesser PCA components compared to previous approaches. Results show, that the new method outperforms other compression schemes like pure PCA based compression or combinations with linear prediction coding, while maintaining a better reconstruction error. This is true, even if the components and weights are quantized before transmission. The reconstruction process is very simple and can be performed directly on the GPU." ] }
1708.05286
2749129571
Stance classification determines the attitude, or stance, in a (typically short) text. The task has powerful applications, such as the detection of fake news or the automatic extraction of attitudes toward entities or events in the media. This paper describes a surprisingly simple and efficient classification approach to open stance classification in Twitter, for rumour and veracity classification. The approach profits from a novel set of automatically identifiable problem-specific features, which significantly boost classifier accuracy and achieve above state-of-the-art results on recent benchmark datasets. This calls into question the value of using complex sophisticated models for stance classification without first doing informed feature extraction.
The first study to tackle automatic stance classification used a dataset containing 10K tweets; with a Bayesian classifier and three types of features categorised as ``content'', ``network'' and ``Twitter specific memes'', the authors achieved an accuracy of 93.5%. A later rule-based method was shown to outperform that approach, and subsequent work enriched the feature sets investigated by earlier studies with features derived from the Linguistic Inquiry and Word Count (LIWC) dictionaries @cite_7 . Gaussian Processes have also been investigated as a rumour stance classifier, using Brown Clusters for the first time to extract the features for each tweet; unlike the studies above, that work evaluated on separately released rumour data and reported an accuracy of 67.7%. Subsequent work has also tackled stance classification for new, unseen rumours. @cite_23 moved away from the classification of tweets in isolation, focusing instead on Twitter 'conversations' @cite_25 initiated by rumours, as part of the Pheme project @cite_13 . They looked at tree-structured conversations initiated by a rumour and followed by tweets responding to it by supporting, denying, querying or commenting on the rumour.
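To make the feature-based setup concrete, the toy sketch below extracts a few simple problem-specific cues from tweets and trains an off-the-shelf classifier. The feature set and the tiny labelled sample are illustrative assumptions, not the cited systems' actual features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(tweet):
    t = tweet.lower()
    return [
        "?" in t,                                            # querying tweets often ask
        any(w in t for w in ("fake", "false", "not true")),  # denial cues
        any(w in t for w in ("true", "confirmed")),          # support cues
        t.startswith("@"),                                   # reply marker
        len(t.split()),                                      # tweet length
    ]

tweets = ["Is this real?", "Confirmed by police", "That's fake news",
          "@user totally true", "no way, not true"]
labels = ["query", "support", "deny", "support", "deny"]     # toy annotations

X = np.array([features(t) for t in tweets], dtype=float)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(np.array([features("police confirmed it")], dtype=float)))
```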
{ "cite_N": [ "@cite_13", "@cite_23", "@cite_25", "@cite_7" ], "mid": [ "2347127863", "2437771934", "1974570422", "2792410075" ], "abstract": [ "We can often detect from a person’s utterances whether he or she is in favor of or against a given target entity—one’s stance toward the target. However, a person may express the same stance toward a target by using negative or positive language. Here for the first time we present a dataset of tweet–target pairs annotated for both stance and sentiment. The targets may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. Partitions of this dataset were used as training and test sets in a SemEval-2016 shared task competition. We propose a simple stance detection system that outperforms submissions from all 19 teams that participated in the shared task. Additionally, access to both stance and sentiment annotations allows us to explore several research questions. We show that although knowing the sentiment expressed by a tweet is beneficial for stance classification, it alone is not sufficient. Finally, we use additional unlabeled data through distant supervision techniques and word embeddings to further improve stance classification.", "Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton to be \"positive\", negative\" or \"neutral\". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving performance second best only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.", "In this paper, we present two methods for classification of different social network actors (individuals or organizations) such as leaders (e.g., news groups), lurkers, spammers and close associates. The first method is a two-stage process with a fuzzy-set theoretic (FST) approach to evaluation of the strengths of network links (or equivalently, actor-actor relationships) followed by a simple linear classifier to separate the actor classes. Since this method uses a lot of contextual information including actor profiles, actor-actor tweet and reply frequencies, it may be termed as a context-dependent approach. To handle the situation of limited availability of actor data for learning network link strengths, we also present a second method that performs actor classification by matching their short-term (say, roughly 25 days) tweet patterns with the generic tweet patterns of the prototype actors of different classes. Since little contextual information is used here, this can be called a context-independent approach. 
Our experimentation with over 500 randomly sampled records from a twitter database consists of 441,234 actors, 2,045,804 links, 6,481,900 tweets, and 2,312,927 total reply messages indicates that, in the context-independent analysis, a multilayer perceptron outperforms on both on classification accuracy and a new F-measure for classification performance, the Bayes classifier and Random Forest classifiers. However, as expected, the context-dependent analysis using link strengths evaluated using the FST approach in conjunction with some actor information reveals strong clustering of actor data based on their types, and hence can be considered as a superior approach when data available for training the system is abundant.", "Stance detection is a subproblem of sentiment analysis where the stance of the author of a piece of natural language text for a particular target (either explicitly stated in the text or not) is explored. The stance output is usually given as Favor, Against, or Neither. In this paper, we target at stance detection on sports-related tweets and present the performance results of our SVM-based stance classifiers on such tweets. First, we describe three versions of our proprietary tweet data set annotated with stance information, all of which are made publicly available for research purposes. Next, we evaluate SVM classifiers using different feature sets for stance detection on this data set. The employed features are based on unigrams, bigrams, hashtags, external links, emoticons, and lastly, named entities. The results indicate that joint use of the features based on unigrams, hashtags, and named entities by SVM classifiers is a plausible approach for stance detection problem on sports-related tweets." ] }
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Face detection has attracted extensive research attention in past decades. The milestone work of Viola-Jones @cite_29 uses Haar features and AdaBoost to train a cascade of face/non-face classifiers that achieves good accuracy with real-time efficiency. After that, many works focused on improving performance with more sophisticated hand-crafted features @cite_42 @cite_17 @cite_53 @cite_60 and more powerful classifiers @cite_22 @cite_37 . Besides the cascade structure, @cite_55 @cite_10 @cite_62 introduce deformable part models (DPM) into the face detection task and achieve remarkable performance. However, these methods depend heavily on the robustness of hand-crafted features and optimize each component separately, making the face detection pipeline sub-optimal.
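The attentional-cascade idea behind these detectors fits in a few lines: cheap early stages reject most non-face windows, so only a handful of candidates reach the expensive later stages. The stage functions and thresholds below are toy stand-ins for boosted Haar-feature classifiers.

```python
def cascade_detect(window, stages, thresholds):
    for stage, thr in zip(stages, thresholds):
        if stage(window) < thr:    # early rejection keeps detection real-time
            return False
    return True                    # survived every stage: report a face

# toy usage: each "stage" scores the mean intensity of a fake window
stages = [lambda w: sum(w) / len(w)] * 3
print(cascade_detect([0.9, 0.8, 0.7], stages, thresholds=[0.5, 0.6, 0.7]))
```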
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_22", "@cite_60", "@cite_53", "@cite_29", "@cite_42", "@cite_55", "@cite_10", "@cite_17" ], "mid": [ "1994215930", "2041497292", "2167955297", "2104577112" ], "abstract": [ "We present a novel boosting cascade based face detection framework using SURF features. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by two key contributions. First, the proposed framework deals with only several hundreds of multidimensional local SURF patches instead of hundreds of thousands of single dimensional haar features in the VJ framework. Second, it takes AUC as a single criterion for the convergence test of each cascade stage rather than the two conflicting criteria (false-positive-rate and detection-rate) in the VJ framework. These modifications yield much faster training convergence and much fewer stages in the final cascade. We made experiments on training face detector from large scale database. Results shows that the proposed method is able to train face detectors within one hour through scanning billions of negative samples on current personal computers. Furthermore, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed.", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS", "Locating facial feature points in images of faces is an important stage for numerous facial image interpretation tasks. In this paper we present a method for fully automatic detection of 20 facial feature points in images of expressionless faces using Gabor feature based boosted classifiers. The method adopts fast and robust face detection algorithm, which represents an adapted version of the original Viola-Jones face detector. The detected face region is then divided into 20 relevant regions of interest, each of which is examined further to predict the location of the facial feature points. The proposed facial feature point detection method uses individual feature patch templates to detect points in the relevant region of interest. These feature models are GentleBoost templates built from both gray level intensities and Gabor wavelet features. When tested on the Cohn-Kanade database, the method has achieved average recognition rates of 93 .", "A cascade face detector uses a sequence of node classifiers to distinguish faces from nonfaces. 
This paper presents a new approach to design node classifiers in the cascade detector. Previous methods used machine learning algorithms that simultaneously select features and form ensemble classifiers. We argue that if these two parts are decoupled, we have the freedom to design a classifier that explicitly addresses the difficulties caused by the asymmetric learning goal. There are three contributions in this paper: The first is a categorization of asymmetries in the learning goal and why they make face detection hard. The second is the forward feature selection (FFS) algorithm and a fast precomputing strategy for AdaBoost. FFS and the fast AdaBoost can reduce the training time by approximately 100 and 50 times, in comparison to a naive implementation of the AdaBoost feature selection method. The last contribution is a linear asymmetric classifier (LAC), a classifier that explicitly handles the asymmetric learning goal as a well-defined constrained optimization problem. We demonstrated experimentally that LAC results in an improved ensemble classifier performance." ] }
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Recent years have witnessed the advance of CNN-based face detectors. CascadeCNN @cite_49 develops a cascade architecture built on CNNs with powerful discriminative capability and high performance. @cite_0 proposes to jointly train CascadeCNN to realize end-to-end optimization. Faceness @cite_18 trains a series of CNNs for facial attribute recognition to detect partially occluded faces. MTCNN @cite_26 proposes to jointly solve face detection and alignment using several multi-task CNNs. UnitBox @cite_40 introduces a new intersection-over-union loss function.
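UnitBox's intersection-over-union loss is simple enough to sketch directly: the four bounds of a predicted box are penalised jointly through their overlap with the ground truth, rather than through four independent L2 terms. The snippet below computes the -ln(IoU) form on axis-aligned boxes; the example boxes are arbitrary.

```python
import math

def iou_loss(pred, gt):
    # boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(pred) + area(gt) - inter + 1e-9)
    return -math.log(iou + 1e-9)   # loss shrinks as the boxes overlap more

print(iou_loss((10, 10, 50, 50), (12, 8, 48, 52)))
```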
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_0", "@cite_40", "@cite_49" ], "mid": [ "1934410531", "2495387757", "2520774990", "2504335775" ], "abstract": [ "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Large pose variations remain to be a challenge that confronts real-word face detection. We propose a new cascaded Convolutional Neural Network, dubbed the name Supervised Transformer Network, to address this challenge. The first stage is a multi-task Region Proposal Network (RPN), which simultaneously predicts candidate face regions along with associated facial landmarks. The candidate regions are then warped by mapping the detected facial landmarks to their canonical positions to better normalize the face patterns. The second stage, which is a RCNN, then verifies if the warped candidate regions are valid faces or not. We conduct end-to-end learning of the cascaded network, including optimizing the canonical positions of the facial landmarks. This supervised learning of the transformations automatically selects the best scale to differentiate face non-face patterns. By combining feature maps from both stages of the network, we achieve state-of-the-art detection accuracies on several public benchmarks. For real-time performance, we run the cascaded network only on regions of interests produced from a boosting cascade face detector. Our detector runs at 30 FPS on a single CPU core for a VGA-resolution image.", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. 
With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks.", "In present object detection systems, the deep convolutional neural networks (CNNs) are utilized to predict bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods. However, existing deep CNN methods assume the object bounds to be four independent variables, which could be regressed by the l2 loss separately. Such an oversimplified assumption is contrary to the well-received observation, that those variables are correlated, resulting to less accurate localization. To address the issue, we firstly introduce a novel Intersection over Union (IoU) loss function for bounding box prediction, which regresses the four bounds of a predicted box as a whole unit. By taking the advantages of IoU loss and deep fully convolutional networks, the UnitBox is introduced, which performs accurate and efficient localization, shows robust to objects of varied shapes and scales, and converges fast. We apply UnitBox on face detection task and achieve the best performance among all published methods on the FDDB benchmark." ] }
1708.05237
2750317406
This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.
Additionally, face detection has inherited some achievements from generic object detection tasks. @cite_34 applies Faster R-CNN to face detection and achieves promising results. CMS-RCNN @cite_30 uses Faster R-CNN in face detection with body contextual information. Convnet @cite_31 integrates a CNN with a 3D face model in an end-to-end multi-task learning framework. @cite_41 combines Faster R-CNN with hard negative mining and achieves significant boosts in face detection performance. STN @cite_9 proposes a new supervised transformer network and an ROI convolution with RPN for face detection. @cite_27 presents several effective strategies to improve Faster R-CNN for face detection tasks. In this paper, inspired by the RPN in Faster R-CNN @cite_4 and the multi-scale mechanism in SSD @cite_45 , we develop a state-of-the-art face detector with real-time speed.
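A sketch of the anchor tiling that such single-shot detectors rely on: each detection layer contributes square anchors whose size is kept proportional to the layer's stride, tiled densely over the input image. The strides, scale factor and input size are illustrative assumptions, not the exact published settings.

```python
strides = [4, 8, 16, 32, 64, 128]          # assumed feature-map strides
input_size = 640                           # assumed square input
anchors = []
for s in strides:
    size = 4 * s                           # scale kept proportional to stride
    for y in range(0, input_size, s):      # one anchor per feature-map cell
        for x in range(0, input_size, s):
            cx, cy = x + s / 2, y + s / 2
            anchors.append((cx - size / 2, cy - size / 2,
                            cx + size / 2, cy + size / 2))
print(len(anchors))                        # dense coverage across all scales
```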
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_41", "@cite_9", "@cite_27", "@cite_45", "@cite_31", "@cite_34" ], "mid": [ "2964095005", "2417750831", "2772572186", "2951191545" ], "abstract": [ "While deep learning based methods for generic object detection have improved rapidly in the last two years, most approaches to face detection are still based on the R-CNN framework [11], leading to limited accuracy and processing speed. In this paper, we investigate applying the Faster RCNN [26], which has recently demonstrated impressive results on various object detection benchmarks, to face detection. By training a Faster R-CNN model on the large scale WIDER face dataset [34], we report state-of-the-art results on the WIDER test set as well as two other widely used face detection benchmarks, FDDB and the recently released IJB-A.", "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth (l_1 )-losses of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.", "Recent years have witnessed promising results of face detection using deep learning, especially for the family of region-based convolutional neural networks (R-CNN) methods and their variants. Despite making remarkable progresses, face detection in the wild remains an open research challenge especially when detecting faces at vastly different scales and characteristics. In this paper, we propose a novel framework of \"Feature Agglomeration Networks\" (FAN) to build a new single stage face detector, which not only achieves state-of-the-art performance but also runs efficiently. As inspired by the recent success of Feature Pyramid Networks (FPN) lin2016fpn for generic object detection, the core idea of our framework is to exploit inherent multi-scale features of a single convolutional neural network to detect faces of varied scales and characteristics by aggregating higher-level semantic feature maps of different scales as contextual cues to augment lower-level feature maps via a hierarchical agglomeration manner at marginal extra computation cost. 
Unlike the existing FPN approach, we construct our FAN architecture using a new Agglomerative Connection module and further propose a Hierarchical Loss to effectively train the FAN model. We evaluate the proposed FAN detector on several public face detection benchmarks and achieved new state-of-the-art results with real-time detection speed on GPU.", "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face pro- posal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by prun- ing and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state- of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of prede- fined anchor boxes in the region proposals network (RPN) by exploit- ing a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth l1 -losses [14] of both the facial key-points and the face bounding boxes. In ex- periments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks." ] }
1906.00742
2947283491
Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: feminine, masculine, gender-neutral and stereotypical, which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the gender-related information in feminine and masculine words, (b) preserves the neutrality in gender-neutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information.
Gender-Neutral Global Vectors (GN-GloVe) were proposed by adding a constraint to the Global Vectors (GloVe) @cite_12 objective such that gender-related information is confined to a sub-vector. During optimisation, the squared @math distance between gender-related sub-vectors is maximised, while simultaneously minimising the GloVe objective. GN-GloVe learns gender-debiased word embeddings from scratch from a given corpus, and cannot be used to debias pre-trained word embeddings. Moreover, similar to the hard and soft debiasing methods described above, GN-GloVe uses pre-defined lists of feminine, masculine and gender-neutral words and debiases only the words in these lists.
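The constraint can be illustrated with a toy update: each word vector reserves a trailing sub-vector for gender, gendered word pairs take a gradient-ascent step that pushes their sub-vectors apart, and gender-neutral words have that sub-vector suppressed. This is a schematic rendition of the idea, not GN-GloVe's full GloVe-based objective.

```python
import numpy as np

d, g = 8, 2                                # total dims, gender sub-vector dims
he, she, doctor = (np.random.randn(d) for _ in range(3))

def gender_part(v):
    return v[-g:]                          # the last g dims confine gender info

# One gradient-ascent step on the separation term for a gendered pair:
diff = gender_part(he) - gender_part(she)
he[-g:] += 0.1 * diff                      # maximise the squared L2 distance
she[-g:] -= 0.1 * diff
doctor[-g:] *= 0.0                         # neutral words: keep sub-vector at zero
print(np.linalg.norm(gender_part(he) - gender_part(she)))
```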
{ "cite_N": [ "@cite_12" ], "mid": [ "2950018712", "2483215953", "2909620036", "2796868841" ], "abstract": [ "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to \"debias\" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving the its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving the its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "Neural machine translation has significantly pushed forward the quality of the field. However, there are remaining big issues with the translations and one of them is fairness. Neural models are trained on large text corpora which contains biases and stereotypes. As a consequence, models inherit these social biases. Recent methods have shown results in reducing gender bias in other natural language processing applications such as word embeddings. We take advantage of the fact that word embeddings are used in neural machine translation to propose the first debiased machine translation system. 
Specifically, we propose, experiment and analyze the integration of two debiasing techniques over GloVe embeddings in the Transformer translation architecture. We evaluate our proposed system on a generic English-Spanish task, showing gains up to one BLEU point. As for the gender bias evaluation, we generate a test set of occupations and we show that our proposed system learns to equalize existing biases from the baseline system.", "We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at this http URL" ] }
1906.00742
2947283491
Word embeddings learnt from massive text collections have demonstrated significant levels of discriminative biases such as gender, racial or ethnic biases, which in turn bias the down-stream NLP applications that use those word embeddings. Taking gender-bias as a working example, we propose a debiasing method that preserves non-discriminative gender-related information, while removing stereotypical discriminative gender biases from pre-trained word embeddings. Specifically, we consider four types of information: feminine, masculine, gender-neutral and stereotypical, which represent the relationship between gender vs. bias, and propose a debiasing method that (a) preserves the gender-related information in feminine and masculine words, (b) preserves the neutrality in gender-neutral words, and (c) removes the biases from stereotypical words. Experimental results on several previously proposed benchmark datasets show that our proposed method can debias pre-trained word embeddings better than existing SoTA methods proposed for debiasing word embeddings while preserving gender-related but non-discriminative information.
Debiasing can be seen as the problem of removing information related to a protected attribute such as gender, for which adversarial learning methods @cite_17 @cite_5 @cite_30 have been proposed in the fairness-aware machine learning community @cite_33 . In these approaches, inputs are first encoded, and then two classifiers are trained -- a target-task predictor that uses the encoded input to predict the target NLP task, and a protected-attribute predictor that uses the encoded input to predict the protected attribute. The two classifiers and the encoder are learnt jointly such that the accuracy of the target-task predictor is maximised, while the accuracy of the protected-attribute predictor is minimised. However, it has been shown that although it is possible to obtain chance-level development-set accuracy for the protected attribute during training, a post-hoc classifier trained on the encoded inputs can still reach substantially high accuracies for the protected attributes. That work concludes that adversarial learning alone does not guarantee invariant representations for the protected attributes.
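The adversarial setup can be sketched in a few lines of PyTorch: the adversary learns to recover the protected attribute from the encoding, while the encoder and task head learn to solve the task and simultaneously defeat the adversary. Dimensions, data and the alternating schedule are toy assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Linear(16, 8)                     # encoder
task = nn.Linear(8, 2)                     # target-task predictor
adv = nn.Linear(8, 2)                      # protected-attribute predictor
ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-2)

x = torch.randn(32, 16)                    # toy inputs
y = torch.randint(0, 2, (32,))             # task labels
z = torch.randint(0, 2, (32,))             # protected attribute

for _ in range(100):
    h = enc(x)
    # the adversary learns to recover the protected attribute
    opt_adv.zero_grad()
    ce(adv(h.detach()), z).backward()
    opt_adv.step()
    # encoder/task minimise the task loss and *maximise* the adversary's loss
    opt_main.zero_grad()
    (ce(task(h), y) - ce(adv(h), z)).backward()
    opt_main.step()
```

As the work cited above cautions, driving the adversary to chance level during training does not by itself guarantee that a freshly trained post-hoc classifier cannot still recover the protected attribute from the encoding.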
{ "cite_N": [ "@cite_30", "@cite_5", "@cite_33", "@cite_17" ], "mid": [ "2725155646", "2767382337", "2964139811", "2031366895" ], "abstract": [ "How can we learn a classifier that is \"fair\" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training effects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. 
The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "In machine learning and computer vision, input signals are often filtered to increase data discriminability. For example, preprocessing face images with Gabor band-pass filters is known to improve performance in expression recognition tasks [1]. Sometimes, however, one may wish to purposely decrease discriminability of one classification task (a “distractor” task), while simultaneously preserving information relevant to another task (the target task): For example, due to privacy concerns, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Suppressing discriminability in distractor tasks may also be needed to improve inter-dataset generalization: training datasets may sometimes contain spurious correlations between a target attribute (e.g., facial expression) and a distractor attribute (e.g., gender). We might improve generalization to new datasets by suppressing the signal related to the distractor task in the training dataset. This can be seen as a special form of supervised regularization. In this paper we present an approach to automatically learning preprocessing filters that suppress discriminability in distractor tasks while preserving it in target tasks. We present promising results in simulated image classification problems and in a realistic expression recognition problem." ] }
1906.00939
2947576064
Prediction of user traffic in cellular networks has attracted profound attention for improving resource utilization. In this paper, we study the problem of network traffic prediction and classification by employing standard machine learning and statistical time series prediction methods, namely long short-term memory (LSTM) and autoregressive integrated moving average (ARIMA), respectively. We present an extensive experimental evaluation of the designed tools over a real network traffic dataset. Within this analysis, we explore the impact of different parameters on the effectiveness of the predictions. We further extend our analysis to the problem of network traffic classification and the prediction of traffic bursts. The results, on the one hand, demonstrate the superior performance of LSTM over ARIMA in general, especially when the training time series is long enough and is augmented with a wisely selected set of features. On the other hand, the results shed light on the circumstances in which ARIMA performs close to the optimum with lower complexity.
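The abstract contrasts ARIMA and LSTM for one-step traffic forecasting. The sketch below is a hedged illustration only, not the paper's code: the synthetic series, the ARIMA order (2, 0, 2), the 24-step lookback window, and the network size are all assumptions made for the example.

```python
# Hedged sketch: ARIMA vs. LSTM one-step traffic forecasting.
# Orders, window size, and network dimensions are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic stand-in for a per-cell traffic series (daily periodicity + noise).
t = np.arange(500)
series = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
train, test = series[:400], series[400:]

# --- ARIMA baseline (order chosen arbitrarily for illustration) ---
arima = ARIMA(train, order=(2, 0, 2)).fit()
arima_pred = arima.forecast(steps=test.size)

# --- Minimal LSTM forecaster over sliding windows ---
W = 24  # lookback window (assumed)

def windows(x, w):
    X = np.stack([x[i:i + w] for i in range(len(x) - w)])
    y = x[w:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32))

X_tr, y_tr = windows(train, W)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_tr), y_tr)
    loss.backward()
    opt.step()

# Recursive one-step forecasts over the test horizon.
hist = list(train[-W:])
lstm_pred = []
model.eval()
with torch.no_grad():
    for _ in range(test.size):
        x = torch.tensor(hist[-W:], dtype=torch.float32).view(1, W, 1)
        yhat = model(x).item()
        lstm_pred.append(yhat)
        hist.append(yhat)

print("ARIMA MSE:", np.mean((arima_pred - test) ** 2))
print("LSTM  MSE:", np.mean((np.array(lstm_pred) - test) ** 2))
```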
Traffic classification has been a hot topic in computer communication networks for more than two decades due to its vastly diverse applications in resource provisioning, billing and service prioritization, and security and anomaly detection @cite_13 @cite_15 . While a variety of statistical and machine learning tools have been applied to traffic classification to date, e.g., @cite_21 and references therein, most of these works depend on features that are either unavailable in encrypted traffic or cannot be extracted in real time, e.g., port numbers and payload data @cite_21 @cite_13 . In @cite_7 , classification of encrypted traffic using a convolutional neural network over 1400 packet-based features as well as network flow features has been investigated; such an approach is too complex for a cellular network to run for each user. Reviewing the state of the art reveals a need for low-complexity, scalable cellular traffic classification schemes that (i) do not look into the packets, due to encryption and latency, (ii) do not analyze inter-packet arrival times for all packets, due to latency and complexity, and (iii) use as few features as possible. This research gap is addressed in this work.
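To make the "few flow-level features, no payload inspection" requirement concrete, here is a hedged sketch of such a classifier. The feature set, the synthetic labels, and the random-forest choice are assumptions for illustration, not the method of any cited paper.

```python
# Hedged sketch: traffic classification from a few flow-level features,
# with no payload inspection. Features, labels, and model are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-flow features: total bytes, mean packet size,
# flow duration (s), and uplink/downlink byte ratio.
X = np.column_stack([
    rng.lognormal(8, 2, n),    # total bytes
    rng.normal(800, 200, n),   # mean packet size
    rng.exponential(5, n),     # duration
    rng.beta(2, 5, n),         # up/down ratio
])
# Crude synthetic classes correlated with the features
# (stand-ins for, e.g., video / web / messaging).
y = np.digitize(X[:, 1] + 500 * X[:, 3], [800, 1000])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```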
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_13", "@cite_7" ], "mid": [ "2952806211", "2891570868", "2468337398", "2096118443" ], "abstract": [ "Traffic classification has been studied for two decades and applied to a wide range of applications from QoS provisioning and billing in ISPs to security-related applications in firewalls and intrusion detection systems. Port-based, data packet inspection, and classical machine learning methods have been used extensively in the past, but their accuracy have been declined due to the dramatic changes in the Internet traffic, particularly the increase in encrypted traffic. With the proliferation of deep learning methods, researchers have recently investigated these methods for traffic classification task and reported high accuracy. In this article, we introduce a general framework for deep-learning-based traffic classification. We present commonly used deep learning methods and their application in traffic classification tasks. Then, we discuss open problems and their challenges, as well as opportunities for traffic classification.", "Nowadays, network traffic classification plays an important role in many fields including network management, intrusion detection system, malware detection system, etc. Most of the previous research works concentrate on features extracted in the non-encrypted network traffic. However, these features are not compatible with all kind of traffic characterization. Google's QUIC protocol (Quick UDP Internet Connection protocol) is implemented in many services of Google. Nevertheless, the emergence of this protocol imposes many obstacles for traffic classification due to the reduction of visibility for operators into network traffic, so the port and payload- based traditional methods cannot be applied to identify the QUIC- based services. To address this issue, we proposed a novel technique for traffic classification based on the convolutional neural network which combines the feature extraction and classification phase into one system. The proposed method uses the flow and packet-based features to improve the performance. In comparison with current methods, the proposed method can detect some kind of QUIC-based services such as Google Hangout Chat, Google Hangout Voice Call, YouTube, File transfer and Google play music. Besides, the proposed method can achieve the microaveraging F1-score of 99.24 percent.", "With the widespread use of encrypted data transport, network traffic encryption is becoming a standard nowadays. This presents a challenge for traffic measurement, especially for analysis and anomaly detection methods, which are dependent on the type of network traffic. In this paper, we survey existing approaches for classification and analysis of encrypted traffic. First, we describe the most widespread encryption protocols used throughout the Internet. We show that the initiation of an encrypted connection and the protocol structure give away much information for encrypted traffic classification and analysis. Then, we survey payload and feature-based classification methods for encrypted traffic and categorize them using an established taxonomy. The advantage of some of described classification methods is the ability to recognize the encrypted application protocol in addition to the encryption protocol. Finally, we make a comprehensive comparison of the surveyed feature-based classification methods and present their weaknesses and strengths. 
", "The research community has begun looking for IP traffic classification techniques that do not rely on well-known TCP or UDP port numbers, or interpreting the contents of packet payloads. New work is emerging on the use of statistical traffic characteristics to assist in the identification and classification process. This survey paper looks at emerging research into the application of Machine Learning (ML) techniques to IP traffic classification - an inter-disciplinary blend of IP networking and data mining techniques. We provide context and motivation for the application of ML techniques to IP traffic classification, and review 18 significant works that cover the dominant period from 2004 to early 2007. These works are categorized and reviewed according to their choice of ML strategies and primary contributions to the literature. We also discuss a number of key requirements for the employment of ML-based traffic classifiers in operational IP networks, and qualitatively critique the extent to which the reviewed works meet these requirements. Open issues and challenges in the field are also discussed." ] }
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" that transport data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted, and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
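The abstract describes running several online hiring rules passively and delegating each decision to whichever rule performed best in recent history. The sketch below is a toy rendering of that idea, not the paper's algorithm: the three hiring rules, the candidate model, and the history-window length are all assumptions.

```python
# Hedged toy sketch of an ensemble-of-online-hiring-algorithms selector.
# The rules and the history window are assumptions for illustration;
# the paper's actual algorithms and evaluation differ.
import random
from collections import defaultdict, deque

def take_first(cands):
    return cands[0]

def secretary(cands):
    # Classical 1/e rule: observe a prefix, then take the first
    # candidate better (lower delay) than everything observed.
    k = max(1, int(len(cands) / 2.718))
    best_seen = min(cands[:k])
    for c in cands[k:]:
        if c < best_seen:
            return c
    return cands[-1]

def threshold(cands, thr=0.3):
    # Take the first candidate under a fixed delay threshold.
    for c in cands:
        if c <= thr:
            return c
    return cands[-1]

RULES = {"first": take_first, "secretary": secretary, "threshold": threshold}
history = {name: deque(maxlen=50) for name in RULES}  # recent performance

random.seed(0)
total = defaultdict(float)
for request in range(1000):
    # Hypothetical candidate ferries; each value = expected delivery delay.
    cands = [random.random() for _ in range(20)]
    # Run every rule passively; record what each would have achieved.
    outcomes = {name: rule(cands) for name, rule in RULES.items()}
    for name, d in outcomes.items():
        history[name].append(d)
        total[name] += d
    # Delegate to the rule with the best recent mean delay.
    chosen = min(RULES, key=lambda n: sum(history[n]) / len(history[n]))
    total["ensemble"] += outcomes[chosen]

for name, s in sorted(total.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} mean delay {s / 1000:.3f}")
```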
The literature is rich with research that deals with network issues in smart communities. @cite_3 presented the networking requirements for different smart city applications and additionally described network architectures for different smart city systems. In @cite_8 , the authors discussed the networking and communications challenges encountered in smart cities. @cite_13 state that deploying wireless sensor networks along with the aggregation network in different locations in the smart city is very costly and consequently propose an infrastructure-less approach in which vehicles equipped with sensors are used to collect data.
{ "cite_N": [ "@cite_13", "@cite_3", "@cite_8" ], "mid": [ "2810743368", "2140669960", "2790486829", "2034419870" ], "abstract": [ "Significant advancements in various technologies such as Cyber-Physical systems (CPS), Internet of Things (IoT), Wireless Sensor Networks (WSNs), Cloud Computing, and Unmanned Aerial Vehicles (UAVs) have taken place lately. These important advancements have led to their adoption in the smart city model, which is used by many organizations for large cities around the world to significantly enhance and improve the quality of life of the inhabitants, improve the utilization of city resources, and reduce operational costs. However, in order to reach these important objectives, efficient networking and communication protocols are needed in order to provide the necessary coordination and control of the various system components. In this paper, we identify the networking characteristics and requirements of smart city applications and identify the networking protocols that can be used to support the various data traffic flows that are needed between the different components in such applications. In addition, we provide an illustration of networking architectures of selected smart city systems, which include pipeline monitoring and control, smart grid, and smart water systems.", "Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.", "New technologies such as sensor networks have been incorporated into the management of buildings for organizations and cities. Sensor networks have led to an exponential increase in the volume of data available in recent years, which can be used to extract consumption patterns for the purposes of energy and monetary savings. For this reason, new approaches and strategies are needed to analyze information in big data environments. This paper proposes a methodology to extract electric energy consumption patterns in big data time series, so that very valuable conclusions can be made for managers and governments. The methodology is based on the study of four clustering validity indices in their parallelized versions along with the application of a clustering technique. In particular, this work uses a voting system to choose an optimal number of clusters from the results of the indices, as well as the application of the distributed version of the k-means algorithm included in Apache Spark’s Machine Learning Library. 
The results, using electricity consumption for the years 2011–2017 for eight buildings of a public university, are presented and discussed. In addition, the performance of the proposed methodology is evaluated using synthetic big data, which can represent thousands of buildings in a smart city. Finally, policies derived from the patterns discovered are proposed to optimize energy usage across the university campus.", "Sensor network technology promises a vast increase in automatic data collection capabilities through efficient deployment of tiny sensing devices. The technology will allow users to measure phenomena of interest at unprecedented spatial and temporal densities. However, as with almost every data-driven technology, the many benefits come with a significant challenge in data reliability. If wireless sensor networks are really going to provide data for the scientific community, citizen-driven activism, or organizations which test that companies are upholding environmental laws, then an important question arises: How can a user trust the accuracy of information provided by the sensor network? Data integrity is vulnerable to both node and system failures. In data collection systems, faults are indicators that sensor nodes are not providing useful information. In data fusion systems the consequences are more dire; the final outcome is easily affected by corrupted sensor measurements, and the problems are no longer visibly obvious. In this article, we investigate a generalized and unified approach for providing information about the data accuracy in sensor networks. Our approach is to allow the sensor nodes to develop a community of trust. We propose a framework where each sensor node maintains reputation metrics which both represent past behavior of other nodes and are used as an inherent aspect in predicting their future behavior. We employ a Bayesian formulation, specifically a beta reputation system, for the algorithm steps of reputation representation, updates, integration and trust evolution. This framework is available as a middleware service on motes and has been ported to two sensor network operating systems, TinyOS and SOS. We evaluate the efficacy of this framework using multiple contexts: (1) a lab-scale test bed of Mica2 motes, (2) Avrora simulations, and (3) real data sets collected from sensor network deployments in James Reserve." ] }
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" that transport data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted, and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
@cite_7 present a system where public and semi-public vehicles are used for transporting data between stations distributed around the city and a main server. @cite_4 introduce the concept of Smart Vehicle as a Service (SVaaS). They predict the future location of a vehicle in order to guarantee continuous vehicle services in smart cities. In another work @cite_12 , the authors indicate that cars will be the building blocks for future smart cities due to their mobility, communication, and processing capabilities. They propose Car4ICT, an architecture that uses cars as the main ICT resource in a smart city. The authors in @cite_10 propose an algorithm for collecting and forwarding data through vehicles in a multi-hop fashion in smart cities. They propose a ranking system in which vehicles are ranked based on the connection time between the on-board unit (OBU) and the road-side unit (RSU). The authors claim that their ranking system results in a better delivery ratio and decreases the number of replicated messages.
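The connection-time ranking idea of @cite_10 can be sketched roughly as follows. This is a loose illustration, not the cited paper's exact scheme: the scoring formula (mean contact duration), the vehicle records, and the top-k forwarding rule are all assumptions.

```python
# Hedged sketch of connection-time-based vehicle ranking for forwarding,
# loosely inspired by the @cite_10 idea; the score and the sample
# records are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    # Past OBU-RSU contact durations in seconds (assumed logs).
    contact_durations: list

    def score(self) -> float:
        # Longer average RSU contacts -> higher chance of delivering
        # a packet to the infrastructure.
        if not self.contact_durations:
            return 0.0
        return sum(self.contact_durations) / len(self.contact_durations)

def pick_forwarders(neighbors, k=2):
    """Rank neighboring vehicles and return the top-k as next hops."""
    return sorted(neighbors, key=lambda v: v.score(), reverse=True)[:k]

neighbors = [
    Vehicle("taxi-17", [42.0, 55.0, 38.0]),
    Vehicle("bus-03", [120.0, 95.0]),
    Vehicle("car-88", [12.0]),
]
for v in pick_forwarders(neighbors):
    print(v.vid, round(v.score(), 1))
```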
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_12", "@cite_7" ], "mid": [ "2787210773", "2787288190", "2057661320", "2443552644" ], "abstract": [ "Efficient and cost effective data collection from smart city sensors through vehicular networks is crucial for many applications, such as travel comfort, safety and urban sensing. Static and mobile sensors data can be gathered through the vehicles that will be used as data mules and, while moving, they will be able to access road side units (RSUs), and then, send the data to a server in the cloud. Therefore, it is important to research how to use opportunistic vehicular networks to forward data packets through each other in a multi-hop fashion until they reach the destination. This paper proposes a novel data forwarding algorithm for urban vehicular networks taking into consideration the rank of each vehicle, which is based on the probability to reach a road side unit. The proposed forwarding algorithm is evaluated in the mOVERS emulator considering different forwarding decisions, such as, no restriction on broadcasting packets to neighboring On-Board Units (OBUs), restriction on broadcasting by the average rank of neighboring OBUs, and the number of hops between source and destination. Results show that, by restricting the broadcast messages in the proposed algorithm, we are able to reduce the network's overhead, therefore increasing the packet delivery ratio between the sensors and the server.", "The Smart City vision is to improve quality of life and efficiency of urban operations and services while meeting economic, social, and environmental needs of its dwellers. Realizing this vision requires cities to make significant investments in all kinds of smart objects. Recently, the concept of smart vehicle has also emerged as a viable solution for various pressing problems such as traffic management, drivers' comfort, road safety and on-demand provisioning services. With the availability of onboard vehicular services, these vehicles will be a constructive key enabler of smart cities. Smart vehicles are capable of sharing and storing digital content, sensing and monitoring its surroundings, and mobilizing on-demand services. However, the provisioning of these services is challenging due to different ownerships, costs, demand levels, and rewards. In this paper, we present the concept of Smart Vehicle as a Service (SVaaS) to provide continuous vehicular services in smart cities. The solution relies on a location prediction mechanism to determine a vehicle's future location. Once a vehicle's predicted location is determined, a Quality of Experience (QoE) based service selection mechanism is used to select services that are needed before the vehicle's arrival. We provide simulation results to show that our approach can adequately establish vehicular services in a timely and efficient manner. It also shows that the number of utilized services have been doubled when prediction and service discovery is applied.", "Vehicular communications are becoming an emerging technology for safety control, traffic control, urban monitoring, pollution control, and many other road safety and traffic efficiency applications. All these applications generate a lot of data which should be distributed among communication parties such as vehicles and users in an efficient manner. On the other hand, the generated data cause a significant load on a network infrastructure, which aims at providing uninterrupted services to the communication parties in an urban scenario. 
To balance the load on the network in such situations in the urban scenario, frequently accessed content should be cached at specified locations either in the vehicles or at some other sites on the infrastructure providing connectivity to the vehicles. However, due to the high mobility and sparse distribution of the vehicles on the road, sometimes it is not feasible to place the contents on the existing infrastructure, and useful information generated from the vehicles may not be sent to its final destination. To address this issue, in this paper, we propose a new peer-to-peer (P2P) cooperative caching scheme. To minimize the load on the infrastructure, traffic information among vehicles is shared in a P2P manner using a Markov chain model with three states. The replacement of existing data to accommodate newly arrived data is achieved in a probabilistic manner. The probability is calculated using the time to stay in a waiting state and the frequency of access of a particular data item in a given time interval. The performance of the proposed scheme is evaluated in comparison to those of existing schemes with respect to metrics such as network congestion, query delay, and hit ratio. Analysis results show that the proposed scheme has reduced the congestion and query delay by 30% with an increase in the hit ratio by 20%.", "This paper presents the first study on scheduling for cooperative data dissemination in a hybrid infrastructure-to-vehicle (I2V) and vehicle-to-vehicle (V2V) communication environment. We formulate the novel problem of cooperative data scheduling (CDS). Each vehicle informs the road-side unit (RSU) the list of its current neighboring vehicles and the identifiers of the retrieved and newly requested data. The RSU then selects sender and receiver vehicles and corresponding data for V2V communication, while it simultaneously broadcasts a data item to vehicles that are instructed to tune into the I2V channel. The goal is to maximize the number of vehicles that retrieve their requested data. We prove that CDS is NP-hard by constructing a polynomial-time reduction from the Maximum Weighted Independent Set (MWIS) problem. Scheduling decisions are made by transforming CDS to MWIS and using a greedy method to approximately solve MWIS. We build a simulation model based on realistic traffic and communication characteristics and demonstrate the superiority and scalability of the proposed solution. The proposed model and solution, which are based on the centralized scheduler at the RSU, represent the first known vehicular ad hoc network (VANET) implementation of software defined network (SDN) concept." ] }