diff --git "a/data/de/test.jsonl" "b/data/de/test.jsonl" new file mode 100644--- /dev/null +++ "b/data/de/test.jsonl" @@ -0,0 +1,618 @@ +{"source": "Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.\n", "target": ["FearNet ist ein speichereffizientes neuronales Netz, das von der Gedächtnisbildung im Säugetiergehirn inspiriert ist und inkrementelles Klassenlernen ohne katastrophales Vergessen ermöglicht.", "In diesem Beitrag wird eine neuartige Lösung für ein inkrementelles Klassifizierungsproblem auf der Grundlage eines dualen Speichersystems vorgestellt. "]} +{"source": "Multi-view learning can provide self-supervision when different views are available of the same data. Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion. One framework uses a generative objective and the other a discriminative one. In both frameworks, the final representation is an ensemble of two views, in which, one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model. 
We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learnt counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.", "target": ["Lernen mit mehreren Ansichten verbessert das unüberwachte Lernen von Satzrepräsentationen", "Der Ansatz verwendet verschiedene, komplementäre Kodierer des Eingabesatzes und eine Konsensmaximierung.", "Der Artikel präsentiert einen Multi-View-Rahmen zur Verbesserung der Satzrepräsentation in NLP-Aufgaben unter Verwendung generativer und diskriminativer Zielarchitekturen.", "Diese Arbeit zeigt, dass Multi-View Frameworks beim Lernen von Satzrepräsentationen effektiver sind als die Verwendung einzelner Encoder."]} +{"source": "We show how discrete objects can be learnt in an unsupervised fashion from pixels, and how to perform reinforcement learning using this object representation.\n\n More precisely, we construct a differentiable mapping from an image to a discrete tabular list of objects, where each object consists of a differentiable position, feature vector, and scalar presence value that allows the representation to be learnt using an attention mechanism.\n\n Applying this mapping to Atari games, together with an interaction net-style architecture for calculating quantities from objects, we construct agents that can play Atari games using objects learnt in an unsupervised fashion. During training, many natural objects emerge, such as the ball and paddles in Pong, and the submarine and fish in Seaquest.\n\n This gives the first reinforcement learning agent for Atari with an interpretable object representation, and opens the avenue for agents that can conduct object-based exploration and generalization.", "target": ["Wir zeigen, wie diskrete Objekte auf unüberwachte Weise aus Pixeln gelernt werden können und wie man mit dieser Objektrepräsentation Reinforcement Learning durchführen kann.", "Eine Methode zum Lernen von Objektrepräsentationen aus Pixeln für die Durchführung von Reinforcement Learning. ", "Die Arbeit schlägt eine neuronale Architektur vor, um Videoströme auf eine diskrete Sammlung von Objekten abzubilden, ohne menschliche Annotationen, unter Verwendung eines unbeaufsichtigten Pixel-Rekonstruktionsverlustes. "]} +{"source": "Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived \"top-down\" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than \"bottom-up\" saliency features for rapid image categorization. 
As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers.", "target": ["Ein umfangreicher Datensatz zum Training von Aufmerksamkeitsmodellen für die Objekterkennung führt zu einer genaueren, interpretierbaren und menschenähnlichen Objekterkennung.", "Die jüngsten Fortschritte in der visuellen Erkennung sind auf die Verwendung visueller Aufmerksamkeitsmechanismen in tiefen Convolutional Networks zurückzuführen, die durch eine schwache Form der Überwachung auf der Grundlage von Bildklassenbezeichnungen lernen, worauf sie sich konzentrieren sollen.", "Es wird ein neuer Ansatz zum Thema Aufmerksamkeit vorgestellt, bei dem ein großer Aufmerksamkeitsdatensatz gesammelt und verwendet wird, um ein NN auf überwachte Weise zu trainieren, um die selbstberichtete menschliche Aufmerksamkeit zu nutzen.", "Diese Arbeit schlägt einen neuen Ansatz vor, um informativere Signale zu verwenden, insbesondere Regionen, die Menschen auf Bildern als wichtig erachten, um tiefe Convolutional Neural Networks zu verbessern."]} +{"source": "In recent years, deep neural networks have demonstrated outstanding performancein many machine learning tasks. However, researchers have discovered that thesestate-of-the-art models are vulnerable to adversarial examples: legitimate examples added by small perturbations which are unnoticeable to human eyes. Adversarial training, which augments the training data with adversarial examples duringthe training process, is a well known defense to improve the robustness of themodel against adversarial attacks. However, this robustness is only effective tothe same attack method used for adversarial training. Madry et al. (2017) suggest that effectiveness of iterative multi-step adversarial attacks and particularlythat projected gradient descent (PGD) may be considered the universal first order adversary and applying the adversarial training with PGD implies resistanceagainst many other first order attacks. However, the computational cost of theadversarial training with PGD and other multi-step adversarial examples is muchhigher than that of the adversarial training with other simpler attack techniques. In this paper, we show how strong adversarial examples can be generated only ata cost similar to that of two runs of the fast gradient sign method (FGSM), allowing defense against adversarial attacks with a robustness level comparable to thatof the adversarial training with multi-step adversarial examples. 
We empiricallydemonstrate the effectiveness of the proposed two-step defense approach againstdifferent attack methods and its improvements over existing defense strategies.", "target": ["Wir haben eine zeiteffiziente Verteidigungsmethode gegen einstufige und iterative adversarial Angriffe vorgeschlagen.", "Vorschlag einer neuartigen, rechnerisch effizienten Methode mit dem Namen e2SAD, die für jede saubere Trainingsprobe einen Satz von zwei adversarial Trainingsbeispielen erzeugt.", "Der Beitrag stellt eine zweistufige adversarische Verteidigungsmethode vor, die zwei adversarische Beispiele pro sauberer Probe erzeugt und sie in die eigentliche Trainingsschleife einbezieht, um Robustheit zu erreichen, und behauptet, dass sie teurere iterative Methoden übertreffen kann.", "Die Arbeit stellt einen 2-Schritt-Ansatz vor, um starke Gegenbeispiele zu generieren, und zwar zu weitaus geringeren Kosten als bei aktuellen iterativen mehrstufigen Gegenangriffen."]} +{"source": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Little studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting.", "target": ["Ein Vergleich von fünf tiefen neuronalen Netzwerkarchitekturen zur Erkennung bösartiger Domänennamen zeigt erstaunlich wenig Unterschiede.", "Die Autoren schlagen die Verwendung von fünf tiefgreifenden Architekturen für die Cybersicherheitsaufgabe der Erkennung von Algorithmen der Domänenerzeugung vor.", "Wendet verschiedene NN-Architekturen zur Klassifizierung von URLs an, die mit Gutartig und Malware in Verbindung stehen.", "In diesem Beitrag wird vorgeschlagen, Domänennamen automatisch als bösartig oder gutartig zu erkennen, indem tiefe Netzwerke trainiert werden, um die Zeichenfolge direkt als solche zu klassifizieren."]} +{"source": "Recognizing the relationship between two texts is an important aspect of natural language understanding (NLU), and a variety of neural network models have been proposed for solving NLU tasks. Unfortunately, recent work showed that the datasets these models are trained on often contain biases that allow models to achieve non-trivial performance without possibly learning the relationship between the two texts. We propose a framework for building robust models by using adversarial learning to encourage models to learn latent, bias-free representations. We test our approach in a Natural Language Inference (NLI) scenario, and show that our adversarially-trained models learn robust representations that ignore known dataset-specific biases. 
Our experiments demonstrate that our models are more robust to new NLI datasets.", "target": ["Adversarial-Learning-Methoden ermutigen NLI-Modelle dazu, datensatzspezifische Verzerrungen zu ignorieren und helfen bei der Übertragung von Modellen auf andere Datensätze.", "Die Arbeit schlägt einen adversarial Aufbau vor, um Annotationsartefakte in natürlichsprachlichen Inferenzdaten zu entschärfen", "In diesem Beitrag wird eine Methode zur Beseitigung von Verzerrungen eines textuellen Entailment-Modells durch ein adversarial Trainingsziel vorgestellt. "]} +{"source": "We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.", "target": ["Ein neuer hochmoderner Ansatz für die Einbettung von Wissensgraphen.", "Es wird eine neuronale Bewertungsfunktion für Verknüpfungsvorhersagen vorgestellt, die Symmetrie, Antisymmetrie, Inversion und Kompositionsmuster von Beziehungen in einer Wissensbasis ableiten kann.", "In diesem Beitrag wird ein Ansatz zur Einbettung von Wissensgraphen vorgeschlagen, bei dem Beziehungen als Rotationen im komplexen Vektorraum modelliert werden.", "Vorschlagen einer Methode zur Einbettung von Graphen, die für die Vorhersage von Links verwendet werden kann."]} +{"source": "Deep learning algorithms have been known to be vulnerable to adversarial perturbations in various tasks such as image classification. This problem was addressed by employing several defense methods for detection and rejection of particular types of attacks. However, training and manipulating networks according to particular defense schemes increases computational complexity of the learning algorithms. In this work, we propose a simple yet effective method to improve robustness of convolutional neural networks (CNNs) to adversarial attacks by using data dependent adaptive convolution kernels. To this end, we propose a new type of HyperNetwork in order to employ statistical properties of input data and features for computation of statistical adaptive maps. Then, we filter convolution weights of CNNs with the learned statistical maps to compute dynamic kernels. Thereby, weights and kernels are collectively optimized for learning of image classification models robust to\n adversarial attacks without employment of additional target detection and rejection algorithms.\n We empirically demonstrate that the proposed method enables CNNs to spontaneously defend against different types of attacks, e.g. 
attacks generated by Gaussian noise, fast gradient sign methods (Goodfellow et al., 2014) and a black-box attack (Narodytska & Kasiviswanathan, 2016).", "target": ["Wir haben das CNN mit Hilfe von HyperNetworks modifiziert und eine bessere Robustheit gegenüber ungünstigen Beispielen festgestellt.", "Verbesserung der Robustheit und Zuverlässigkeit von tiefen neuronalen Convolutional Neural Networks durch Verwendung datenabhängiger Convolution Kernerls."]} +{"source": "Adapting deep networks to new concepts from a few examples is challenging, due to the high computational requirements of standard fine-tuning procedures.\n Most work on few-shot learning has thus focused on simple learning techniques for adaptation, such as nearest neighbours or gradient descent.\n Nonetheless, the machine learning literature contains a wealth of methods that learn non-deep models very efficiently.\n In this paper, we propose to use these fast convergent methods as the main adaptation mechanism for few-shot learning.\n The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.\n This requires back-propagating errors through the solver steps.\n While normally the cost of the matrix operations involved in such a process would be significant, by using the Woodbury identity we can make the small number of examples work to our advantage.\n We propose both closed-form and iterative solvers, based on ridge regression and logistic regression components.\n Our methods constitute a simple and novel approach to the problem of few-shot learning and achieve performance competitive with or superior to the state of the art on three benchmarks.", "target": ["Wir schlagen einen Meta-Learning-Ansatz für die Few-Shot Klassifizierung vor, der durch Backpropagation über die Lösung von schnellen Solvern wie Ridge-Regression oder logistischer Regression eine hohe Leistung bei hoher Geschwindigkeit erzielt.", "In dem Papier wird ein Algorithmus für das Meta-Lernen vorgeschlagen, der darauf hinausläuft, die Merkmale festzulegen (d. h. alle verborgenen Schichten eines tiefen NN) und jede Aufgabe als eigene Endschicht zu behandeln, die eine Ridge-Regression oder eine logistische Regression sein könnte.", "In diesem Papier wird ein Meta-Learning-Ansatz für das Problem der Few-Shot Klassifizierung vorgeschlagen. Die Methode basiert auf der Parametrisierung des Lerners für jede Aufgabe durch einen geschlossene Form Löser."]} +{"source": "While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes), and the form of an annotation (typically yes/no binary feedback). To annotate examples corpora for multiclass classification, we might need to ask multiple yes/no questions, exploiting a label hierarchy if one is available. To address this more realistic setting, we propose active learning with partial feedback (ALPF), where the learner must actively choose both which example to label and which binary question to ask. At each step, the learner selects an example, asking if it belongs to a chosen (possibly composite) class. Each answer eliminates some classes, leaving the learner with a partial label. The learner may then either ask more questions about the same example (until an exact label is uncovered) or move on immediately, leaving the first example partially labeled. 
Active learning with partial labels requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels between rounds. Experiments on Tiny ImageNet demonstrate that our most effective method improves 26% (relative) in top-1 classification accuracy compared to i.i.d. baselines and standard active learners given 30% of the annotation budget that would be required (naively) to annotate the dataset. Moreover, ALPF-learners fully annotate TinyImageNet at 42% lower cost. Surprisingly, we observe that accounting for per-example annotation costs can alter the conventional wisdom that active learners should solicit labels for hard examples.", "target": ["Wir bieten eine neue Perspektive auf das Training eines maschinellen Lernmodells von Grund auf in einer hierarchischen Umgebung, d.h. wir betrachten es als eine wechselseitige Kommunikation zwischen Mensch und Algorithmus und untersuchen, wie wir die Effizienz messen und verbessern können. ", "Es wird eine neue Einstellung für aktives Lernen eingeführt, bei der das Orakel eine partielle oder schwache Kennzeichnung anbietet, anstatt nach der Kennzeichnung eines bestimmten Beispiels zu fragen, was zu einem einfacheren Abruf von Informationen führt.", "In dieser Arbeit wird eine Methode des aktiven Lernens mit partiellem Feedback vorgeschlagen, die bei einem begrenzten Budget besser abschneidet als die existierenden Basissysteme.", "In dem Beitrag wird ein Mehrklassen-Klassifizierungsproblem betrachtet, bei dem Etiketten in eine bestimmte Anzahl M von Teilmengen gruppiert werden, die alle einzelnen Etiketten als Singletons enthalten."]} +{"source": "Despite their prevalence, Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions. Here we consider an alternative, which embeds data as discrete probability distributions in a Wasserstein space, endowed with an optimal transport metric. Wasserstein spaces are much larger and more flexible than Euclidean spaces, in that they can successfully embed a wider variety of metric structures. We propose to exploit this flexibility by learning an embedding that captures the semantic information in the Wasserstein distance between embedded distributions. We examine empirically the representational capacity of such learned Wasserstein embeddings, showing that they can embed a wide variety of complex metric structures with smaller distortion than an equivalent Euclidean embedding. We also investigate an application to word embedding, demonstrating a unique advantage of Wasserstein embeddings: we can directly visualize the high-dimensional embedding, as it is a probability distribution on a low-dimensional space. This obviates the need for dimensionality reduction techniques such as t-SNE for visualization.", "target": ["Wir zeigen, dass Wasserstein-Räume gute Ziele für die Einbettung von Daten mit komplexer semantischer Struktur sind.", "Lernt Einbettungen in einem diskreten Raum von Wahrscheinlichkeitsverteilungen unter Verwendung einer minimierten, regularisierten Version von Wasserstein-Distanzen.", "Die Arbeit beschreibt eine neue Einbettungsmethode, die Daten in den Raum der Wahrscheinlichkeitsmaße einbettet, die mit der Wasserstein-Distanz ausgestattet sind. 
", "In dem Beitrag wird vorgeschlagen, die Daten in niedrigdimensionale Wasserstein-Räume einzubetten, die die zugrunde liegende Struktur der Daten genauer erfassen können."]} +{"source": "Clustering high-dimensional datasets is hard because interpoint distances become less informative in high-dimensional spaces. We present a clustering algorithm that performs nonlinear dimensionality reduction and clustering jointly. The data is embedded into a lower-dimensional space by a deep autoencoder. The autoencoder is optimized as part of the clustering process. The resulting network produces clustered data. The presented approach does not rely on prior knowledge of the number of ground-truth clusters. Joint nonlinear dimensionality reduction and clustering are formulated as optimization of a global continuous objective. We thus avoid discrete reconfigurations of the objective that characterize prior clustering algorithms. Experiments on datasets from multiple domains demonstrate that the presented algorithm outperforms state-of-the-art clustering schemes, including recent methods that use deep networks.", "target": ["Ein Clustering-Algorithmus, der eine gemeinsame nichtlineare Dimensionalitätsreduktion und ein Clustering durch Optimierung eines globalen kontinuierlichen Ziels durchführt.", "Stellt einen Clustering-Algorithmus vor, der tiefe Autoencoder und Clustering als ein globales, kontinuierliches Ziel gemeinsam löst und bessere Ergebnisse als moderne Clustering-Schemata zeigt.", "Deep Continuous Clustering ist eine Clustering-Methode, die das Ziel des Autoencoders mit dem Ziel des Clustering verbindet und dann mit SGD trainiert."]} +{"source": "Deep convolutional neural networks (CNNs) are deployed in various applications but demand immense computational requirements. Pruning techniques and Winograd convolution are two typical methods to reduce the CNN computation. However, they cannot be directly combined because Winograd transformation fills in the sparsity resulting from pruning. Li et al. (2017) propose sparse Winograd convolution in which weights are directly pruned in the Winograd domain, but this technique is not very practical because Winograd-domain retraining requires low learning rates and hence significantly longer training time. Besides, Liu et al. (2018) move the ReLU function into the Winograd domain, which can help increase the weight sparsity but requires changes in the network structure. To achieve a high Winograd-domain weight sparsity without changing network structures, we propose a new pruning method, spatial-Winograd pruning. As the first step, spatial-domain weights are pruned in a structured way, which efficiently transfers the spatial-domain sparsity into the Winograd domain and avoids Winograd-domain retraining. For the next step, we also perform pruning and retraining directly in the Winograd domain but propose to use an importance factor matrix to adjust weight importance and weight gradients. This adjustment makes it possible to effectively retrain the pruned Winograd-domain network without changing the network structure. 
For the three models on the datasets of CIFAR-10, CIFAR-100, and ImageNet, our proposed method can achieve the Winograd-domain sparsities of 63%, 50%, and 74%, respectively.", "target": ["Um die Berechnung von Convolutional Neural Networks zu beschleunigen, schlagen wir eine neue zweistufige Pruning Technik vor, die eine höhere Sparsamkeit der Winograd-Domäne erreicht, ohne die Netzstruktur zu verändern.", "Schlägt ein räumliches Winograd Pruning Framework vor, das es ermöglicht, Pruned Gewichte aus dem räumlichen Bereich im Winograd-Bereich zu behalten und die Spärlichkeit des Winograd-Bereichs zu verbessern.", "Schlägt zwei Techniken für das Pruning von Convolutional Layers vor, die den Winograd-Algorithmus verwenden."]} +{"source": "In federated learning problems, data is scattered across different servers and exchanging or pooling it is often impractical or prohibited. We develop a Bayesian nonparametric framework for federated learning with neural networks. Each data server is assumed to train local neural network weights, which are modeled through our framework. We then develop an inference approach that allows us to synthesize a more expressive global network without additional supervision or data pooling. We then demonstrate the efficacy of our approach on federated learning problems simulated from two popular image classification datasets.", "target": ["Wir schlagen ein nichtparametrisches Bayes'sches Modell für föderiertes Lernen mit neuronalen Netzen vor.", "Verwendet das Beta-Verfahren für den föderalen neuronalen Abgleich.", "Die Arbeit befasst sich mit dem föderativen Lernen neuronaler Netze, bei dem die Daten auf mehrere Rechner verteilt sind und die Verteilung der Datenpunkte potenziell inhomogen und unausgewogen ist."]} +{"source": "We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code will be open-sourced with the camera-ready paper.", "target": ["Allgemeine Methode zum Trainieren ausdrucksstarker MCMC-Kerne, die mit tiefen neuronalen Netzen parametrisiert sind. Bei einer Zielverteilung p bietet unsere Methode einen schnell mischenden Sampler, der den Zustandsraum effizient erkunden kann.", "Er schlägt eine verallgemeinerte HMC vor, indem er den Leapfrog-Integrator mit Hilfe neuronaler Netze modifiziert, um den Sampler schnell konvergieren und mischen zu lassen. "]} +{"source": "This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. 
We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failures rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days.", "target": ["Wir zeigen, dass seltene, aber katastrophale Fehler bei Zufallstests völlig übersehen werden können, was Probleme für einen sicheren Einsatz mit sich bringt. Unser vorgeschlagener Ansatz für adversarial Tests behebt dieses Problem.", "Es wird eine Methode vorgeschlagen, mit der eine Vorhersage der Ausfallwahrscheinlichkeit für einen gelernten Agenten erlernt werden kann, was zu Vorhersagen darüber führt, welche Anfangszustände zum Ausfall eines Systems führen.", "In dieser Arbeit wird ein Ansatz für die Auswahl von Fehlerfällen für RL-Algorithmen vorgeschlagen, der auf einer Funktion basiert, die über ein neuronales Netz für Fehler gelernt wird, die während des Agententrainings auftreten.", "In dieser Arbeit wird ein kontradiktorischer Ansatz zur Identifizierung von katastrophalen Fehlern beim Reinforcement Learning vorgeschlagen."]} +{"source": "The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique. By using a neural inference network to approximate the model's posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient methods. In practice, however, VAE training often results in a degenerate local optimum known as \"posterior collapse\" where the model learns to ignore the latent variable and the approximate posterior mimics the prior. In this paper, we investigate posterior collapse from the perspective of training dynamics. We find that during the initial stages of training the inference network fails to approximate the model's true posterior, which is a moving target. As a result, the model is encouraged to ignore the latent encoding and posterior collapse occurs. Based on this observation, we propose an extremely simple modification to VAE training to reduce inference lag: depending on the model's current mutual information between latent variable and observation, we aggressively optimize the inference network before performing each model update. Despite introducing neither new model components nor significant complexity over basic VAE, our approach is able to avoid the problem of collapse that has plagued a large amount of previous work. 
Empirically, our approach outperforms strong autoregressive baselines on text and image benchmarks in terms of held-out likelihood, and is competitive with more complex techniques for avoiding collapse while being substantially faster.", "target": ["Um den posterioren Kollaps in VAEs zu bekämpfen, schlagen wir ein neuartiges, aber einfaches Trainingsverfahren vor, das das Inferenznetzwerk mit mehr Updates aggressiv optimiert. Dieses neue Trainingsverfahren mildert den posterioren Kollaps und führt zu einem besseren VAE-Modell. ", "Untersucht das Phänomen des posterioren Kollaps und zeigt, dass ein verstärktes Training des Inferenznetzes das Problem verringern und zu besseren Optima führen kann.", "Die Autoren schlagen vor, das Trainingsverfahren der VAEs nur als Lösung für den posterioren Kollaps zu ändern und das Modell und das Ziel unangetastet zu lassen."]} +{"source": "Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies. To promote these benefits, it is desired to effectively expand the scale of high-quality yet novel relational medical entity pairs that embody rich medical knowledge in a structured form. To fulfill this goal, we introduce a generative model called Conditional Relationship Variational Autoencoder (CRVAE), which can discover meaningful and novel relational medical entity pairs without the requirement of additional external knowledge. Rather than discriminatively identifying the relationship between two given medical entities in a free-text corpus, we directly model and understand medical relationships from diversely expressed medical entity pairs. The proposed model introduces the generative modeling capacity of variational autoencoder to entity pairs, and has the ability to discover new relational medical entity pairs solely based on the existing entity pairs. Beside entity pairs, relationship-enhanced entity representations are obtained as another appealing benefit of the proposed method. Both quantitative and qualitative evaluations on real-world medical datasets demonstrate the effectiveness of the proposed method in generating relational medical entity pairs that are meaningful and novel.", "target": ["Generative Entdeckung bedeutungsvoller, neuartiger Entitätspaare mit einer bestimmten medizinischen Beziehung durch reines Lernen aus den vorhandenen bedeutungsvollen Entitätspaaren, ohne dass ein zusätzlicher Textkorpus für die diskriminative Extraktion erforderlich ist.", "Präsentiert einen variationalen Autoencoder zur Generierung von Entity-Paaren anhand einer Beziehung in einem medizinischen Umfeld.", "Im medizinischen Kontext wird in diesem Beitrag das klassische Problem der \"Vervollständigung der Wissensbasis\" aus ausschließlich strukturierten Daten beschrieben."]} +{"source": "Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples . In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true . We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning . 
Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture . The code for our model is available at \\url{https://github.com/daib13/TwoStageVAE}.", "target": ["Wir analysieren die VAE-Zielfunktion genau und ziehen neue Schlussfolgerungen, die zu einfachen Verbesserungen führen.", "schlägt eine zweistufige VAE-Methode vor, um qualitativ hochwertige Beispiele zu erzeugen und Unschärfen zu vermeiden.", "In diesem Beitrag werden die Gaußschen VAEs analysiert.", "Die Arbeit liefert eine Reihe von theoretischen Ergebnissen über \"vanilla\" Gaussian Variational Auto-Encoders, die dann verwendet werden, um einen neuen Algorithmus namens \"2 stage VAEs\" zu bauen."]} +{"source": "We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters. The algorithm --- an iterative application of compressed sensing techniques for orthogonal polynomials --- requires only uniform sampling of the hyperparameters and is thus easily parallelizable.\n \n Experiments for training deep neural networks on Cifar-10 show that compared to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds significantly improved solutions, in some cases better than what is attainable by hand-tuning. In terms of overall running time (i.e., time required to sample various settings of hyperparameters plus additional computation time), we are at least an order of magnitude faster than Hyperband and Bayesian Optimization. We also outperform Random Search $8\\times$.\n \nOur method is inspired by provably-efficient algorithms for learning decision trees using the discrete Fourier transform. We obtain improved sample-complexty bounds for learning decision trees while matching state-of-the-art bounds on running time (polynomial and quasipolynomial, respectively).", "target": ["Ein Algorithmus zur Abstimmung der Hyperparameter unter Verwendung der diskreten Fourier-Analyse und des komprimierten Sensings.", "Untersucht das Problem der Optimierung von Hyperparametern unter der Annahme, dass die unbekannte Funktion approximiert werden kann, und zeigt, dass die approximative Minimierung über den booleschen Hyperwürfel durchgeführt werden kann.", "Die Arbeit untersucht die Optimierung von Hyperparametern durch die Annahme einer Struktur in der unbekannten Funktion, die Hyperparameter auf die Klassifizierungsgenauigkeit abbildet."]} +{"source": "Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data. Learning in such models is difficult, however, because exact marginalization over these combinatorial objects is intractable. In response, this paper introduces a collection of new methods for end-to-end learning in such models that approximate discrete maximum-weight matching using the continuous Sinkhorn operator. Sinkhorn iteration is attractive because it functions as a simple, easy-to-implement analog of the softmax operator. With this, we can define the Gumbel-Sinkhorn method, an extension of the Gumbel-Softmax method (Jang et al. 2016, Maddison2016 et al. 2016) to distributions over latent matchings. 
We demonstrate the effectiveness of our method by outperforming competitive baselines on a range of qualitatively different tasks: sorting numbers, solving jigsaw puzzles, and identifying neural signals in worms.", "target": ["Eine neue Methode für die Gradientenabstammung von Permutationen, mit Anwendungen für die Inferenz von latenten Übereinstimmungen und das überwachte Lernen von Permutationen mit neuronalen Netzen.", "Die Arbeit verwendet eine endliche Annäherung des Sinkhorn-Operators, um zu beschreiben, wie man ein neuronales Netz zum Lernen aus permutationsbewerteten Trainingsdaten konstruieren kann. ", "Die Arbeit schlägt eine neue Methode vor, die das diskrete Max-Gewicht für das Lernen latenter Permutationen annähert."]} +{"source": "Recent work in network quantization has substantially reduced the time and space complexity of neural network inference, enabling their deployment on embedded and mobile devices with limited computational and memory resources. However, existing quantization methods often represent all weights and activations with the same precision (bit-width). In this paper, we explore a new dimension of the design space: quantizing different layers with different bit-widths. We formulate this problem as a neural architecture search problem and propose a novel differentiable neural architecture search (DNAS) framework to efficiently explore its exponential search space with gradient-based optimization. Experiments show we surpass the state-of-the-art compression of ResNet on CIFAR-10 and ImageNet. Our quantized models with 21.1x smaller model size or 103.9x lower computational cost can still outperform baseline quantized or even full precision models.", "target": ["Ein neuartiger Suchrahmen für differenzierbare neuronale Architekturen zur gemischten Quantisierung von ConvNets.", "Die Autoren stellen eine neue Methode für die Suche nach einer neuronalen Architektur vor, die die Präzisionsquantisierung der Gewichte in jeder Schicht des neuronalen Netzes auswählt, und verwenden sie im Zusammenhang mit der Netzkompression.", "In diesem Beitrag wird ein neuer Ansatz für die Quantisierung von Netzwerken vorgestellt, bei dem verschiedene Schichten mit unterschiedlichen Bitbreiten quantisiert werden, und es wird ein neuer Suchrahmen für differenzierbare neuronale Architekturen eingeführt."]} +{"source": "The top-$k$ error is a common measure of performance in machine learning and computer vision. In practice, top-$k$ classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-$k$ classification can bring significant improvements.\n Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-$k$ optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{\\\"i}ve algorithm would require $\\mathcal{O}(\\binom{n}{k})$ operations, where $n$ is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of $\\mathcal{O}(k n)$. 
Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of $k=5$. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.", "target": ["Glättende Verlustfunktion für Top-k-Fehlerminimierung.", "Schlägt vor, den Top-k-Verlust mit tiefen Modellen zu verwenden, um das Problem der Klassenverwechslung mit ähnlichen Klassen zu lösen, die im Trainingsdatensatz sowohl vorhanden als auch nicht vorhanden sind.", "Glättet die Top-k-Verluste.", "In dieser Arbeit wird eine glatte Surrogate-Verlustfunktion für die Top-k-SVM eingeführt, um die SVM mit den tiefen neuronalen Netzen zu verbinden."]} +{"source": "Designing a molecule with desired properties is one of the biggest challenges in drug development, as it requires optimization of chemical compound structures with respect to many complex properties. To augment the compound design process we introduce Mol-CycleGAN -- a CycleGAN-based model that generates optimized compounds with a chemical scaffold of interest. Namely, given a molecule our model generates a structurally similar one with an optimized value of the considered property. We evaluate the performance of the model on selected optimization objectives related to structural properties (presence of halogen groups, number of aromatic rings) and to a physicochemical property (penalized logP). In the task of optimization of penalized logP of drug-like molecules our model significantly outperforms previous results.", "target": ["Wir stellen Mol-CycleGAN vor - ein neues generatives Modell zur Optimierung von Molekülen, um das Design von Medikamenten zu verbessern.", "Die Arbeit stellt einen Ansatz zur Optimierung molekularer Eigenschaften vor, der auf der Anwendung von CycleGANs auf variationalen Autoencodern für Moleküle basiert und eine domänenspezifische VAE namens Junction Tree VAE (JT-VAE) verwendet.", "In diesem Beitrag wird ein variationaler Autoencoder verwendet, um eine Übersetzungsfunktion zu erlernen, die von der Menge der Moleküle ohne die gewünschte Eigenschaft zur Menge der Moleküle mit der Eigenschaft führt. "]} +{"source": "Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, then the student network can be competitive to the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, only the classification task is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. 
In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.", "target": ["Wir nehmen die Gesichtserkennung als Bruchstelle und schlagen eine Modelldestillation mit Wissenstransfer von der Gesichtsklassifizierung zum Abgleich und zur Verifizierung vor.", "In diesem Beitrag wird vorgeschlagen, den Klassifikator aus dem Modell für die Gesichtsklassifizierung auf die Aufgabe der Ausrichtung und Überprüfung zu übertragen.", "Das Manuskript stellt Experimente vor, bei denen Wissen aus einem Gesichtsklassifizierungsmodell in Studentenmodelle für die Ausrichtung und Verifizierung von Gesichtern destilliert wurde."]} +{"source": "RNNs have been shown to be excellent models for sequential data and in particular for session-based user behavior. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce a novel ranking loss function tailored for RNNs in recommendation settings. The better performance of such loss over alternatives, along with further tricks and improvements described in this work, allow to achieve an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 51% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly.", "target": ["Verbesserung der sitzungsbasierten Empfehlungen mit RNNs (GRU4Rec) um 35% durch neu entwickelte Verlustfunktionen und Beispiele.", "Diese Arbeit analysiert bestehende Verlustfunktionen für sitzungsbasierte Empfehlungen und schlägt zwei neuartige Verlustfunktionen vor, die eine Gewichtung zu bestehenden rangbasierten Verlustfunktionen hinzufügen.", "Vorstellen von Änderungen an früheren Arbeiten für sitzungsbasierte Empfehlungen unter Verwendung von RNN, indem negative Beispiele nach ihrer \"Relevanz\" gewichtet werden.", "In diesem Beitrag werden die Probleme bei der Optimierung der Verlustfunktionen in GRU4Rec erörtert, Tricks zur Optimierung vorgeschlagen und eine verbesserte Version vorgeschlagen."]} +{"source": "In representational lifelong learning an agent aims to continually learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are related to previous tasks, representations should be learned in such a way that they capture the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of a new task. We develop a framework for lifelong learning in deep neural networks that is based on generalization bounds, developed within the PAC-Bayes framework. Learning takes place through the construction of a distribution over networks based on the tasks seen so far, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting a history-dependent prior for novel tasks. 
We develop a gradient-based algorithm implementing these ideas, based on minimizing an objective function motivated by generalization bounds, and demonstrate its effectiveness through numerical examples.", "target": ["Wir entwickeln einen auf der PAC-Bayes-Theorie basierenden Ansatz des lifelong Learnings für das Transferlernen, bei dem die Prioritäten angepasst werden, wenn neue Aufgaben auftreten, wodurch das Lernen neuer Aufgaben erleichtert wird.", "Eine neuartige PAC-Bayes'sche Risikogrenze, die als Zielfunktion für maschinelles Lernen mit mehreren Aufgaben dient, und ein Algorithmus zur Minimierung einer vereinfachten Version dieser Zielfunktion.", "Erweitert die bestehenden PAC-Bayes-Grenzen auf Multi-Task-Lernen, damit der Prior für verschiedene Aufgaben angepasst werden kann."]} +{"source": "Optimization algorithms for training deep models not only affects the convergence rate and stability of the training process, but are also highly related to the generalization performance of trained models. While adaptive algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in many scenarios, they often lead to worse generalization performance than SGD, when used for training deep neural networks (DNNs). In this work, we identify two problems regarding the direction and step size for updating the weight vectors of hidden units, which may degrade the generalization performance of Adam. As a solution, we propose the normalized direction-preserving Adam (ND-Adam) algorithm, which controls the update direction and step size more precisely, and thus bridges the generalization gap between Adam and SGD. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also shed some light on why certain optimization algorithms generalize better than others.", "target": ["Eine maßgeschneiderte Version von Adam für das Training von DNNs, die die Generalisierungslücke zwischen Adam und SGD schließt.", "Vorschlagen einer Variante des ADAM-Optimierungsalgorithmus, bei der die Gewichte jeder versteckten Einheit durch Batch-Normalisierung normalisiert werden.", "Erweiterung des Adam-Optimierungsalgorithmus, um die Aktualisierungsrichtung zu erhalten, indem die Lernrate für die eingehenden Gewichte einer versteckten Einheit gemeinsam unter Verwendung der L2-Norm des Gradientenvektors angepasst wird."]} +{"source": "Options in reinforcement learning allow agents to hierarchically decompose a task into subtasks, having the potential to speed up learning and planning. However, autonomously learning effective sets of options is still a major challenge in the field. In this paper we focus on the recently introduced idea of using representation learning methods to guide the option discovery process. Specifically, we look at eigenoptions, options obtained from representations that encode diffusive information flow in the environment. We extend the existing algorithms for eigenoption discovery to settings with stochastic transitions and in which handcrafted features are not available. We propose an algorithm that discovers eigenoptions while learning non-linear state representations from raw pixels. It exploits recent successes in the deep reinforcement learning literature and the equivalence between proto-value functions and the successor representation. 
We use traditional tabular domains to provide intuition about our approach and Atari 2600 games to demonstrate its potential.", "target": ["Wir zeigen, wie wir die Nachfolgerepräsentation verwenden können, um Eigenoptionen in stochastischen Domänen aus rohen Pixeln zu entdecken. Eigenoptionen sind Optionen, die gelernt werden, um in den latenten Dimensionen einer gelernten Darstellung zu navigieren.", "Erweitert die Idee der Eigenoptionen auf Bereiche mit stochastischen Übergängen und in denen Zustandsmerkmale gelernt werden.", "Zeigt die Äquivalenz zwischen Proto-Wertfunktionen und Nachfolgedarstellungen und leitet die Idee der Eigenoptionen als Mechanismus der Optionsfindung ab.", "Das Papier knüpft an frühere Arbeiten von Machado et al. (2017) an, die zeigen, wie Proto-Wert-Funktionen verwendet werden können, um Optionen, sogenannte \"Eigenoptionen\", zu definieren."]} +{"source": "One form of characterizing the expressiveness of a piecewise linear neural network is by the number of linear regions, or pieces, of the function modeled. We have observed substantial progress in this topic through lower and upper bounds on the maximum number of linear regions and a counting procedure. However, these bounds only account for the dimensions of the network and the exact counting may take a prohibitive amount of time, therefore making it infeasible to benchmark the expressiveness of networks. In this work, we approximate the number of linear regions of specific rectifier networks with an algorithm for probabilistic lower bounds of mixed-integer linear sets. In addition, we present a tighter upper bound that leverages network coefficients. We test both on trained networks. The algorithm for probabilistic lower bounds is several orders of magnitude faster than exact counting and the values reach similar orders of magnitude, hence making our approach a viable method to compare the expressiveness of such networks. The refined upper bound is particularly stronger on networks with narrow layers. ", "target": ["Wir bieten verbesserte obere Schranken für die Anzahl der linearen Regionen, die in der Netzwerkexpressivität verwendet werden, und einen hocheffizienten Algorithmus (mit exakter Zählung), um probabilistische untere Schranken für die tatsächliche Anzahl der linearen Regionen zu erhalten.", "Beitrag zur Untersuchung der Anzahl linearer Regionen in neuronalen Netzen von RELU durch Verwendung eines approximativen probabilistischen Zählalgorithmus und einer Analyse.", "Aufbauen auf früheren Arbeiten, die die Zählung linearer Regionen in tiefen neuronalen Netzen untersuchten, und verbesserern der zuvor vorgeschlagene Obergrenze durch Änderung der Dimensionalitätsbeschränkung.", "Die Arbeit befasst sich mit der Ausdruckskraft eines stückweise linearen neuronalen Netzes, das durch die Anzahl der linearen Regionen der modellierten Funktion gekennzeichnet ist, und nutzt probabilistische Algorithmen, um die Grenzen schneller zu berechnen und engere Grenzen zu beweisen."]} +{"source": "The ability to look multiple times through a series of pose-adjusted glimpses is fundamental to human vision. This critical faculty allows us to understand highly complex visual scenes. Short term memory plays an integral role in aggregating the information obtained from these glimpses and informing our interpretation of the scene. Computational models have attempted to address glimpsing and visual attention but have failed to incorporate the notion of memory. 
We introduce a novel, biologically inspired visual working memory architecture that we term the Hebb-Rosenblatt memory. We subsequently introduce a fully differentiable Short Term Attentive Working Memory model (STAWM) which uses transformational attention to learn a memory over each image it sees. The state of our Hebb-Rosenblatt memory is embedded in STAWM as the weights space of a layer. By projecting different queries through this layer we can obtain goal-oriented latent representations for tasks including classification and visual reconstruction. Our model obtains highly competitive classification performance on MNIST and CIFAR-10. As demonstrated through the CelebA dataset, to perform reconstruction the model learns to make a sequence of updates to a canvas which constitute a parts-based representation. Classification with the self supervised representation obtained from MNIST is shown to be in line with the state of the art models (none of which use a visual attention mechanism). Finally, we show that STAWM can be trained under the dual constraints of classification and reconstruction to provide an interpretable visual sketchpad which helps open the `black-box' of deep learning.", "target": ["Ein biologisch inspirierter Arbeitsspeicher, das in rekurrente visuelle Aufmerksamkeitsmodelle integriert werden kann, um den neuesten Stand der Technik zu erreichen.", "Einführung einer neuen Netzarchitektur, die sich am visuell-aufmerksamen Arbeitsspeicher orientiert, und Anwendung auf Klassifizierungsaufgaben findest, sowie Verwendung als generatives Modell.", "Die Arbeit erweitert das rekurrente Aufmerksamkeitsmodell um ein neuartiges Hebb-Rosenblatt-Arbeitsspeichermodell und erzielt konkurrenzfähige Ergebnisse auf MNIST."]} +{"source": "Generative models have been successfully applied to image style transfer and domain translation. However, there is still a wide gap in the quality of results when learning such tasks on musical audio. Furthermore, most translation models only enable one-to-one or one-to-many transfer by relying on separate encoders or decoders and complex, computationally-heavy models. In this paper, we introduce the Modulated Variational auto-Encoders (MoVE) to perform musical timbre transfer. First, we define timbre transfer as applying parts of the auditory properties of a musical instrument onto another. We show that we can achieve and improve this task by conditioning existing domain translation techniques with Feature-wise Linear Modulation (FiLM). Then, by replacing the usual adversarial translation criterion by a Maximum Mean Discrepancy (MMD) objective, we alleviate the need for an auxiliary pair of discriminative networks. This allows a faster and more stable training, along with a controllable latent space encoder. By further conditioning our system on several different instruments, we can generalize to many-to-many transfer within a single variational architecture able to perform multi-domain transfers. Our models map inputs to 3-dimensional representations, successfully translating timbre from one instrument to another and supporting sound synthesis on a reduced set of control parameters. We evaluate our method in reconstruction and generation tasks while analyzing the auditory descriptor distributions across transferred domains. 
We show that this architecture incorporates generative controls in multi-domain transfer, yet remaining rather light, fast to train and effective on small datasets.", "target": ["Die Arbeit verwendet variational Auto-Encoding und Netzwerk Konditionierung für Übertragung musikalischer Klangfarben, wir entwickeln und verallgemeinern unsere Architektur für Many-to-Many Instrumenten Transfers zusammen mit Visualisierungen und Bewertungen.", "Vorschlagen eines modulierten variationalen Auto-Encoders für die Übertragung musikalischer Klangfarben durch Ersetzen des üblichen adversarial Übersetzungskriteriums durch eine Maximum Mean Discrepancy.", "Beschreibt ein Many-to-many Modell für die Übertragung musikalischer Klangfarben, das auf den jüngsten Entwicklungen im Bereich der Übertragung von Bereichen und Stilen aufbaut.", "Vorschlag eines hybriden VAE-basierten Modells zur Übertragung von Klangfarben auf Aufnahmen von Musikinstrumenten."]} +{"source": "We study the behavior of weight-tied multilayer vanilla autoencoders under the assumption of random weights. Via an exact characterization in the limit of large dimensions, our analysis reveals interesting phase transition phenomena when the depth becomes large. This, in particular, provides quantitative answers and insights to three questions that were yet fully understood in the literature. Firstly, we provide a precise answer on how the random deep weight-tied autoencoder model performs “approximate inference” as posed by Scellier et al. (2018), and its connection to reversibility considered by several theoretical studies. Secondly, we show that deep autoencoders display a higher degree of sensitivity to perturbations in the parameters, distinct from the shallow counterparts. Thirdly, we obtain insights on pitfalls in training initialization practice, and demonstrate experimentally that it is possible to train a deep autoencoder, even with the tanh activation and a depth as large as 200 layers, without resorting to techniques such as layer-wise pre-training or batch normalization. Our analysis is not specific to any depths or any Lipschitz activations, and our analytical techniques may have broader applicability.", "target": ["Wir untersuchen das Verhalten von gewichtsgebundenen mehrschichtigen einfachen Autoencodern unter der Annahme zufälliger Gewichte. Durch eine exakte Charakterisierung im Grenzbereich großer Dimensionen zeigt unsere Analyse interessante Phasenübergangsphänomene.", "Eine theoretische Analyse von Autoencodern mit zwischen Encoder und Decoder gebundenen Gewichten (weight-tied) mittels Mean Field Analysis.", "Analyse der Leistungen von gewichteten gebundenen Autoencodern auf der Grundlage der jüngsten Fortschritte bei der Analyse hochdimensionaler statistischer Probleme und insbesondere des Message-Passing-Algorithmus.", "In diesem Beitrag werden Auto-Encoder unter verschiedenen Annahmen untersucht und es wird aufgezeigt, dass dieses Modell eines zufälligen Auto-Encoders elegant und rigoros mit eindimensionalen Gleichungen analysiert werden kann."]} +{"source": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and\n kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. 
As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.", "target": ["Inspiriert von früheren Arbeiten zu Sliced-Wasserstein-Autoencodern (SWAE) und Kernel-Glättung konstruieren wir ein neues generatives Modell - den Cramer-Wold AutoEncoder (CWAE).", "In dieser Arbeit wird eine WAE-Variante vorgeschlagen, die auf einem neuen statistischen Abstand zwischen der kodierten Datenverteilung und der latenten Prioritätsverteilung beruht.", "Vorstellen einer Variation des Wasserstein-AudoEncoders, eine neuartige regulierte Auto-Encoder Architektur, die eine spezifische Wahl der Divergenzstrafe vorschlägt.", "In diesem Beitrag wird der Cramer-Wold-Autoencoder vorgeschlagen, der den Cramer-Wold-Abstand zwischen zwei Verteilungen auf der Grundlage des Cramer-Wold-Theorems verwendet."]} +{"source": "We propose a rejection sampling scheme using the discriminator of a GAN to\n approximately correct errors in the GAN generator distribution. We show that\n under quite strict assumptions, this will allow us to recover the data distribution\n exactly. We then examine where those strict assumptions break down and design a\n practical algorithm—called Discriminator Rejection Sampling (DRS)—that can be\n used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of\n Gaussians and on the state of the art SAGAN model. On ImageNet, we train an\n improved baseline that increases the best published Inception Score from 52.52 to\n 62.36 and reduces the Frechet Inception Distance from 18.65 to 14.79. We then use\n DRS to further improve on this baseline, improving the Inception Score to 76.08\n and the FID to 13.75.", "target": ["Wir verwenden einen GAN-Diskriminator, um am Ausgang des GAN-Generators ein annäherndes Rückweisungsstichprobenverfahren durchzuführen.", " Vorschlagen eines Algorithmus für die Rückweisung von Stichproben aus dem GAN-Generator.", "In diesem Beitrag wird ein Post-Processing-Rejection-Sampling-Verfahren für GANs vorgeschlagen, das so genannte Discriminator Rejection Sampling, mit dem gute Beispiele aus dem GAN-Generator herausgefiltert werden können."]} +{"source": "The quality of the features used in visual recognition is of fundamental importance for the overall system. For a long time, low-level hand-designed feature algorithms as SIFT and HOG have obtained the best results on image recognition. Visual features have recently been extracted from trained convolutional neural networks. Despite the high-quality results, one of the main drawbacks of this approach, when compared with hand-designed features, is the training time required during the learning process. In this paper, we propose a simple and fast way to train supervised convolutional models to feature extraction while still maintaining its high-quality. 
This methodology is evaluated on different datasets and compared with state-of-the-art approaches.", "target": ["Eine einfache und schnelle Methode zur Extraktion visueller Merkmale aus Convolutional Neural Networks.", "Vorschlagen eines schnellen Wegs zum Erlernen von Convolutional Features, die später mit jedem Klassifikator verwendet werden können, indem eine geringere Anzahl von Trainings Epocs und spezifische Zeitplanverzögerungen der Lernrate verwendet werden.", "Verwenden eines Schema zum Abklingen der Lernrate, das relativ zur Anzahl der beim Training verwendeten Epochen festgelegt ist, und extrahieren der Ausgabe der vorletzten Schicht als Merkmale, um einen herkömmlichen Klassifikator zu trainieren."]} +{"source": "We develop a framework for understanding and improving recurrent neural networks (RNNs) using max-affine spline operators (MASOs). We prove that RNNs using piecewise affine and convex nonlinearities can be written as a simple piecewise affine spline operator. The resulting representation provides several new perspectives for analyzing RNNs, three of which we study in this paper. First, we show that an RNN internally partitions the input space during training and that it builds up the partition through time. Second, we show that the affine slope parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input. Third, by carefully examining the MASO RNN affine mapping, we prove that using a random initial hidden state corresponds to an explicit L2 regularization of the affine parameters, which can mollify exploding gradients and improve generalization. Extensive experiments on several datasets of various modalities demonstrate and validate each of the above conclusions. In particular, using a random initial hidden states elevates simple RNNs to near state-of-the-art performers on these datasets.", "target": ["Wir bieten neue Einblicke und Interpretationen von RNNs aus der Perspektive der max-affinen Spline-Operatoren.", "Schreibt die Gleichungen des Elman RNN in Form von sogenannten max-affine Spline-Operatoren um.", "Bereitstellen eines neuen Ansatz zum Verständnis von RNNs mit Max-Affine-Spline-Operatoren (MASO), indem sie mit stückweise affinen und konvexen MASOs Aktivierungen umgeschrieben werden.", "Die Autoren bauen auf der Max-Affine-Spline-Operator Interpretation einer umfangreichen Klasse von tiefen Netzen auf und konzentrieren sich dabei auf rekurrente neuronale Netze, bei denen Störungen im anfänglichen verborgenen Zustand als Regularisierung dient."]} +{"source": "Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. Transducing text to logical forms which can be operated on is a brittle and error-prone process . Operating directly on text by jointly learning representations and transformations thereof by means of neural architectures that lack the ability to learn and exploit general rules can be very data-inefficient and not generalise correctly . These issues are addressed by Neural Theorem Provers (NTPs) (Rocktäschel & Riedel, 2017), neuro-symbolic systems based on a continuous relaxation of Prolog’s backward chaining algorithm, where symbolic unification between atoms is replaced by a differentiable operator computing the similarity between their embedding representations . 
In this paper, we first propose Neighbourhood-approximated Neural Theorem Provers (NaNTPs) consisting of two extensions toNTPs, namely a) a method for drastically reducing the previously prohibitive time and space complexity during inference and learning, and b) an attention mechanism for improving the rule learning process, deeming them usable on real-world datasets. Then, we propose a novel approach for jointly reasoning over KB facts and textual mentions, by jointly embedding them in a shared embedding space. The proposed method is able to extract rules and provide explanations—involving both textual patterns and KB relations—from large KBs and text corpora. We show that NaNTPs perform on par with NTPs at a fraction of a cost, and can achieve competitive link prediction results on challenging large-scale datasets, including WN18, WN18RR, and FB15k-237 (with and without textual mentions) while being able to provide explanations for each prediction and extract interpretable rules.", "target": ["Wir skalieren Neuronale Theorembeweiser auf große Datensätze, verbessern den Regel-Lernprozess und erweitern ihn, um gemeinsam Schlussfolgerungen über Text und Wissensdatenbanken zu ziehen.", "Vorschlagen einer Erweiterung des Systems der neuronalen Theorembeweiser, die die wichtigsten Probleme dieser Modelle angeht, indem die zeitliche und räumliche Komplexität der Modelle verringert wird.", "Skalierung von NTPs durch approximierte Suche nach nächsten Nachbarn über Fakten und Regeln während der Vereinheitlichung und Vorschlag zur Parametrisierung von Prädikaten mit Hilfe von Aufmerksamkeit über bekannte Prädikate.", "Verbessern des zuvor vorgeschlagenen Ansatz des Neuronalen Theoremprüfers durch die Verwendung der nächste Nachbarn Suche."]} +{"source": "We investigate the methods by which a Reservoir Computing Network (RCN) learns concepts such as 'similar' and 'different' between pairs of images using a small training dataset and generalizes these concepts to previously unseen types of data. Specifically, we show that an RCN trained to identify relationships between image-pairs drawn from a subset of digits from the MNIST database or the depth maps of subset of visual scenes from a moving camera generalizes the learned transformations to images of digits unseen during training or depth maps of different visual scenes. We infer, using Principal Component Analysis, that the high dimensional reservoir states generated from an input image pair with a specific transformation converge over time to a unique relationship. Thus, as opposed to training the entire high dimensional reservoir state, the reservoir only needs to train on these unique relationships, allowing the reservoir to perform well with very few training examples. Thus, generalization of learning to unseen images is interpretable in terms of clustering of the reservoir state onto the attractor corresponding to the transformation in reservoir space. We find that RCNs can identify and generalize linear and non-linear transformations, and combinations of transformations, naturally and be a robust and effective image classifier. Additionally, RCNs perform significantly better than state of the art neural network classification techniques such as deep Siamese Neural Networks (SNNs) in generalization tasks both on the MNIST dataset and more complex depth maps of visual scenes from a moving camera. 
This work helps bridge the gap between explainable machine learning and biological learning through analogies using small datasets, and points to new directions in the investigation of learning processes.", "target": ["Verallgemeinerung der zwischen Bildpaaren erlernten Beziehungen anhand einer kleinen Anzahl von Trainingsdaten auf bisher ungesehene Bildtypen mit Hilfe eines erklärbaren dynamischen Systemmodells, Reservoir Computing, und einer biologisch plausiblen Lerntechnik auf der Grundlage von Analogien.", "Behauptet Ergebnisse der \"kombinierten Transformationen\" im Zusammenhang mit RC unter Verwendung eines Echo-State-Netzes mit Standard-Tanh-Aktivierungen zu erzielen, mit dem Unterschied, dass keine rekurrenten Gewichte trainiert werden.", "Neuartige Methode zur Klassifizierung verschiedener Verteilungen von MNIST-Daten.", "Die Arbeit verwendet ein Echo-State-Netzwerk, um zu lernen, Bildtransformationen zwischen Bildpaaren in eine von fünf Klassen zu klassifizieren."]} +{"source": "We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data. GAPF leverages recent advances in adversarial learning to allow a data holder to learn \"universal\" representations that decouple a set of sensitive attributes from the rest of the dataset. Under GAPF, finding the optimal decorrelation scheme is formulated as a constrained minimax game between a generative decorrelator and an adversary. We show that for appropriately chosen adversarial loss functions, GAPF provides privacy guarantees against strong information-theoretic adversaries and enforces demographic parity. We also evaluate the performance of GAPF on multi-dimensional Gaussian mixture models and real datasets, and show how a designer can certify that representations learned under an adversary with a fixed architecture perform well against more complex adversaries.", "target": ["Wir stellen Generative Adversarial Privacy and Fairness (GAPF) vor, ein datengesteuertes Verfahren zum Erlernen privater und fairer Repräsentationen mit zertifizierten Garantien für Privatsphäre und Fairness.", "In diesem Beitrag wird ein GAN-Modell verwendet, um einen Überblick über die mit dem Private/Fair Representation Learning (PRL) verbundenen Arbeiten zu geben.", "In diesem Beitrag wird ein kontradiktorischer Ansatz für private und faire Repräsentationen durch erlernte Verzerrung von Daten vorgestellt, der die Abhängigkeit von sensiblen Variablen minimiert, während der Grad der Verzerrung begrenzt ist.", "Die Autoren beschreiben einen Rahmen für das Erlernen einer demografischen Paritätsrepräsentation, die zum Trainieren bestimmter Klassifikatoren verwendet werden kann."]} +{"source": "Current machine learning algorithms can be easily fooled by adversarial examples. One possible solution path is to make models that use confidence thresholding to avoid making mistakes. Such models refuse to make a prediction when they are not confident of their answer. We propose to evaluate such models in terms of tradeoff curves with the goal of high success rate on clean examples and low failure rate on adversarial examples. Existing untargeted attacks developed for models that do not use confidence thresholding tend to underestimate such models' vulnerability. We propose the MaxConfidence family of attacks, which are optimal in a variety of theoretical settings, including one realistic setting: attacks against linear models. Experiments show the attack attains good results in practice. 
We show that simple defenses are able to perform well on MNIST but not on CIFAR, contributing further to previous calls that MNIST should be retired as a benchmarking dataset for adversarial robustness research. We release code for these evaluations as part of the cleverhans (Papernot et al 2018) library (ICLR reviewers should be careful not to look at who contributed these features to cleverhans to avoid de-anonymizing this submission).", "target": ["Wir stellen Metriken und einen optimalen Angriff für die Bewertung von Modellen vor, die sich mit Hilfe von Vertrauensschwellen gegen gegnerische Beispiele verteidigen.", "In dieser Arbeit wird eine Familie von Angriffen auf Vertrauensschwellenalgorithmen vorgestellt, die sich hauptsächlich auf Bewertungsmethoden konzentrieren.", "Vorschlagen einer Bewertungsmethode für Verteidigungsmodelle mit Vertrauensschwellenwerten und einen Ansatz zur Generierung von adversarial Beispielen durch Auswahl der falschen Klasse mit dem größten Vertrauen bei gezielten Angriffen.", "In dem Beitrag wird eine Bewertungsmethode für die Beurteilung von Angriffen auf Vertrauensschwellenverfahren vorgestellt und eine neue Art von Angriff vorgeschlagen."]} +{"source": "Deep learning has achieved remarkable successes in solving challenging reinforcement learning (RL) problems when dense reward function is provided. However, in sparse reward environment it still often suffers from the need to carefully shape reward function to guide policy optimization. This limits the applicability of RL in the real world since both reinforcement learning and domain-specific knowledge are required. It is therefore of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped, sparse reward signals. We propose a novel method called competitive experience replay, which efficiently supplements a sparse reward by placing learning in the context of an exploration competition between a pair of agents. Our method complements the recently proposed hindsight experience replay (HER) by inducing an automatic exploratory curriculum. We evaluate our approach on the tasks of reaching various goal locations in an ant maze and manipulating objects with a robotic arm. Each task provides only binary rewards indicating whether or not the goal is achieved. Our method asymmetrically augments these sparse rewards for a pair of agents each learning the same task, creating a competitive game designed to drive exploration. Extensive experiments demonstrate that this method leads to faster converge and improved task performance.", "target": ["Eine neuartige Methode zum Lernen mit spärlicher Belohnung unter Verwendung von adversarialer Belohnungsumetikettierung.", "Vorschlagen der Nutzung einer wettbewerbsorientierten Multi-Agenten Umgebung zur Förderung der Erkundung und zeigen, dass CER + HER > HER ~ CER.", "Vorschlag für eine neue Methode zum Lernen aus spärlichen Belohnungen in modellfreien Reinforcement Learning Umgebungen und zur Verdichtung von Belohnungen.", "Um die spärlichen Belohnungsprobleme anzugehen und die Exploration in RL-Algorithmen zu fördern, schlagen die Autoren eine Relabeling-Strategie namens Competitive Experience Reply (CER) vor."]} +{"source": "This paper proposes a neural end-to-end text-to-speech (TTS) model which can control latent attributes in the generated speech that are rarely annotated in the training data, such as speaking style, accent, background noise, and recording conditions. 
The model is formulated as a conditional generative model with two levels of hierarchical latent variables. The first level is a categorical variable, which represents attribute groups (e.g. clean/noisy) and provides interpretability. The second level, conditioned on the first, is a multivariate Gaussian variable, which characterizes specific attribute configurations (e.g. noise level, speaking rate) and enables disentangled fine-grained control over these attributes. This amounts to using a Gaussian mixture model (GMM) for the latent distribution. Extensive evaluation demonstrates its ability to control the aforementioned attributes. In particular, it is capable of consistently synthesizing high-quality clean speech regardless of the quality of the training data for the target speaker.", "target": ["Der Aufbau eines TTS-Modells mit Gaussian Mixture VAEs ermöglicht eine feinkörnige Steuerung des Sprechstils, der Geräuschbedingungen und mehr.", "Beschreibt das konditionierte GAN-Modell zur Erzeugung von sprecherkonditionierten Mel-Spektren durch Erweiterung des z-Raums entsprechend der Identifizierung.", "In dieser Arbeit wird ein zweischichtiges Modell latenter Variablen vorgeschlagen, um eine entwirrte latente Repräsentation zu erhalten, die eine feinkörnige Kontrolle über verschiedene Attribute ermöglicht.", "In diesem Beitrag wird ein Modell vorgeschlagen, das nicht-annotierte Attribute wie Sprechstil, Akzent, Hintergrundgeräusche usw. kontrollieren kann."]} +{"source": "Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%.", "target": ["Ermöglichung der Zählung von Modellen für die visuelle Beantwortung von Fragen durch die Behandlung sich überschneidender Objektvorschläge.", "In dieser Arbeit wird eine von Hand entworfene Netzwerkarchitektur auf einem Graphen von Objektvorschlägen vorgeschlagen, um eine weiche nichtmaximale Unterdrückung durchzuführen, um die Objektanzahl zu erhalten.", "Konzentriert sich auf ein Zählproblem bei der Beantwortung visueller Fragen unter Verwendung eines Aufmerksamkeitsmechanismus und schlägt einen differenzierbaren Zählkomplex vor, der explizit die Anzahl der Objekte zählt.", "Diese Arbeit befasst sich mit dem Problem der Objektzählung bei der Beantwortung visueller Fragen und schlägt mehrere Heuristiken vor, um die richtige Zählung zu finden."]} +{"source": "We propose a simple and robust training-free approach for building sentence representations. Inspired by the Gram-Schmidt Process in geometric theory, we build an orthogonal basis of the subspace spanned by a word and its surrounding context in a sentence. We model the semantic meaning of a word in a sentence based on two aspects. One is its relatedness to the word vector subspace already spanned by its contextual words. The other is its novel semantic meaning which shall be introduced as a new basis vector perpendicular to this existing subspace. 
Following this motivation, we develop an innovative method based on orthogonal basis to combine pre-trained word embeddings into sentence representation. This approach requires zero training and zero parameters, along with efficient inference performance. We evaluate our approach on 11 downstream NLP tasks. Experimental results show that our model outperforms all existing zero-training alternatives in all the tasks and it is competitive to other approaches relying on either large amounts of labelled data or prolonged training time.", "target": ["Ein einfacher und trainingsfreier Ansatz für Satzeinbettungen mit konkurrenzfähiger Leistung im Vergleich zu ausgefeilten Modellen, die entweder große Mengen an Trainingsdaten oder eine lange Trainingszeit benötigen.", "Vorstellung eines neuen trainingsfreien Verfahrens zur Erzeugung von Satzeinbettungen mit systematischer Analyse", "Vorschlag einer neuen, auf Geometrie basierenden Methode zur Satzeinbettung aus Worteinbettungsvektoren, indem die Neuheit, Bedeutung und Korpuseinzigartigkeit jedes Wortes quantifiziert wird.", "In diesem Beitrag wird die Satzeinbettung auf der Grundlage einer orthogonalen Zerlegung des aufgespannten Raums durch Worteinbettungen untersucht."]} +{"source": "In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. 
Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would.", "target": ["Wir schlagen neuartige Erweiterungen von Prototypischen Netzen vor, die durch die Möglichkeit ergänzt werden, bei der Erstellung von Prototypen unmarkierte Beispiele zu verwenden.", "Diese Arbeit ist eine Erweiterung eines prototypischen Netzwerks, das die Verwendung von unmarkierten Beispielen für das Training jeder Episode berücksichtigt.", "Untersucht das Problem der halbüberwachten Few-Shot Klassifizierung durch Erweiterung der prototypischen Netze in die Umgebung des halbüberwachten Lernens mit Beispielen aus Ablenkungsklassen.", "Erweitert das Prototypische Netzwerk auf die halb-überwachte Einstellung, indem es Prototypen unter Verwendung zugewiesener Pseudo-Labels aktualisiert, mit Distraktoren umgeht und Proben anhand des Abstands zu den ursprünglichen Prototypen gewichtet."]} +{"source": "We investigate the properties of multidimensional probability distributions in the context of latent space prior distributions of implicit generative models. Our work revolves around the phenomena arising while decoding linear interpolations between two random latent vectors -- regions of latent space in close proximity to the origin of the space are oversampled, which restricts the usability of linear interpolations as a tool to analyse the latent space. We show that the distribution mismatch can be eliminated completely by a proper choice of the latent probability distribution or using non-linear interpolations. We prove that there is a trade off between the interpolation being linear, and the latent distribution having even the most basic properties required for stable training, such as finite mean. We use the multidimensional Cauchy distribution as an example of the prior distribution, and also provide a general method of creating non-linear interpolations, that is easily applicable to a large family of commonly used latent distributions.", "target": ["Wir beweisen theoretisch, dass lineare Interpolationen für die Analyse von trainierten impliziten generativen Modellen ungeeignet sind. ", "Untersuchung des Problems, wann der lineare Interpolant zwischen zwei Zufallsvariablen der gleichen Verteilung folgt, in Bezug auf die vorherige Verteilung eines impliziten generativen Modells.", "In dieser Arbeit geht es um die Frage, wie man bei einem latenten Variablenmodell im latenten Raum interpolieren kann."]} +{"source": "Deep neural networks (DNN) have shown promising performance in computer vision. In medical imaging, encouraging results have been achieved with deep learning for applications such as segmentation, lesion detection and classification. Nearly all of the deep learning based image analysis methods work on reconstructed images, which are obtained from original acquisitions via solving inverse problems (reconstruction). The reconstruction algorithms are designed for human observers, but not necessarily optimized for DNNs which can often observe features that are incomprehensible for human eyes. Hence, it is desirable to train the DNNs directly from the original data which lie in a different domain with the images. In this paper, we proposed an end-to-end DNN for abnormality detection in medical imaging. 
To align the acquisition with the annotations made by radiologists in the image domain, a DNN was built as the unrolled version of iterative reconstruction algorithms to map the acquisitions to images, and followed by a 3D convolutional neural network (CNN) to detect the abnormality in the reconstructed images. The two networks were trained jointly in order to optimize the entire DNN for the detection task from the original acquisitions. The DNN was implemented for lung nodule detection in low-dose chest computed tomography (CT), where a numerical simulation was done to generate acquisitions from 1,018 chest CT images with radiologists' annotations. The proposed end-to-end DNN demonstrated better sensitivity and accuracy for the task compared to a two-step approach, in which the reconstruction and detection DNNs were trained separately. A significant reduction of false positive rate on suspicious lesions were observed, which is crucial for the known over-diagnosis in low-dose lung CT imaging. The images reconstructed by the proposed end-to-end network also presented enhanced details in the region of interest.", "target": ["Erkennung von Lungenknoten anhand von Projektionsdaten anstelle von Bildern.", "DNNs werden für die patchbasierte Erkennung von Lungenknoten in CT-Projektionsdaten verwendet.", "Gemeinsame Modellierung von Computertomographie-Rekonstruktion und Läsionserkennung in der Lunge durch Training der Zuordnungen von Rohsinogrammen zu Erkennungsergebnissen in einer Ende-zu-Ende Weise.", "Präsentiert ein Ende-zu-Ende Training einer CNN-Architektur, die CT-Bildsignalverarbeitung und Bildanalyse kombiniert."]} +{"source": "Deep reinforcement learning (DRL) algorithms have demonstrated progress in learning to find a goal in challenging environments. As the title of the paper by Mirowski et al. (2016) suggests, one might assume that DRL-based algorithms are able to “learn to navigate” and are thus ready to replace classical mapping and path-planning algorithms, at least in simulated environments. Yet, from experiments and analysis in this earlier work, it is not clear what strategies are used by these algorithms in navigating the mazes and finding the goal. In this paper, we pose and study this underlying question: are DRL algorithms doing some form of mapping and/or path-planning? Our experiments show that the algorithms are not memorizing the maps of mazes at the testing stage but, rather, at the training stage. Hence, the DRL algorithms fall short of qualifying as mapping or path-planning algorithms with any reasonable definition of mapping. We extend the experiments in Mirowski et al. (2016) by separating the set of training and testing maps and by a more ablative coverage of the space of experiments. Our systematic experiments show that the NavA3C-D1-D2-L algorithm, when trained and tested on the same maps, is able to choose the shorter paths to the goal. 
However, when tested on unseen maps the algorithm utilizes a wall-following strategy to find the goal without doing any mapping or path planning.", "target": ["Wir evaluieren quantitativ und qualitativ tiefgehende, auf Reinforcement Learning basierende Navigationsmethoden unter einer Vielzahl von Bedingungen, um die Frage zu beantworten, inwieweit sie in der Lage sind, klassische Pfadplaner und Zordnungs Algorithmen zu ersetzen.", "Evaluierung eines Deep RL-basierten Modells in Trainingslabyrinthen durch Messung der wiederholten Latenzzeit zum Ziel und Vergleich mit dem kürzesten Weg."]} +{"source": "In many robotic applications, it is crucial to maintain a belief about the state of \n a system, like the location of a robot or the pose of an object.\n These state estimates serve as input for planning and decision making and \n provide feedback during task execution. \n Recursive Bayesian Filtering algorithms address the state estimation problem,\n but they require a model of the process dynamics and the sensory observations as well as \n noise estimates that quantify the accuracy of these models. \n Recently, multiple works have demonstrated that the process and sensor models can be \n learned by end-to-end training through differentiable versions of Recursive Filtering methods.\n However, even if the predictive models are known, finding suitable noise models \n remains challenging. Therefore, many practical applications rely on very simplistic noise \n models. \n Our hypothesis is that end-to-end training through differentiable Bayesian \n Filters enables us to learn more complex heteroscedastic noise models for\n the system dynamics. We evaluate learning such models with different types of \n filtering algorithms and on two different robotic tasks. Our experiments show that especially \n for sampling-based filters like the Particle Filter, learning heteroscedastic noise \n models can drastically improve the tracking performance in comparison to using \n constant noise models.", "target": ["Wir evaluieren das Lernen heteroskedastischer Rauschmodelle mit verschiedenen Differentiable Bayes Filtern.", "Vorschlag, heteroskedastische Rauschmodelle aus Daten zu lernen, indem die Vorhersagewahrscheinlichkeit von Ende zu Ende durch differenzierbare Bayes'sche Filter und zwei verschiedene Versionen des unscented Kalman Filters optimiert wird.", "Überarbeitung der Bayes-Filter und Bewertung des Nutzens des Trainings von Beobachtungs- und Prozessrauschmodellen bei gleichzeitiger Beibehaltung aller anderen Modelle.", "In diesem Beitrag wird eine Methode zum Erlernen und Verwenden von zustands- und beobachtungsabhängigem Störungen in herkömmlichen Bayes'schen Filteralgorithmen vorgestellt. Der Ansatz besteht darin, ein neuronales Netzmodell zu konstruieren, das als Eingabe die rohen Beobachtungsdaten nimmt und eine kompakte Darstellung und eine zugehörige diagonale Kovarianz erzeugt."]} +{"source": "Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning. These models are highly sample efficient as related concepts in the graph structure share statistical strength allowing generalization to new classes when faced with a lack of data. However, we find that the extensive use of Laplacian smoothing at each layer in current approaches can easily dilute the knowledge from distant nodes and consequently decrease the performance in zero-shot learning. 
In order to still enjoy the benefit brought by the graph structure while preventing the dilution of knowledge from distant nodes, we propose a Dense Graph Propagation (DGP) module with carefully designed direct links among distant nodes. DGP allows us to exploit the hierarchical graph structure of the knowledge graph through additional connections. These connections are added based on a node's relationship to its ancestors and descendants. A weighting scheme is further used to weigh their contribution depending on the distance to the node. Combined with finetuning of the representations in a two-stage training approach our method outperforms state-of-the-art zero-shot learning approaches.", "target": ["Wir überdenken die Art und Weise, wie Informationen im Wissensgraphen effizienter genutzt werden können, um die Leistung bei der Zero-Shot Learning Aufgabe zu verbessern, und schlagen zu diesem Zweck ein Dense Graph Propagation (DGP) Modul vor.", "Die Autoren schlagen eine Lösung für das Problem der Überglättung in Graph Convolutional Networks vor, indem sie eine dichte Ausbreitung zwischen allen verwandten Knoten, gewichtet nach dem gegenseitigen Abstand, ermöglichen.", "Vorschlagen eines neuartigen Graph Convolutional Networks, um das Problem der Zero-Shot Klassifizierung anzugehen, indem relationale Strukturen zwischen Klassen als Input für Graph Convolutional Networks verwendet werden, um Klassifizierer für ungesehene Klassen zu lernen."]} +{"source": "In this paper, we propose a capsule-based neural network model to solve the semantic segmentation problem. By taking advantage of the extractable part-whole dependencies available in capsule layers, we derive the probabilities of the class labels for individual capsules through a recursive, layer-by-layer procedure. We model this procedure as a traceback pipeline and take it as a central piece to build an end-to-end segmentation network. Under the proposed framework, image-level class labels and object boundaries are jointly sought in an explicit manner, which poses a significant advantage over the state-of-the-art fully convolutional network (FCN) solutions. Experiments conducted on modified MNIST and neuroimages demonstrate that our model considerably enhance the segmentation performance compared to the leading FCN variant.\n", "target": ["Eine kapselbasierte semantische Segmentierung, bei der die Wahrscheinlichkeiten der Klassenbezeichnungen durch die Kapselpipeline zurückverfolgt werden. ", "Die Autoren stellen einen Rückverfolgungsmechanismus vor, um die unterste Ebene der Kapseln mit ihren jeweiligen Klassen zu verbinden.", "Vorschlagen einer Rückverfolgungsschicht für Kapselnetze, um eine semantische Segmentierung vorzunehmen, und ausdrückliches nutzen der Teil-Ganzes-Beziehung in den Kapselschichten.", "Vorschlagen einer Rückverfolgungsmethode, die auf dem CapsNet-Konzept von Sabour basiert, um parallel zur Klassifizierung eine semantische Segmentierung durchzuführen."]} +{"source": "Studying the evolution of information theoretic quantities during Stochastic Gradient Descent (SGD) learning of Artificial Neural Networks (ANNs) has gained popularity in recent years. \n Nevertheless, these type of experiments require estimating mutual information and entropy which becomes intractable for moderately large problems. In this work we propose a framework for understanding SGD learning in the information plane which consists of observing entropy and conditional entropy of the output labels of ANN. 
Through experimental results and theoretical justifications it is shown that, under some assumptions, the SGD learning trajectories appear to be similar for different ANN architectures. First, the SGD learning is modeled as a Hidden Markov Process (HMP) whose entropy tends to increase to the maximum. Then, it is shown that the SGD learning trajectory appears to move close to the shortest path between the initial and final joint distributions in the space of probability measures equipped with the total variation metric. Furthermore, it is shown that the trajectory of learning in the information plane can provide an alternative for observing the learning process, with potentially richer information about the learning than the trajectories in training and test error.", "target": ["Wir betrachten SGD als eine Trajektorie im Raum der Wahrscheinlichkeitsmaße, zeigen ihre Verbindung zu Markov-Prozessen, schlagen ein einfaches Markov-Modell des SGD-Lernens vor und vergleichen es experimentell mit SGD unter Verwendung informationstheoretischer Größen. ", "Konstruiert eine Markov-Kette, die einem verkürzten Pfad in der TV-Metrik auf P folgt, und zeigt, dass die Trajektorien von SGD und \\alpha-SMLC eine ähnliche bedingte Entropie aufweisen.", "Untersuchung des Verlaufs von H(\\hat{y}) gegenüber H(\\hat{y}|y) auf der Informationsebene für stochastische Gradientenabstiegsmethoden zum Training neuronaler Netze.", "Beschreibt SGD unter dem Gesichtspunkt der Verteilung p(y',y), wobei y ein (möglicherweise verfälschtes) wahres Klassenzeichen und y' eine Modellvorhersage ist."]} +{"source": "Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model, even a simple modification of the underlying dynamical system requires significant physical intuition. This paper presents the first meta-learning algorithm that allows automated design for the underlying continuous dynamics of an SG-MCMC sampler. The learned sampler generalizes Hamiltonian dynamics with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of energy landscapes. Experiments validate the proposed approach on Bayesian fully connected neural network, Bayesian convolutional neural network and Bayesian recurrent neural network tasks, showing that the learned sampler outperforms generic, hand-designed SG-MCMC algorithms, and generalizes to different datasets and larger architectures.", "target": ["In diesem Beitrag wird eine Methode zur Automatisierung des Entwurfs von stochastischen Gradienten-MCMC-Vorschlägen unter Verwendung eines Meta-Lernansatzes vorgeschlagen. ", "Stellt einen Meta-Lernansatz vor, um automatisch MCMC-Sampler auf der Grundlage der Hamilton'schen Dynamik zu entwickeln, die bei Problemen, die den Trainingsproblemen ähneln, schneller mischen.", "Parametrisierung von Diffusions- und Curl-Matrizen durch neuronale Netze und Meta-Lernen und Optimierung eines sg-mcmc-Algorithmus. "]} +{"source": "We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. 
We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function's components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.", "target": ["Techniken zur Bewertung der Bildqualität verbessern das Training und die Bewertung energiebasierter generativer adversarischer Netzwerke.", "Vorschlagen einer energiebasierten Formulierung des BEGAN-Modells und modifizieren des Modells, so dass es einen auf der Bewertung der Bildqualität basierenden Begriff enthält.", "Schlägt einige neue Energiefunktionen im BEGAN (boundary equilibrium GAN framework) vor, darunter l_1 score, Gradient magnitude similarity score und chrominance score."]} +{"source": "Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions. Its performance depends crucially on a damping coefficient. Largecamping coefficients can potentially deliver much larger speedups, but are prone to oscillations and instability; hence one typically resorts to small values such as 0.5 or 0.9. We propose Aggregated Momentum (AggMo), a variant of momentum which combines multiple velocity vectors with different damping coefficients. AggMo is trivial to implement, but significantly dampens oscillations, enabling it to remain stable even for aggressive damping coefficients such as 0.999. We reinterpret Nesterov's accelerated gradient descent as a special case of AggMo and analyze rates of convergence for quadratic objectives. Empirically, we find that AggMo is a suitable drop-in replacement for other momentum methods, and frequently delivers faster convergence with little to no tuning.", "target": ["Wir stellen eine einfache Variante der Momentum-Optimierung vor, die in der Lage ist, klassisches Momentum, Nesterov und Adam bei Deep-Learning-Aufgaben mit minimalem Hyperparameter-Tuning zu übertreffen.", "Einführung einer Impulsvariante, bei der mehrere Geschwindigkeiten mit unterschiedlichen Dämpfungskoeffizienten zusammengefasst werden, was die Schwingungen deutlich verringert.", "Vorschlag einer aggregierten Impulsmethode für die gradientenbasierte Optimierung durch Verwendung mehrerer Geschwindigkeitsvektoren mit unterschiedlichen Dämpfungsfaktoren anstelle eines einzelnen Geschwindigkeitsvektors zur Verbesserung der Stabilität.", "Die Autoren kombinieren mehrere Aktualisierungsschritte miteinander, um ein aggregiertes Momentum zu erreichen, und zeigen, dass es stabiler ist als andere Momentum-Methoden."]} +{"source": "Recurrent Neural Networks architectures excel at processing sequences by\n modelling dependencies over different timescales. The recently introduced\n Recurrent Weighted Average (RWA) unit captures long term dependencies\n far better than an LSTM on several challenging tasks. The RWA achieves\n this by applying attention to each input and computing a weighted average\n over the full history of its computations. 
Unfortunately, the RWA cannot\n change the attention it has assigned to previous timesteps, and so struggles\n with carrying out consecutive tasks or tasks with changing requirements.\n We present the Recurrent Discounted Attention (RDA) unit that builds on\n the RWA by additionally allowing the discounting of the past.\n We empirically compare our model to RWA, LSTM and GRU units on\n several challenging tasks. On tasks with a single output the RWA, RDA and\n GRU units learn much quicker than the LSTM and with better performance.\n On the multiple sequence copy task our RDA unit learns the task three\n times as quickly as the LSTM or GRU units while the RWA fails to learn at\n all. On the Wikipedia character prediction task the LSTM performs best\n but it followed closely by our RDA unit. Overall our RDA unit performs\n well and is sample efficient on a large variety of sequence tasks.", "target": ["Wir führen die Recurrent Discounted Unit ein, die die Aufmerksamkeit in linearer Zeit auf eine beliebig lange Sequenz anwendet.", "In diesem Papier wird die Recurrent Discounted Attention (RDA) vorgeschlagen, eine Erweiterung des Recurrent Weighted Average (RWA) durch Hinzufügen eines Diskontierungsfaktors.", "Erweitert den rekurrenten Gewichtsdurchschnitt, um die Beschränkungen der ursprünglichen Methode zu überwinden und gleichzeitig ihre Vorteile beizubehalten, und schlägt die Methode vor, Elman-Netze als Basis-RNN zu verwenden."]} +{"source": "Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layers. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. It means that each object is represented by a zero-mean distribution in the space of the activations. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than more flexible conventional parameterizations where the mean is being learned.", "target": ["Es ist möglich, eine null-zentrierte Gauß-Verteilung über die Gewichte eines neuronalen Netzes zu lernen, indem man nur die Varianzen lernt, und das funktioniert erstaunlich gut.", "Diese Arbeit untersucht die Auswirkungen des Mittelwerts des Variationsposteriores und schlägt eine Varianzschicht vor, die nur die Varianz zur Speicherung von Informationen verwendet.", "Studien über neuronale Netze mit Varianz, die das Posterior von neuronalen Netzen nach Bayes mit Gaußverteilungen mit Null-Mittelwert approximieren."]} +{"source": "Graph Convolutional Networks (GCNs) are a recently proposed architecture which has had success in semi-supervised learning on graph-structured data. At the same time, unsupervised learning of graph embeddings has benefited from the information contained in random walks. In this paper we propose a model, Network of GCNs (N-GCN), which marries these two lines of work. 
At its core, N-GCN trains multiple instances of GCNs over node pairs discovered at different distances in random walks, and learns a combination of the instance outputs which optimizes the classification objective. Our experiments show that our proposed N-GCN model achieves state-of-the-art performance on all of the challenging node classification tasks we consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method has other desirable properties, including generalization to recently proposed semi-supervised learning methods such as GraphSAGE, allowing us to propose N-SAGE, and resilience to adversarial input perturbations.", "target": ["Wir erstellen ein Netzwerk von Graph Convolution Networks, wobei wir jedes mit einer anderen Potenz der Adjazenzmatrix füttern und alle ihre Repräsentationen zu einem Klassifizierungs-Subnetzwerk kombinieren und so den neuesten Stand der Technik bei der halbüberwachten Knotenklassifizierung erreichen.", "Vorschlagen eines neuen Netzwerks von GCNs mit zwei Ansätzen: eine vollständig verbundene Schicht auf gestapelten Merkmalen und einen Aufmerksamkeitsmechanismus, der skalare Gewichte pro GCN verwendet.", "Stellt ein Netz von Graph Convolutional Networks vor, das mit Hilfe von Random-Walk-Statistiken Informationen von nahen und entfernten Nachbarn im Graphen extrahiert."]} +{"source": "Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers often with little or no drop in classification accuracy. However most of the existing pruning schemes either have to be applied during training or require a costly retraining procedure after pruning to regain classification accuracy. In this paper we propose a cheap pruning algorithm based on difference of convex (DC) optimisation. We also provide theoretical analysis for the growth in the Generalisation Error (GE) of the new pruned network. Our method can be used with any convex regulariser and allows for a controlled degradation in classification accuracy while being orders of magnitude faster than competing approaches. Experiments on common feedforward neural networks show that for sparsity levels above 90% our method achieves 10% higher classification accuracy compared to Hard Thresholding.", "target": ["Ein schneller Pruning Algroithmus für vollständig verbundene DNN-Schichten mit theoretischer Analyse der Verschlechterung des Generalisierungsfehlers.", "Stellt einen günstigen Pruning Algorithmus für dichte Schichten von DNNs vor.", "Schlägt eine Lösung für das Problem des Pruning von DNNs vor, indem die Net-trim Zielfunktion als eine Difference of convex (DC) Funktion dargestellt wird."]} +{"source": "Action segmentation as a milestone towards building automatic systems to understand untrimmed videos has received considerable attention in the recent years. It is typically being modeled as a sequence labeling problem but contains intrinsic and sufficient differences than text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. 
The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.", "target": ["Wir schlagen ein neues hybrides temporales Netzwerk vor, das die beste Leistung bei der Segmentierung von Videoaktionen in drei öffentlichen Datensätzen erzielt.", "Erörtert das Problem der Segmentierung von Handlungen in langen Videos, die bis zu 10 Minuten lang sein können, unter Verwendung einer zeitlichen Convolutional Encoder-Decoder Architektur.", "Schlägt eine Kombination aus temporalem Convolutional und rekurrentem Netzwerk für die Segmentierung von Videoaktionen vor."]} +{"source": "Convolutional Neural Networks (CNNs) become deeper and deeper in recent years, making the study of model acceleration imperative. It is a common practice to employ a shallow network, called student, to learn from a deep one, which is termed as teacher. Prior work made many attempts to transfer different types of knowledge from teacher to student, however, there are two problems remaining unsolved. Firstly, the knowledge used by existing methods is highly dependent on task and dataset, limiting their applications. Secondly, there lacks an effective training scheme for the transfer process, leading to degradation of performance. In this work, we argue that feature is the most important knowledge from teacher. It is sufficient for student to just learn good features regardless of the target task. From this discovery, we further present an efficient learning strategy to mimic features stage by stage. Extensive experiments demonstrate the importance of features and show that the proposed approach significantly narrows down the gap between student and teacher, outperforming the state-of-the-art methods.\n", "target": ["In dieser Arbeit wird vorgeschlagen, Wissen von einem tiefen Modell auf ein oberflächliches Modell zu übertragen, indem Merkmale Schritt für Schritt nachgeahmt werden.", "Erläutert eine Methode zur stufenweisen Wissensvermittlung unter Verwendung verschiedener Netzstrukturen.", "In diesem Beitrag wird vorgeschlagen, ein Netz in mehrere Teile zu unterteilen und jeden Teil nacheinander zu destillieren, um die Destillationsleistung in tiefen Lehrernetzen zu verbessern."]} +{"source": "We augment adversarial training (AT) with worst case adversarial training\n (WCAT) which improves adversarial robustness by 11% over the current state-\n of-the-art result in the `2-norm on CIFAR-10. We interpret adversarial training as\n Total Variation Regularization, which is a fundamental tool in mathematical im-\n age processing, and WCAT as Lipschitz regularization, which appears in Image\n Inpainting. 
We obtain verifiable worst and average case robustness guarantees,\n based on the expected and maximum values of the norm of the gradient of the\n loss.", "target": ["Verbesserungen in der Robustheit von adversarialem Training sowie nachweisbare Robustheitsgarantien werden durch die Ergänzung von adversarialem Training mit einer nachvollziehbaren Lipschitz-Regularisierung erreicht.", "Untersucht die Erweiterung des Trainingsverlustes mit einem zusätzlichen Gradienten-Regularisierungsterm, um die Robustheit der Modelle gegenüber ungünstigen Beispielen zu verbessern.", "Durch einen Trick wird der adversarial Verlust durch einen vereinfacht, bei dem die gegnerische Störung in geschlossener Form erscheint."]} +{"source": "The task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given \\{\\textit{passage, question}\\} pair and select one of the $n$ given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \\textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \\textit{elimination} and \\textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \\textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the \\{\\textit{document, question}\\} pair and if so it tries to make the document representation orthogonal to this eliminatedd option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \\textit{elimination-selection} based method with a \\textit{selection} based method gives us an improvement of 7\\% (relative) over the best reported performance on this dataset. \n", "target": ["Ein Modell, das Eliminierung und Auswahl zur Beantwortung von Multiple-Choice-Fragen kombiniert.", "Erläutert den Gated Attention Reader und fügt Gates hinzu, die auf der Eliminierung von Antworten beim Multiple-Choice-Leseverständnis basieren.", "Diese Arbeit schlägt die Verwendung eines Eliminationsgatters in Modellarchitekturen für Leseverständnisaufgaben vor, erzielt aber keine State-of-the-Art-Ergebnisse.", "In diesem Beitrag wird ein neues Modell für das Leseverständnis mit mehreren Auswahlmöglichkeiten vorgestellt, das auf der Idee basiert, dass einige Optionen eliminiert werden sollten, um bessere Darstellungen der Passagen/Fragen zu erhalten."]} +{"source": "Humans are capable of attributing latent mental contents such as beliefs, or intentions to others. The social skill is critical in everyday life to reason about the potential consequences of their behaviors so as to plan ahead. 
It is known that humans use this reasoning ability recursively, i.e. considering what others believe about their own beliefs. In this paper, we start from level-$1$ recursion and introduce a probabilistic recursive reasoning (PR2) framework for multi-agent reinforcement learning. Our hypothesis is that it is beneficial for each agent to account for how the opponents would react to its future behaviors. Under the PR2 framework, we adopt variational Bayes methods to approximate the opponents' conditional policy, to which each agent finds the best response and then improve their own policy. We develop decentralized-training-decentralized-execution algorithms, PR2-Q and PR2-Actor-Critic, that are proved to converge in the self-play scenario when there is one Nash equilibrium. Our methods are tested on both the matrix game and the differential game, which have a non-trivial equilibrium where common gradient-based methods fail to converge. Our experiments show that it is critical to reason about how the opponents believe about what the agent believes. We expect our work to contribute a new idea of modeling the opponents to the multi-agent reinforcement learning community. \n", "target": ["Wir haben ein neuartiges probabilistisches rekursives Schlussfolgerungsmodell (PR2) für Multi-Agenten Aufgaben mit tiefem Reinforcement Learning vorgeschlagen.", "Vorschlag eines neuen Ansatzes für vollständig dezentralisiertes Training in Multi-Agenten Reinforcement Learning.", "Befasst sich mit dem Problem, RL-Agenten mit rekursiven Argumentationsfähigkeiten in einem Multi-Agenten Umfeld auszustatten, basierend auf der Hypothese, dass rekursive Argumentation für sie vorteilhaft ist, um zu nicht-trivialen Gleichgewichten zu konvergieren.", "Die Arbeit stellt eine dezentralisierte Trainingsmethode für Multi-Agenten Reinforcement Learning vor, bei der die Agenten die Strategien anderer Agenten ableiten und die abgeleiteten Modelle zur Entscheidungsfindung verwenden. "]} +{"source": "Due to the substantial computational cost, training state-of-the-art deep neural networks for large-scale datasets often requires distributed training using multiple computation workers. However, by nature, workers need to frequently communicate gradients, causing severe bottlenecks, especially on lower bandwidth connections. A few methods have been proposed to compress gradient for efficient communication, but they either suffer a low compression ratio or significantly harm the resulting model accuracy, particularly when applied to convolutional neural networks. To address these issues, we propose a method to reduce the communication overhead of distributed deep learning. Our key observation is that gradient updates can be delayed until an unambiguous (high amplitude, low variance) gradient has been calculated. We also present an efficient algorithm to compute the variance and prove that it can be obtained with negligible additional cost. We experimentally show that our method can achieve very high compression ratio while maintaining the result model accuracy. 
We also analyze the efficiency using computation and communication cost models and provide the evidence that this method enables distributed deep learning for many scenarios with commodity environments.", "target": ["Ein neuer Algorithmus zur Reduzierung des Kommunikations Overheads bei verteiltem Deep Learning durch Unterscheidung von eindeutigen Gradienten.", "Vorschlag einer varianzbasierten Gradientenkomprimierung zur Reduzierung des Kommunikationsaufwands beim verteilten Deep Learning.", "Schlägt eine neue Methode zur Komprimierung von Gradientenaktualisierungen für verteilte SGD vor, um die Gesamtausführung zu beschleunigen.", "Einführung einer varianzbasierten Gradientenkompressionsmethode für effizientes verteiltes Training neuronaler Netze und Messung der Mehrdeutigkeit."]} +{"source": "In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.", "target": ["Ein neues unbeaufsichtigtes Verfahren zur Anpassung von tiefen Bereichen, das Korrelationsabgleich und Entropieminimierung effizient vereint.", "Verbessert den Korrelationsabgleichsansatz zur Domänenanpassung, indem der euklidische Abstand durch den geodätischen Log-Euklidischen Abstand zwischen zwei Kovarianzmatrizen ersetzt wird und die Ausgleichskosten automatisch anhand der Entropie auf der Zieldomäne ausgewählt werden.", "Vorschlag für einen Korrelationsabgleich mit minimaler Entropie, einen unüberwachten Algorithmus zur Bereichsanpassung, der Entropieminimierung und Korrelationsabgleichsmethoden miteinander verbindet."]} +{"source": "Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, \"Conceptor-Aided Backprop\" (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. 
On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.", "target": ["Wir schlagen eine Variante des Backpropagation-Algorithmus vor, bei der die Gradienten durch Konzeptoren gegen eine Verschlechterung der zuvor gelernten Aufgaben abgeschirmt werden.", "In diesem Beitrag wird der Begriff der Konzeptoren, eine Art Regularisierer, verwendet, um das Vergessen beim kontinuierlichen Lernen beim Training neuronaler Netze für sequenzielle Aufgaben zu verhindern.", "Es wird eine Methode zum Erlernen neuer Aufgaben vorgestellt, ohne dass frühere Aufgaben beeinträchtigt werden, indem Konzeptoren verwendet werden."]} +{"source": "Recent advances in neural Sequence-to-Sequence (Seq2Seq) models reveal a purely data-driven approach to the response generation task. Despite its diverse variants and applications, the existing Seq2Seq models are prone to producing short and generic replies, which blocks such neural network architectures from being utilized in practical open-domain response generation tasks. In this research, we analyze this critical issue from the perspective of the optimization goal of models and the specific characteristics of human-to-human conversational corpora. Our analysis is conducted by decomposing the goal of Neural Response Generation (NRG) into the optimizations of word selection and ordering. It can be derived from the decomposing that Seq2Seq based NRG models naturally tend to select common words to compose responses, and ignore the semantic of queries in word ordering. On the basis of the analysis, we propose a max-marginal ranking regularization term to avoid Seq2Seq models from producing the generic and uninformative responses. The empirical experiments on benchmarks with several metrics have validated our analysis and proposed methodology.", "target": ["Analysieren Sie den Grund dafür, dass generative Modelle für neuronale Reaktionen universelle Antworten bevorzugen; schlagen Sie eine Methode vor, um dies zu vermeiden.", "Untersucht das Problem der universellen Antworten, das die neuronalen Seq2Seq-Generierungsmodelle plagt.", "Die Arbeit untersucht die Verbesserung der neuronalen Antwort Generierungsaufgabe durch die Vernachlässigung der gemeinsamen Antworten mit Modifikation der Verlustfunktion und Präsentation der gemeinsamen/universellen Antworten während der Trainingsphase."]} +{"source": "The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present code2seq: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding.\n We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as general state-of-the-art NMT models. An interactive online demo of our model is available at http://code2seq.org. 
Our code, data and trained models are available at http://github.com/tech-srl/code2seq.", "target": ["Wir nutzen die syntaktische Struktur des Quellcodes, um natürlichsprachliche Sequenzen zu erzeugen.", "Vorstellen einer Methode zur Erzeugung von Sequenzen aus Code durch Parsing und Erstellung eines Syntaxbaums.", "Diese Arbeit stellt eine AST-basierte Kodierung für Programmiercode vor und zeigt deren Effektivität in den Aufgaben der extremen Codezusammenfassung und des Code Captioning.", "In diesem Beitrag wird ein neues Code-zu-Sequenz-Modell vorgestellt, das die syntaktische Struktur von Programmiersprachen nutzt, um Quellcode-Schnipsel zu kodieren und sie anschließend in natürliche Sprache zu dekodieren."]} +{"source": "We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. The proposed mechanism reuses CNN feature activations to find the most informative parts of the image at different depths with the help of gating mechanisms and without part annotations. Thus, it can be used to augment any layer of a CNN to extract low- and high-level local information to be more discriminative. \n\n Differently, from other approaches, the mechanism we propose just needs a single pass through the input and it can be trained end-to-end through SGD. As a consequence, the proposed mechanism is modular, architecture-independent, easy to implement, and faster than iterative approaches.\n\n Experiments show that, when augmented with our approach, Wide Residual Networks systematically achieve superior performance on each of five different fine-grained recognition datasets: the Adience age and gender recognition benchmark, Caltech-UCSD Birds-200-2011, Stanford Dogs, Stanford Cars, and UEC Food-100, obtaining competitive and state-of-the-art scores.", "target": ["Wir verbessern CNNs mit einem neuartigen Aufmerksamkeitsmechanismus für feinkörnige Erkennung. Bei 5 Datensätzen wird eine überragende Leistung erzielt.", "Beschreibt einen neuartigen Aufmerksamkeitsmechanismus, der auf die feinkörnige Erkennung angewandt wird und die Erkennungsgenauigkeit der Grundlinie konsequent verbessert.", "In diesem Beitrag wird ein Feed-Forward Attention Mechanismus für die feinkörnige Bildklassifizierung vorgeschlagen.", "In diesem Beitrag wird ein interessanter Aufmerksamkeitsmechanismus für die feinkörnige Bildklassifizierung vorgestellt."]} +{"source": "Most existing GANs architectures that generate images use transposed convolution or resize-convolution as their upsampling algorithm from lower to higher resolution feature maps in the generator. We argue that this kind of fixed operation is problematic for GANs to model objects that have very different visual appearances. We propose a novel adaptive convolution method that learns the upsampling algorithm based on the local context at each location to address this problem. We modify a baseline GANs architecture by replacing normal convolutions with adaptive convolutions in the generator. Experiments on CIFAR-10 dataset show that our modified models improve the baseline model by a large margin. 
Furthermore, our models achieve state-of-the-art performance on CIFAR-10 and STL-10 datasets in the unsupervised setting.", "target": ["Wir ersetzen normale Convolutions durch adaptive Convolutions, um GAN-Generatoren zu verbessern.", "Es wird vorgeschlagen, die Convolutions im Generator durch einen adaptiven Convolution Block zu ersetzen, der lernt, Convolution Gewichte und Verzerrungen von Upsampling-Operationen adaptiv pro Pixelposition zu erzeugen.", "Verwendet Adaptive Convolution im Kontext von GANs mit einem Block namens AdaConvBlock, der die reguläre Convolution ersetzt. Dies gibt mehr lokalen Kontext pro Kernelgewicht, so dass es lokal flexible Objekte erzeugen kann."]} +{"source": "Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing $6\\times 10^{11}$ tokens and based on the Common Crawl repository of web data.", "target": ["Wir führen groß angelegte Experimente durch, um zu zeigen, dass eine einfache Online-Variante der Destillation uns helfen kann, das Training verteilter neuronaler Netze auf mehr Maschinen zu skalieren.", "Vorschlag einer Methode zur Skalierung des verteilten Trainings, die über die derzeitigen Grenzen des stochastischen Mini-Batch Gradientenabstiegs hinausgeht.", "Vorschlag für eine Online-Destillationsmethode, die so genannte Co-Destillation, bei der zwei verschiedene Modelle so trainiert werden, dass sie mit den Vorhersagen des anderen Modells übereinstimmen, um den eigenen Verlust zu minimieren.", "Einführung einer Online-Destillationstechnik zur Beschleunigung herkömmlicher Algorithmen für das Training groß angelegter verteilter neuronaler Netze."]} +{"source": "Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis. Despite their popularity, even efficient implementations have proven to be computationally expensive to train at a large-scale, especially in streaming settings. In this paper, we propose a novel coreset construction algorithm for efficiently generating compact representations of massive data sets to speed up SVM training. 
A coreset is a weighted subset of the original data points such that SVMs trained on the coreset are provably competitive with those trained on the original (massive) data set. We provide both lower and upper bounds on the number of samples required to obtain accurate approximations to the SVM problem as a function of the complexity of the input data. Our analysis also establishes sufficient conditions on the existence of sufficiently compact and representative coresets for the SVM problem. We empirically evaluate the practical effectiveness of our algorithm against synthetic and real-world data sets.", "target": ["Wir stellen einen Algorithmus zur Beschleunigung des SVM-Trainings auf massiven Datensätzen vor, indem wir kompakte Darstellungen konstruieren, die eine effiziente und nachweislich ungefähre Inferenz ermöglichen.", "Untersucht den Ansatz der Kernmenge für SVM und zielt darauf ab, eine kleine Menge gewichteter Punkte so zu wählen, dass sich die Verlustfunktion über die Punkte nachweislich derjenigen über den gesamten Datensatz annähert.", "Die Arbeit schlägt eine auf Wichtigkeitssampling basierende Coreset-Konstruktion zur Darstellung großer Trainingsdaten für SVMs vor."]} +{"source": "The sign stochastic gradient descent method (signSGD) utilizes only the sign of the stochastic gradient in its updates. Since signSGD carries out one-bit quantization of the gradients, it is extremely practical for distributed optimization where gradients need to be aggregated from different processors. For the first time, we establish convergence rates for signSGD on general non-convex functions under transparent conditions. We show that the rate of signSGD to reach first-order critical points matches that of SGD in terms of number of stochastic gradient calls, up to roughly a linear factor in the dimension. We carry out simple experiments to explore the behaviour of sign gradient descent (without the stochasticity) close to saddle points and show that it often helps completely avoid them without using either stochasticity or curvature information.", "target": ["Wir beweisen eine nicht-konvexe Konvergenzrate für die stochastische Gradientenmethode mit Vorzeichen. Der Algorithmus hat Verbindungen zu Algorithmen wie Adam und Rprop sowie zu Gradientenquantisierungsverfahren, die beim verteilten maschinellen Lernen verwendet werden.", "Bereitstellung einer Konvergenzanalyse des Sign SGD-Algorithmus für nicht-konvexe Fälle.", "Die Arbeit untersucht einen Algorithmus, der das Vorzeichen der Gradienten anstelle der tatsächlichen Gradienten für das Training tiefer Modelle verwendet."]} +{"source": "Deep learning has found numerous applications thanks to its versatility and accuracy on pattern recognition problems such as visual object detection. Learning and inference in deep neural networks, however, are memory and compute intensive and so improving efficiency is one of the major challenges for frameworks such as PyTorch, Tensorflow, and Caffe. While the efficiency problem can be partially addressed with specialized hardware and its corresponding proprietary libraries, we believe that neural network acceleration should be transparent to the user and should support all hardware platforms and deep learning libraries. \n\n To this end, we introduce a transparent middleware layer for neural network acceleration. The system is built around a compiler for deep learning, allowing one to combine device-specific libraries and custom optimizations while supporting numerous hardware devices. 
In contrast to other projects, we explicitly target the optimization of both prediction and training of neural networks. We present the current development status and some preliminary but encouraging results: on a standard x86 server, using CPUs our system achieves a 11.8x speed-up for inference and a 8.0x for batched-prediction (128); on GPUs we achieve a 1.7x and 2.3x speed-up respectively.", "target": ["Wir stellen eine transparente Middleware für die Beschleunigung neuronaler Netze mit eigener Compiler-Engine vor, die eine bis zu 11,8-fache Beschleunigung auf CPUs und eine 2,3-fache Beschleunigung auf GPUs erreicht.", "In diesem Beitrag wird eine transparente Middleware-Schicht für die Beschleunigung neuronaler Netze vorgeschlagen und es werden einige Beschleunigungsergebnisse auf einfachen CPU- und GPU-Architekturen erzielt."]} +{"source": "Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Many image classification tasks, such as those related to cellular imaging, exhibit invariance to rotation. In particular, to aid convolutional neural networks in learning rotation invariance, we consider a simple, efficient conic convolutional scheme that encodes rotational equivariance, along with a method for integrating the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). We evaluated the efficacy of CFNet as compared to a standard CNN and group-equivariant CNN (G-CNN) for several different image classification tasks and demonstrated improved performance, including classification accuracy, computational efficiency, and its robustness to hyperparameter selection. Taken together, we believe CFNet represents a new scheme that has the potential to improve many imaging analysis applications.", "target": ["Wir schlagen eine konische Convolution und die 2D-DFT vor, um die Rotationsäquivarianz in einem neuronalen Netz zu kodieren.", "Im Zusammenhang mit der Bildklassifikation wird in diesem Papier eine Architektur für Convolutional Neural Networks mit rotationsäquivarianten Merkmalskarten vorgeschlagen, die schließlich durch Verwendung des Betrags der diskreten 2D-Fouriertransformation (DFT) rotationsinvariant gemacht werden.", "Die Autoren bieten ein rotationsinvariantes neuronales Netzwerk durch die Kombination von konischer Convolution und 2D-DFT."]} +{"source": "The problem of visual metamerism is defined as finding a family of perceptually\n indistinguishable, yet physically different images. In this paper, we propose our\n NeuroFovea metamer model, a foveated generative model that is based on a mixture\n of peripheral representations and style transfer forward-pass algorithms. Our\n gradient-descent free model is parametrized by a foveated VGG19 encoder-decoder\n which allows us to encode images in high dimensional space and interpolate\n between the content and texture information with adaptive instance normalization\n anywhere in the visual field. 
Our contributions include: 1) A framework for\ncomputing metamers that resembles a noisy communication system via a foveated\nfeed-forward encoder-decoder network – We observe that metamerism arises as a\nbyproduct of noisy perturbations that partially lie in the perceptual null space; 2)\nA perceptual optimization scheme as a solution to the hyperparametric nature of\nour metamer model that requires tuning of the image-texture tradeoff coefficients\neverywhere in the visual field which are a consequence of internal noise; 3) An\n ABX psychophysical evaluation of our metamers where we also find that the rate\n of growth of the receptive fields in our model match V1 for reference metamers\n and V2 between synthesized samples. Our model also renders metamers at roughly\n a second, presenting a ×1000 speed-up compared to the previous work, which now\n allows for tractable data-driven metamer experiments.", "target": ["Wir stellen ein neuartiges Feed-Forward System zur Erzeugung visueller Metamere vor.", "Vorschlag eines NeuroFovea Modells für die Erzeugung von Fixpunkt-Metameren unter Verwendung eines Stiltransfer-Ansatzes über eine Encoder-Decoder Stilarchitektur.", "Eine Analyse des Metamerismus und ein Modell zur schnellen Herstellung von Metameren, die für die experimentelle Psychophysik und andere Bereiche von Nutzen sind.", "Die Arbeit schlägt eine schnelle Methode zur Erzeugung visueller Metamere vor für physikalisch unterschiedliche Bilder, die nicht von einem Original unterschieden werden können mittels foveatierter, schneller, beliebiger Stilübertragung."]} +{"source": "Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time.", "target": ["Wir zeigen, dass das Training von Feedforward-Relu-Netzen mit einem schwachen Regularizer zu einer maximalen Marge führt und analysieren die Auswirkungen dieses Ergebnisses.", "Erforscht die Margentheorie für neuronale Netze und zeigt, dass die maximale Marge mit der Größe des Netzes monoton zunimmt.", "Diese Arbeit untersucht die implizite Verzerrung von Minimierern eines regulierten Kreuzentropieverlustes eines zweischichtigen Netzes mit ReLU-Aktivierungen und erhält eine obere Schranke für die Verallgemeinerung, die nicht mit der Netzgröße zunimmt."]} +{"source": "We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. 
The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time.", "target": ["Eine verteilte Architektur für Deep Reinforcement Learning im großen Maßstab, die parallele Datengenerierung nutzt, um den Stand der Technik beim Arcade Learning Environment Benchmark in einem Bruchteil der Trainingszeit früherer Ansätze zu verbessern.", "Untersucht ein distributed Deep RL System, in dem Erfahrungen, anstatt Gradienten, zwischen den parallelen Arbeitern und dem zentralisierten Lernenden geteilt werden.", "Ein paralleler Ansatz für das DQN-Training, der auf der Idee beruht, dass mehrere Akteure parallel Daten sammeln, während ein einzelner Lerner das Modell anhand von Erfahrungen aus dem zentralen Wiedergabespeicher trainiert.", "Dieser Beitrag schlägt eine verteilte Architektur für Deep Reinforcement Learning im großen Maßstab vor, wobei der Schwerpunkt auf der Parallelisierung des Akteursalgorithmus im Prioritized Experience Replay Framework liegt."]} +{"source": "Designing neural networks for continuous-time stochastic processes is challenging, especially when observations are made irregularly. In this article, we analyze neural networks from a frame theoretic perspective to identify the sufficient conditions that enable smoothly recoverable representations of signals in L^2(R). Moreover, we show that, under certain assumptions, these properties hold even when signals are irregularly observed. As we converge to the family of (convolutional) neural networks that satisfy these conditions, we show that we can optimize our convolution filters while constraining them so that they effectively compute a Discrete Wavelet Transform. Such a neural network can efficiently divide the time-axis of a signal into orthogonal sub-spaces of different temporal scale and localization. We evaluate the resulting neural network on an assortment of synthetic and real-world tasks: parsimonious auto-encoding, video classification, and financial forecasting.", "target": ["Neuronale Architekturen, die Darstellungen von unregelmäßig beobachteten Signalen liefern und nachweislich eine Signalrekonstruktion ermöglichen.", "Beweist, dass Convolutional Neural Networks mit Leaky-ReLU Aktivierungsfunktion nichtlineare Rahmen sind, mit ähnlichen Ergebnissen für ungleichmäßig abgetastete Zeitreihen.", "In diesem Artikel werden neuronale Netze über Zeitreihen betrachtet und es wird gezeigt, dass die ersten Convolutional Filter zur Darstellung einer diskreten Wavelet-Transformation gewählt werden können."]} +{"source": "Most state-of-the-art neural machine translation systems, despite being different\n in architectural skeletons (e.g., recurrence, convolutional), share an indispensable\n feature: the Attention. However, most existing attention methods are token-based\n and ignore the importance of phrasal alignments, the key ingredient for the success\n of phrase-based statistical machine translation. 
In this paper, we propose\n novel phrase-based attention methods to model n-grams of tokens as attention\n entities. We incorporate our phrase-based attentions into the recently proposed\n Transformer network, and demonstrate that our approach yields improvements of\n 1.3 BLEU for English-to-German and 0.5 BLEU for German-to-English translation\n tasks, and 1.75 and 1.35 BLEU points in English-to-Russian and Russian-to-English translation tasks \n on WMT newstest2014 using WMT’16 training data.\n", "target": ["Phrasenbasierte Aufmerksamkeitsmechanismen zur Zuweisung von Aufmerksamkeit auf Phrasen, die zusätzlich zu den bestehenden Token-to-Token-Attentionen Phrase-to-Token- und Phrase-to-Phrase-Attention-Alignments erreichen.", "Die Arbeit stellt einen Aufmerksamkeitsmechanismus vor, der eine gewichtete Summe nicht nur über einzelne Token, sondern auch über Ngramme (Phrasen) berechnet."]} +{"source": "Intuitively, unfamiliarity should lead to lack of confidence. In reality, current algorithms often make highly confident yet wrong predictions when faced with unexpected test samples from an unknown distribution different from training. Unlike domain adaptation methods, we cannot gather an \"unexpected dataset\" prior to test, and unlike novelty detection methods, a best-effort original task prediction is still expected. We compare a number of methods from related fields such as calibration and epistemic uncertainty modeling, as well as two proposed methods that reduce overconfident errors of samples from an unknown novel distribution without drastically increasing evaluation time: (1) G-distillation, training an ensemble of classifiers and then distill into a single model using both labeled and unlabeled examples, or (2) NCR, reducing prediction confidence based on its novelty detection score. Experimentally, we investigate the overconfidence problem and evaluate our solution by creating \"familiar\" and \"novel\" test splits, where \"familiar\" are identically distributed with training and \"novel\" are not. We discover that calibrating using temperature scaling on familiar data is the best single-model method for improving novel confidence, followed by our proposed methods. In addition, some methods' NLL performance are roughly equivalent to a regularly trained model with certain degree of smoothing. Calibrating can also reduce confident errors, for example, in gender recognition by 95% on demographic groups different from the training data.", "target": ["Tiefe Netze liegen mit größerer Wahrscheinlichkeit falsch, wenn sie mit unerwarteten Daten getestet werden. Wir schlagen eine experimentelle Methodik vor, um das Problem zu untersuchen, und zwei Methoden, um die Fehlerwahrscheinlichkeit bei unbekannten Eingangsverteilungen zu verringern.", "Es werden zwei Ideen zur Verringerung übermäßiger falscher Vorhersagen vorgeschlagen: \"G-Distillation\" eines Ensembles mit zusätzlichen unüberwachten Daten und Verringerung des Neuheitsvertrauens mit Hilfe eines Neuheitsdetektors.", "Die Autoren schlagen zwei Methoden zur Schätzung der Klassifizierungssicherheit bei neuartigen, ungesehenen Datenverteilungen vor. Die erste Idee besteht darin, Ensemble-Methoden als Basis zu verwenden, um unsichere Fälle zu identifizieren, und dann Destillationsmethoden zu verwenden, um das Ensemble in ein einzelnes Modell zu reduzieren, das das Verhalten des Ensembles nachahmt. 
Die zweite Idee besteht darin, einen Neuheitsdetektor-Klassifikator zu verwenden und die Netzwerkausgabe nach dem Neuheitswert zu gewichten."]} +{"source": "Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster than widely used algorithms for fixed amounts of computation, and also scales up substantially better as more computational resources become available. Our algorithm implicitly computes the inverse Hessian of each mini-batch to produce descent directions; we do so without either an explicit approximation to the Hessian or Hessian-vector products. We demonstrate the effectiveness of our algorithm by successfully training large ImageNet models (InceptionV3, ResnetV1-50, ResnetV1-101 and InceptionResnetV2) with mini-batch sizes of up to 32000 with no loss in validation error relative to current baselines, and no increase in the total number of steps. At smaller mini-batch sizes, our optimizer improves the validation error in these models by 0.8-0.9\\%. Alternatively, we can trade off this accuracy to reduce the number of training steps needed by roughly 10-30\\%. Our work is practical and easily usable by others -- only one hyperparameter (learning rate) needs tuning, and furthermore, the algorithm is as computationally cheap as the commonly used Adam optimizer.", "target": ["Wir beschreiben einen praktischen Optimierungsalgorithmus für tiefe neuronale Netze, der im Vergleich zu weit verbreiteten Algorithmen schneller arbeitet und bessere Modelle erzeugt.", "Schlägt einen neuen Algorithmus vor, bei dem die Hess'sche Formel implizit verwendet wird und eine Motivation aus Potenzreihen verwendet wird.", "Stellt einen neuen Algorithmus 2. Ordnung vor, der implizit Krümmungsinformationen verwendet, und zeigt die Intuition hinter den Näherungsschemata in den Algorithmen und validiert die Heuristiken in verschiedenen Experimenten."]} +{"source": "Recent work has shown that performing inference with fast, very-low-bitwidth\n (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate\n results. However, although 2-bit approximated networks have been shown to\n be quite accurate, 1 bit approximations, which are twice as fast, have restrictively\n low accuracy. We propose a method to train models whose weights are a mixture\n of bitwidths, that allows us to more finely tune the accuracy/speed trade-off. We\n present the “middle-out” criterion for determining the bitwidth for each value, and\n show how to integrate it into training models with a desired mixture of bitwidths.\n We evaluate several architectures and binarization techniques on the ImageNet\n dataset. We show that our heterogeneous bitwidth approximation achieves superlinear\n scaling of accuracy with bitwidth. 
Using an average of only 1.4 bits, we are\n able to outperform state-of-the-art 2-bit architectures.", "target": ["Wir führen die Annäherung mit gebrochener Bitbreite ein und zeigen, dass sie erhebliche Vorteile hat.", "Vorschlagen einer Methode zur Veränderung des Quantisierungsgrades in einem neuronalen Netz während der Vorwärtsausbreitungsphase.", "Beibehaltung der Genauigkeit eines 2-Bit Netzwerks bei Verwendung von weniger als 2-Bit Gewichten."]} +{"source": "Pruning units in a deep network can help speed up inference and training as well as reduce the size of the model. We show that bias propagation is a pruning technique which consistently outperforms the common approach of merely removing units, regardless of the architecture and the dataset. We also show how a simple adaptation to an existing scoring function allows us to select the best units to prune. Finally, we show that the units selected by the best performing scoring functions are somewhat consistent over the course of training, implying the dead parts of the network appear during the stages of training.", "target": ["Mean Replacement ist eine effiziente Methode, um den Verlust nach dem Pruning zu verbessern, und auf Taylor-Approximation basierende Scoring-Funktionen funktionieren besser mit absoluten Werten. ", "Vorschlag einer einfachen Verbesserung der Methoden für das Pruning von Einheiten unter Verwendung der \"mittleren Ersetzung\".", "In diesem Beitrag wird eine Strategie zum Pruning des Mittelwerts vorgestellt und die absolutwertige Taylor-Erweiterung als Bewertungsfunktion für das Pruning verwendet."]} +{"source": "Due to the phenomenon of “posterior collapse,” current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires altering the training objective. We develop an alternative that utilizes the most powerful generative models as decoders, optimize the variational lower bound, and ensures that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate our method’s efficacy at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state of the art log-likelihood on CIFAR-10 and ImageNet 32 × 32.", "target": [" Vermeiden des posterior Collapse, indem die Rate nach unten begrenzt wird.", "Vorstellung eines Ansatzes zur Verhinderung eines posterior Kollapses bei VAEs durch Begrenzung der Familie der Variationsannäherung auf das Posterior.", "In dieser Arbeit wird eine Einschränkung für die Familie der Variationsposterioren eingeführt, so dass der KL-Term kontrolliert werden kann, um den Posterior-Kollaps in tiefen generativen Modellen wie VAEs zu bekämpfen."]} +{"source": "Mini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of noisy gradient. However, its batch size is a fixed hyper-parameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. 
We extend the BA-SGD to momentum algorithm and evaluate both the BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, batch size boom. The code implementing batch adaptive framework is now open source, applicable to any gradient-based optimization problems.", "target": ["Wir haben ein adaptives Batch-Momentum entwickelt, das im Vergleich zu Mini-Batch-Methoden nach dem Scannen derselben Datenepochen geringere Verluste erzielt und robuster gegenüber großen Schrittweiten ist.", "Diese Arbeit befasst sich mit dem Problem der automatischen Abstimmung der Batch-Größe während des Deep-Learning-Trainings, und behauptet, Batch-adaptive SGD auf adaptive Dynamik zu erweitern und die Algorithmen auf komplexe neuronale Netz Probleme anzupassen.", "In der Arbeit wird vorgeschlagen, einen Algorithmus zu verallgemeinern, der SGD mit adaptiven Losgrößen durchführt, indem der Nutzenfunktion ein Momentum hinzugefügt wird."]} +{"source": "Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers.", "target": ["Wir verwenden Meta-Gradienten, um das Trainingsverfahren von tiefen neuronalen Netzen für Graphen anzugreifen.", "Untersucht das Problem des Erlernens besserer vergifteter Graphparameter, die den Verlust eines neuronalen Graphennetzes maximieren können. ", "Ein Algorithmus zur Veränderung der Graphenstruktur durch Hinzufügen/Löschen von Kanten, um die globale Leistung der Knotenklassifizierung zu verschlechtern, und die Idee, Meta-Learning zur Lösung des zweistufigen Optimierungsproblems einzusetzen."]} +{"source": "Numerous models for grounded language understanding have been recently proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e. to how exactly the modules are connected. 
We furthermore investigate if modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.\n", "target": ["Wir zeigen, dass modular strukturierte Modelle die beste systematische Verallgemeinerung bieten und dass ihre Ende-zu-Ende-Versionen nicht so gut verallgemeinern.", "In diesem Beitrag wird die systemische Generalisierung zwischen modularen neuronalen Netzen und anderen generischen Modellen durch die Einführung eines neuen Datensatzes für räumliche Schlussfolgerungen bewertet.", "Eine gezielte empirische Evaluierung der Generalisierung in Modellen für visuelles Denken, die sich auf das Problem der Erkennung von (Objekt, Relation, Objekt) Tripeln in synthetischen Szenen mit Buchstaben und Zahlen konzentriert."]} +{"source": "The behavioral dynamics of multi-agent systems have a rich and orderly structure, which can be leveraged to understand these systems, and to improve how artificial agents learn to operate in them. Here we introduce Relational Forward Models (RFM) for multi-agent learning, networks that can learn to make accurate predictions of agents' future behavior in multi-agent environments. Because these models operate on the discrete entities and relations present in the environment, they produce interpretable intermediate representations which offer insights into what drives agents' behavior, and what events mediate the intensity and valence of social interactions. Furthermore, we show that embedding RFM modules inside agents results in faster learning systems compared to non-augmented baselines. \n As more and more of the autonomous systems we develop and interact with become multi-agent in nature, developing richer analysis tools for characterizing how and why agents make decisions is increasingly necessary. Moreover, developing artificial agents that quickly and safely learn to coordinate with one another, and with humans in shared environments, is crucial.", "target": ["Relationale Vorwärtsmodelle für das Lernen von Multi-Agenten machen genaue Vorhersagen über das zukünftige Verhalten der Agenten, sie erzeugen interpretierbare Repräsentationen und können innerhalb der Agenten verwendet werden.", "Eine Möglichkeit, die Varianz beim modellfreien Lernen zu verringern, indem ein explizites Modell der Aktionen, die andere Agenten ausführen werden, erstellt wird, das eine graphische, netzartige Architektur verwendet. ", "Vorhersage des Verhaltens von Multi-Agenten mit Hilfe eines relationalen Vorwärtsmodells mit einer rekurrenten Komponente, das zwei Basismodelle und zwei Ablationsmodelle übertrifft."]} +{"source": "We show that gradient descent on an unregularized logistic regression\n problem, for almost all separable datasets, converges to the same direction as the max-margin solution. The result generalizes also to other monotone decreasing loss functions with an infimum at infinity, and we also discuss a multi-class generalizations to the cross entropy loss. Furthermore,\n we show this convergence is very slow, and only logarithmic in the\n convergence of the loss itself. 
This can help explain the benefit\n of continuing to optimize the logistic or cross-entropy loss even\n after the training error is zero and the training loss is extremely\n small, and, as we show, even if the validation loss increases. Our\n methodology can also aid in understanding implicit regularization\n in more complex models and with other optimization methods.", "target": ["Die normalisierte Lösung des Gradientenabstiegs bei der logistischen Regression (oder ein ähnlich abnehmender Verlust) konvergiert bei trennbaren Daten langsam gegen die L2-Max-Margin-Lösung.", "Das Papier bietet einen formalen Beweis dafür, dass der Gradientenabstieg auf dem logistischen Verlust sehr langsam gegen die harte SVM-Lösung konvergiert, wenn die Daten linear separierbar sind. ", "Diese Arbeit konzentriert sich auf die Charakterisierung des Verhaltens der Log-Loss Minimierung auf linear trennbaren Daten und zeigt, dass Log-Loss, minimiert mit Gradientenabstieg, zur Konvergenz der Max-Margin Lösung führt."]} +{"source": "Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to the texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's.\n We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.", "target": ["Aufbauend auf früheren Arbeiten zur Domänengeneralisierung hoffen wir, einen Klassifikator zu entwickeln, der sich auch auf bisher unbekannte Domänen verallgemeinern lässt, selbst wenn während des Trainings keine Domänenidentifikatoren verfügbar sind.", "Ein Ansatz zur Domänengeneralisierung, um semantische Informationen auf der Grundlage eines linearen Projektionsschemas von CNN- und NGLCM-Ausgabeschichten aufzudecken.", "Die Arbeit schlägt einen unüberwachten Ansatz zur Identifizierung von Bildmerkmalen vor, die für Bildklassifizierungsaufgaben nicht aussagekräftig sind."]} +{"source": "In this paper, we conduct an intriguing experimental study about the physical adversarial attack on object detectors in the wild. In particular, we learn a camouflage pattern to hide vehicles from being detected by state-of-the-art convolutional neural network based detectors. Our approach alternates between two threads. 
In the first, we train a neural approximation function to imitate how a simulator applies a camouflage to vehicles and how a vehicle detector performs given images of the camouflaged vehicles. In the second, we minimize the approximated detection score by searching for the optimal camouflage. Experiments show that the learned camouflage can not only hide a vehicle from the image-based detectors under many test cases but also generalizes to different environments, vehicles, and object detectors.", "target": ["Wir schlagen eine Methode zum Erlernen der physischen Fahrzeugtarnung vor, um Objektdetektoren in der freien Natur anzugreifen. Wir finden unsere Tarnung effektiv und übertragbar.", "Die Autoren untersuchen das Problem des Erlernens eines Tarnmusters, das, wenn es auf ein simuliertes Fahrzeug angewendet wird, verhindert, dass ein Objektdetektor es erkennt.", "In diesem Beitrag geht es um adversariales Lernen für die Erkennung von Störfahrzeugen durch das Lernen von Tarnmustern."]} +{"source": "As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows. Single decision trees are an example of a simple, interpretable classifier, but are unsuitable for use with complex, high-dimensional data. On the other hand, the variational autoencoder (VAE) is designed to learn a factored, low-dimensional representation of data, but typically encodes high-likelihood data in an intrinsically non-separable way. We introduce the differentiable decision tree (DDT) as a modular component of deep networks and a simple, differentiable loss function that allows for end-to-end optimization of a deep network to compress high-dimensional data for classification by a single decision tree. We also explore the power of labeled data in a supervised VAE (SVAE) with a Gaussian mixture prior, which leverages label information to produce a high-quality generative model with improved bounds on log-likelihood. We combine the SVAE with the DDT to get our classifier+VAE (C+VAE), which is competitive in both classification error and log-likelihood, despite optimizing both simultaneously and using a very simple encoder/decoder architecture.", "target": ["Wir kombinieren differenzierbare Entscheidungsbäume mit überwachten variationalen Autoencodern, um die Interpretierbarkeit der Klassifizierung zu verbessern. ", "Diese Arbeit schlägt ein hybrides Modell eines variationalen Autoencoders vor, der mit einem differenzierbaren Entscheidungsbaum und einem begleitenden Trainingsschema zusammengesetzt ist. Experimente zeigen die Klassifizierungsleistung des Baums, die negative Log-Likelihood-Leistung und die Interpretierbarkeit des latenten Raums.", "In diesem Beitrag wird versucht, einen interpretierbaren und genauen Klassifikator zu erstellen, indem eine überwachte VAE und ein differenzierbarer Entscheidungsbaum kombiniert werden."]} +{"source": "We propose Regularized Learning under Label shifts (RLLS), a principled and a practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain. We first estimate importance weights using labeled source data and unlabeled target data, and then train a classifier on the weighted source samples. We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class. 
To the best of our knowledge, this is the first generalization bound for the label-shift problem where the labels in the target domain are not available. Based on this bound, we propose a regularized estimator for the small-sample regime which accounts for the uncertainty in the estimated weights. Experiments on the CIFAR-10 and MNIST datasets show that RLLS improves classification accuracy, especially in the low sample and large-shift regimes, compared to previous methods.", "target": ["Ein praktischer und nachweislich garantierter Ansatz für das Training effizienter Klassifikatoren in Anwesenheit von Label-Verschiebungen zwischen Quell- und Zieldatensätzen.", "Die Autoren schlagen einen neuen Algorithmus zur Verbesserung der Stabilität des Schätzverfahrens für die Klassenbedeutungsgewichtung in einem zweistufigen Verfahren vor.", "Die Autoren betrachten das Problem des Lernens unter Label Shifts, bei denen sich die Labelanteile unterscheiden, während die Konditionalitäten gleich sind, und schlagen einen verbesserten Schätzer mit Regularisierung vor."]} +{"source": "The statistics of the real visual world presents a long-tailed distribution: a few classes have significantly more training instances than the remaining classes in a dataset. This is because the real visual world has a few classes that are common while others are rare. Unfortunately, the performance of a convolutional neural network is typically unsatisfactory when trained using a long-tailed dataset. To alleviate this issue, we propose a method that discriminatively learns an embedding in which a simple Bayesian classifier can balance the class-priors to generalize well for rare classes. To this end, the proposed approach uses a Gaussian mixture model to factor out class-likelihoods and class-priors in a long-tailed dataset. The proposed method is simple and easy-to-implement in existing deep learning frameworks. Experiments on publicly available datasets show that the proposed approach improves the performance on classes with few training instances, while maintaining a comparable performance to the state-of-the-art on classes with abundant training examples.", "target": ["Ansatz zur Verbesserung der Klassifizierungsgenauigkeit bei Klassen im Heck.", "Das Hauptziel dieser Arbeit ist es, einen ConvNet-Klassifikator zu erlernen, der für Klassen im hinteren Teil der Klassenhäufigkeitsverteilung bessere Leistungen erbringt.", "Vorschlag für einen Bayes'schen Rahmen mit einem Gauß'schen Mischmodell, um das Problem der unausgewogenen Anzahl von Trainingsdaten aus verschiedenen Klassen bei Klassifizierungsanwendungen zu lösen."]} +{"source": "As deep reinforcement learning is being applied to more and more tasks, there is a growing need to better understand and probe the learned agents. Visualizing and understanding the decision making process can be very valuable to comprehend and identify problems in the learned behavior. However, this topic has been relatively under-explored in the reinforcement learning community. In this work we present a method for synthesizing states of interest for a trained agent. Such states could be situations (e.g. crashing or damaging a car) in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved (e.g. risky states) are often interesting to understand the situational awareness of the system. 
To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insightful visualizations for a variety of environments and reinforcement learning methods. We explore these issues in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify significant decision scenarios with this technique, we believe this general approach could serve as an important tool for AI safety applications.", "target": ["Wir stellen eine Methode vor, mit der sich interessante Zustände für Agenten mit Reinforcement Learning synthetisieren lassen, um ihr Verhalten zu analysieren. ", "In diesem Beitrag wird ein generatives Modell für visuelle Beobachtungen in RL vorgeschlagen, das in der Lage ist, Beobachtungen von Interesse zu generieren.", "Ein Ansatz zur Visualisierung interessanter Zustände, der einen variationalen Autoencoder umfasst, der lernt, den Zustandsraum zu rekonstruieren, und einen Optimierungsschritt, der Konditionierungsparameter zur Erzeugung synthetischer Bilder findet."]} +{"source": "We introduce the deep abstaining classifier -- a deep neural network trained with a novel loss function that provides an abstention option during training. This allows the DNN to abstain on confusing or difficult-to-learn examples while improving performance on the non-abstained samples. We show that such deep abstaining classifiers can: (i) learn representations for structured noise -- where noisy training labels or confusing examples are correlated with underlying features -- and then learn to abstain based on such features; (ii) enable robust learning in the presence of arbitrary or unstructured noise by identifying noisy samples; and (iii) be used as an effective out-of-category detector that learns to reliably abstain when presented with samples from unknown classes. We provide analytical results on loss function behavior that enable automatic tuning of accuracy and coverage, and demonstrate the utility of the deep abstaining classifier using multiple image benchmarks, Results indicate significant improvement in learning in the presence of label noise.", "target": ["Ein tiefes neuronales Netzwerk, das mit einer neuartigen Verlustfunktion trainiert wird, die Repräsentationen dafür erlernt, wann man sich enthalten sollte, was robustes Lernen in Gegenwart verschiedener Arten von Rauschen ermöglicht.", "Eine neue Verlustfunktion für die Ausbildung eines tiefen neuronalen Netzes, die sich enthalten kann, mit der Leistung aus den Blickwinkeln bei Vorhandensein von strukturierten Störungen, bei Vorhandensein von unstrukturierten Störungen, und offene Welt Erkennung betrachtet.", "Dieses Manuskript stellt tiefe abstinente Klassifizierer vor, die den klassenübergreifenden Entropieverlust mit einem Abstinenzverlust modifizieren, der dann auf gestörte Bildklassifizierungsaufgaben angewendet wird."]} +{"source": "Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular-based methods, one major drawback of temporal difference learning with neural networks and other function approximators is that they tend to over-generalize across temporally successive states, resulting in slow convergence and even instability. 
In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), that reduces over-generalization and thus leads to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators. \n HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvement in learning behavior and performance.", "target": ["Eine Regularisierungstechnik für TD-Lernen, die zeitliche Übergeneralisierung vermeidet, insbesondere in tiefen Netzen.", "Eine Variante des zeitlichen Differenzlernens für den Fall der Funktionsannäherung, die versucht, das Problem der Übergeneralisierung über zeitlich aufeinanderfolgende Zustände hinweg zu lösen.", "Das Papier stellt HR-TD vor, eine Variation des TD(0)-Algorithmus, die das Problem der Übergeneralisierung in konventionellem TD verbessern soll."]} +{"source": "We present an efficient convolution kernel for Convolutional Neural Networks (CNNs) on unstructured grids using parameterized differential operators while focusing on spherical signals such as panorama images or planetary signals. \n To this end, we replace conventional convolution kernels with linear combinations of differential operators that are weighted by learnable parameters. Differential operators can be efficiently estimated on unstructured grids using one-ring neighbors, and learnable parameters can be optimized through standard back-propagation. As a result, we obtain extremely efficient neural networks that match or outperform state-of-the-art network architectures in terms of performance but with a significantly lower number of network parameters. We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. Overall, we present (1) a novel CNN approach on unstructured grids using parameterized differential operators for spherical signals, and (2) we show that our unique kernel parameterization allows our model to achieve the same or higher accuracy with significantly fewer network parameters.", "target": ["Wir stellen einen neuen CNN-Kernel für unstrukturierte Gitter für sphärische Signale vor und zeigen einen signifikanten Gewinn an Genauigkeit und Parametereffizienz bei Aufgaben wie 3D-Klassifizierung und omnidirektionaler Bildsegmentierung.", "Eine effiziente Methode, die Deep Learning auf sphärischen Daten ermöglicht und mit viel weniger Parametern als gängige Ansätze konkurrenzfähige/aktuelle Zahlen erreicht.", "Das Papier schlägt einen neuartigen Convolutional-Kernel für CNN auf unstrukturierten Gittern vor und formuliert die Convolution durch eine lineare Kombination von Differentialoperatoren."]} +{"source": "Prediction is arguably one of the most basic functions of an intelligent system. In general, the problem of predicting events in the future or between two waypoints is exceedingly difficult. However, most phenomena naturally pass through relatively predictable bottlenecks---while we cannot predict the precise trajectory of a robot arm between being at rest and holding an object up, we can be certain that it must have picked the object up. To exploit this, we decouple visual prediction from a rigid notion of time. 
While conventional approaches predict frames at regularly spaced temporal intervals, our time-agnostic predictors (TAP) are not tied to specific times so that they may instead discover predictable \"bottleneck\" frames no matter when they occur. We evaluate our approach for future and intermediate frame prediction across three robotic manipulation tasks. Our predictions are not only of higher visual quality, but also correspond to coherent semantic subgoals in temporally extended tasks.", "target": ["Wenn Sie bei visuellen Vorhersageaufgaben Ihr Vorhersagemodell entscheiden lassen, welche Zeiten vorhergesagt werden sollen, hat dies zwei Vorteile: (i) es verbessert die Qualität der Vorhersage und (ii) es führt zu semantisch kohärenten \"Engpasszustand\"-Vorhersagen, die für die Planung nützlich sind.", "Verfahren zur Vorhersage von Einzelbildern in einem Video, wobei der Ansatz beinhaltet, dass die Zielvorhersage fließend ist, aufgelöst durch ein Minimum des Vorhersagefehlers.", "Neuformulierung der Aufgabe der Videovorhersage/Interpolation, so dass ein Vorherseher nicht gezwungen ist, Bilder in festen Zeitintervallen zu erzeugen, sondern trainiert wird, Bilder zu erzeugen, die zu einem beliebigen Zeitpunkt in der Zukunft stattfinden."]} +{"source": "In cities with tall buildings, emergency responders need an accurate floor level location to find 911 callers quickly. We introduce a system to estimate a victim's floor level via their mobile device's sensor data in a two-step process. First, we train a neural network to determine when a smartphone enters or exits a building via GPS signal changes. Second, we use a barometer equipped smartphone to measure the change in barometric pressure from the entrance of the building to the victim's indoor location. Unlike impractical previous approaches, our system is the first that does not require the use of beacons, prior knowledge of the building infrastructure, or knowledge of user behavior. We demonstrate real-world feasibility through 63 experiments across five different tall buildings throughout New York City where our system predicted the correct floor level with 100% accuracy.\n", "target": ["Wir haben ein LSTM verwendet, um zu erkennen, wenn ein Smartphone ein Gebäude betritt. Dann sagen wir anhand der Daten von Sensoren an Bord des Smartphones die Bodenhöhe des Geräts voraus.", "In diesem Beitrag wird ein System vorgestellt, das anhand der Sensordaten eines mobilen Geräts und unter Verwendung eines LSTM sowie von Änderungen des Luftdrucks eine Schätzung der Bodenhöhe vornimmt.", "Vorschlag für ein zweistufiges Verfahren zur Bestimmung des Stockwerks, in dem sich ein Mobiltelefon in einem hohen Gebäude befindet."]} +{"source": "Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR's adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representation, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with non-language goal representation failed to learn. 
We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate it is crucial to use hindsight advice to solve challenging tasks, but we also found that little amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method.", "target": ["Kombinieren Sie die Darstellung von Sprachzielen mit rückblickenden Erfahrungswiederholungen.", "In diesem Aufsatz wird die in der Rückschau implizite Annahme berücksichtigt, dass es eine Abbildung von Zuständen auf Ziele gibt, und es wird eine natürlichsprachliche Darstellung von Zielen vorgeschlagen.", "In diesem Beitrag wird das Hindsight Experience Replay Framework mit natürlichsprachlichen Zielen verwendet, um die Beispiel-Effizienz von Modellen zur Befolgung von Anweisungen zu verbessern."]} +{"source": "Learning rich and compact representations is an open topic in many fields such as word embedding, visual question-answering, object recognition or image retrieval. Although deep neural networks (convolutional or not) have made a major breakthrough during the last few years by providing hierarchical, semantic and abstract representations for all of these tasks, these representations are not necessary as rich as needed nor as compact as expected. Models using higher order statistics, such as bilinear pooling, provide richer representations at the cost of higher dimensional features. Factorization schemes have been proposed but without being able to reach the original compactness of first order models, or at a heavy loss in performances. This paper addresses these two points by extending factorization schemes to codebook strategies, allowing compact representations with the same dimensionality as first order representations, but with second order performances. Moreover, we extend this framework with a joint codebook and factorization scheme, granting a reduction both in terms of parameters and computation cost. This formulation leads to state-of-the-art results and compact second-order models with few additional parameters and intermediate representations with a dimension similar to that of first-order statistics.", "target": ["Wir schlagen ein gemeinsames Codebuch- und Faktorisierungsschema zur Verbesserung des Poolings zweiter Ordnung vor.", "In diesem Beitrag wird eine Möglichkeit vorgestellt, bestehende faktorisierte Darstellungen zweiter Ordnung mit einer harten Zuweisung im Stil eines Codebuchs zu kombinieren.", "Vorschlag für eine neuartige bilineare Darstellung auf der Grundlage eines Codebuchmodells und eine effiziente Formulierung, bei der Codebuch-basierte Projektionen durch gemeinsame Projektion faktorisiert werden, um die Parametergröße weiter zu reduzieren."]} +{"source": "Natural language understanding research has recently shifted towards complex Machine Learning and Deep Learning algorithms. Such models often outperform their simpler counterparts significantly. However, their performance relies on the availability of large amounts of labeled data, which are rarely available. To tackle this problem, we propose a methodology for extending training datasets to arbitrarily big sizes and training complex, data-hungry models using weak supervision. We apply this methodology on biomedical relation extraction, a task where training datasets are excessively time-consuming and expensive to create, yet has a major impact on downstream applications such as drug discovery. 
We demonstrate in two small-scale controlled experiments that our method consistently enhances the performance of an LSTM network, with performance improvements comparable to hand-labeled training data. Finally, we discuss the optimal setting for applying weak supervision using this methodology.", "target": ["Wir schlagen eine Meta-Lernmethode vor und wenden sie an, die auf schwacher Überwachung basiert, um halb-überwachtes und Ensemble-Lernen bei der Extraktion biomedizinischer Beziehungen zu kombinieren.", "Ein halbüberwachtes Verfahren zur Klassifizierung von Beziehungen, bei dem mehrere Basislerner anhand eines kleinen markierten Datensatzes trainiert werden und einige von ihnen dazu verwendet werden, unmarkierte Beispiele für das halbüberwachte Lernen zu annotieren.", "Diese Arbeit befasst sich mit dem Problem der Erzeugung von Trainingsdaten für die Extraktion biologischer Beziehungen und verwendet Vorhersagen von Daten, die von schwachen Klassifikatoren als zusätzliche Trainingsdaten für einen Meta-Lernalgorithmus gekennzeichnet wurden.", "In diesem Papier wird eine Kombination aus halbüberwachtem Lernen und Ensemble-Lernen für die Informationsextraktion vorgeschlagen, wobei Experimente an einer biomedizinischen Beziehungsextraktionsaufgabe durchgeführt werden."]} +{"source": "We introduce contextual explanation networks (CENs)---a class of models that learn to predict by generating and leveraging intermediate explanations. CENs are deep networks that generate parameters for context-specific probabilistic graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain jointly. Our approach offers two major advantages: (i) for each prediction, valid instance-specific explanations are generated with no computational overhead and (ii) prediction via explanation acts as a regularization and boosts performance in low-resource settings. We prove that local approximations to the decision boundary of our networks are consistent with the generated explanations. Our results on image and text classification and survival analysis tasks demonstrate that CENs are competitive with the state-of-the-art while offering additional insights behind each prediction, valuable for decision support.", "target": ["Eine Klasse von Netzwerken, die einfache Modelle im Handumdrehen generieren (sogenannte Erklärungen), die als Regularisierer fungieren und eine konsistente Modelldiagnose und Interpretierbarkeit ermöglichen.", "Die Autoren behaupten, dass der bisherige Stand der Technik neuronale Netze direkt als Komponenten in die grafischen Modelle integriert, was die Modelle uninterpretierbar macht.", "Vorschlag für eine Kombination von neuronalen Netzen und grafischen Modellen durch Verwendung eines tiefen neuronalen Netzes zur Vorhersage der Parameter eines grafischen Modells."]} +{"source": "The goal of imitation learning (IL) is to enable a learner to imitate an expert’s behavior given the expert’s demonstrations. Recently, generative adversarial imitation learning (GAIL) has successfully achieved it even on complex continuous control tasks. However, GAIL requires a huge number of interactions with environment during training. We believe that IL algorithm could be more applicable to the real-world environments if the number of interactions could be reduced. To this end, we propose a model free, off-policy IL algorithm for continuous control. 
The key ideas of our algorithm are twofold: 1) adopting a deterministic policy that allows us to derive a novel type of policy gradient which we call deterministic policy imitation gradient (DPIG), 2) introducing a function which we call state screening function (SSF) to avoid noisy policy updates with states that are not typical of those that appeared in the expert’s demonstrations. Experimental results show that our algorithm can achieve the goal of IL with at least tens of times fewer interactions than GAIL on a variety of continuous control tasks.", "target": ["Wir schlagen einen modellfreien Algorithmus für das Imitation Learning vor, der die Anzahl der Interaktionen mit der Umgebung im Vergleich zum modernsten Algorithmus für das Imitation Learning, GAIL, reduzieren kann.", "Schlägt vor, den deterministischen Policy-Gradienten-Algorithmus zu erweitern, um aus Demonstrationen zu lernen, während er mit einer Art Dichteabschätzung des Experten kombiniert wird.", "In diesem Beitrag wird das Problem des modellfreien Imitation Learnings betrachtet und eine Erweiterung des generativen adversarial Imitation Learning Algorithmus vorgeschlagen, indem die stochastische Politik des Lerners durch eine deterministische ersetzt wird.", "Das Papier kombiniert IRL, adversariales Training und Ideen aus deterministischen Policy-Gradienten mit dem Ziel, die Komplexität von Beispielen zu verringern."]} +{"source": "Convolution acts as a local feature extractor in convolutional neural networks (CNNs). However, the convolution operation is not applicable when the input data is supported on an irregular graph such as with social networks, citation networks, or knowledge graphs. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network that generalizes CNN architectures to graph-structured data and provides a systematic way to design a set of fixed-size learnable filters to perform convolutions on graphs. The topologies of these filters are adaptive to the topology of the graph when they scan the graph to perform convolution, replacing the square filter for the grid-structured data in traditional CNNs. The outputs are the weighted sum of these filters’ outputs, extracting both vertex features and the strength of correlation between vertices. It\n can be used with both directed and undirected graphs. The proposed TAGCN not only inherits the properties of convolutions in CNN for grid-structured data, but it is also consistent with convolution as defined in graph signal processing. Further, as no approximation to the convolution is needed, TAGCN exhibits better performance than existing graph-convolution-approximation methods on a number\n of data sets. 
As only the polynomials of degree two of the adjacency matrix are used, TAGCN is also computationally simpler than other recent methods.", "target": ["Graph-CNN mit geringer Rechenkomplexität (ohne Approximation) und besserer Klassifizierungsgenauigkeit.", "Schlägt einen neuen CNN-Ansatz zur Graphenklassifizierung vor, der einen Filter verwendet, der auf ausgehenden Spaziergängen mit zunehmender Länge basiert, um Informationen von weiter entfernten Knotenpunkten in einem Fortpflanzungsschritt einzubeziehen.", "Vorschlag für eine neue Architektur eines neuronalen Netzes für die halbüberwachte Graphenklassifizierung, die auf polynomialen Graphenfiltern aufbaut und diese auf aufeinanderfolgenden neuronalen Netzschichten mit ReLU-Aktivierungsfunktionen einsetzt.", "Die Arbeit führt Topology Adaptive GCN ein, um Convolutional Networks auf graph-strukturierte Daten zu verallgemeinern."]} +{"source": "Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a ``forgetting event'' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.", "target": ["Wir zeigen, dass katastrophales Vergessen innerhalb einer einzigen Aufgabe auftritt, und stellen fest, dass Beispiele, die nicht zum Vergessen neigen, ohne Generalisierungsverlust aus der Trainingsmenge entfernt werden können.", "Untersucht das Vergessensverhalten von Trainingsbeispielen während des SGD und zeigt, dass es beim Training neuronaler Netze in verschiedenen Netzarchitekturen \"Unterstützungsbeispiele\" gibt.", "In dieser Arbeit wird analysiert, inwieweit Netzwerke lernen, bestimmte Beispiele richtig zu klassifizieren und diese Beispiele dann im Laufe des Trainings vergessen.", "In dem Beitrag wird untersucht, ob einige Beispiele beim Training neuronaler Netze schwieriger zu lernen sind als andere. Solche Beispiele werden beim Lernen vergessen und mehrfach neu gelernt."]} +{"source": "Discovering objects and their attributes is of great importance for autonomous agents to effectively operate in human environments. This task is particularly challenging due to the ubiquitousness of objects and all their nuances in perceptual and semantic detail. In this paper we present an unsupervised approach for learning disentangled representations of objects entirely from unlabeled monocular videos. These continuous representations are not biased by or limited by a discrete set of labels determined by human labelers. The proposed representation is trained with a metric learning loss, where objects with homogeneous features are pushed together, while those with heterogeneous features are pulled apart. We show these unsupervised embeddings allow to discover object attributes and can enable robots to self-supervise in previously unseen environments. 
We quantitatively evaluate performance on a large-scale synthetic dataset with 12k object models, as well as on a real dataset collected by a robot and show that our unsupervised object understanding generalizes to previously unseen objects. Specifically, we demonstrate the effectiveness of our approach on robotic manipulation tasks, such as pointing at and grasping of objects. An interesting and perhaps surprising finding in this approach is that given a limited set of objects, object correspondences will naturally emerge when using metric learning without requiring explicit positive pairs.", "target": ["Ein unüberwachter Ansatz für das Erlernen von entflechteten Darstellungen von Objekten aus unmarkierten monokularen Videos.", "Entwirft eine Merkmalsdarstellung aus Videosequenzen, die von einer Szene aus verschiedenen Blickwinkeln aufgenommen wurden.", "Vorschlag für eine unüberwachte Methode zum Erlernen von Repräsentationen für visuelle Eingaben, die einen metrischen Lernansatz beinhaltet, der die nächstgelegenen Paare von Bildfeldern im Einbettungsraum zusammenführt, während andere Paare auseinandergedrängt werden.", "Diese Arbeit untersucht das selbstüberwachte Lernen von Objektrepräsentationen, mit der Hauptidee, Objekte mit ähnlichen Merkmalen zu ermutigen, sich weiter zueinander \"hingezogen\" zu fühlen."]} +{"source": "Learning from a scalar reward in continuous action space environments is difficult and often requires millions if not billions of interactions. We introduce state aligned vector rewards, which are easily defined in metric state spaces and allow our deep reinforcement learning agent to tackle the curse of dimensionality. Our agent learns to map from action distributions to state change distributions implicitly defined in a quantile function neural network. We further introduce a new reinforcement learning technique inspired by quantile regression which does not limit agents to explicitly parameterized action distributions. Our results in high dimensional state spaces show that training with vector rewards allows our agent to learn multiple times faster than an agent training with scalar rewards.", "target": ["Wir trainieren mit zustandsorientierten Vektor-Belohnungen einen Agenten, der Zustandsänderungen aus Aktionsverteilungen vorhersagt, indem wir eine neue, von der Quantilsregression inspirierte Reinforcement Learning Technik verwenden.", "Stellt einen Algorithmus vor, der darauf abzielt, das Reinforcement Learning in Situationen zu beschleunigen, in denen die Belohnung auf den Zustandsraum abgestimmt ist. ", "Diese Arbeit befasst sich mit RL im kontinuierlichen Handlungsraum, unter Verwendung einer neu parametrisierten Strategie und eines neuartigen vektorbasierten Trainingsziels.", "In dieser Arbeit wird vorgeschlagen, die Verteilungs-RL mit einem Netz zu mischen, das die Entwicklung der Welt in Form von Quantilen modelliert, um die Effizienz der Stichprobe zu verbessern."]} +{"source": "We propose Episodic Backward Update - a new algorithm to boost the performance of a deep reinforcement learning agent by fast reward propagation. In contrast to the conventional use of the replay memory with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state into its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate effectively throughout the sampled episode. 
We evaluate our algorithm on 2D MNIST Maze Environment and 49 games of the Atari 2600 Environment and show that our agent improves sample efficiency with a competitive computational cost.", "target": ["Wir schlagen Episodic Backward Update vor, einen neuartigen Deep Reinforcement Learning-Algorithmus, der Übergänge episodenweise abtastet und Werte rekursiv rückwärts aktualisiert, um schnelles und stabiles Lernen zu erreichen.", "Schlägt ein neues DQN vor, bei dem die Ziele für eine vollständige Episode durch eine Rückwärtsaktualisierung (von Ende zu Anfang) berechnet werden, um eine schnellere Ausbreitung der Belohnungen bis zum Ende der Episode zu erreichen.", "Die Autoren schlagen vor, den DQN-Algorithmus zu modifizieren, indem sie den Max-Bellman-Operator rekursiv auf eine Trajektorie mit einem gewissen Zerfall anwenden, um die Akkumulation von Fehlern mit dem verschachtelten Max zu verhindern.", "In Deep-Q-Netzen werden die Q-Werte ab dem Ende der Episode aktualisiert, um eine schnelle Ausbreitung der Belohnungen entlang der Episode zu ermöglichen."]} +{"source": "Survival Analysis (time-to-event analysis) in the presence of multiple possible adverse events, i.e., competing risks, is a challenging, yet very important problem in medicine, finance, manufacturing, etc. Extending classical survival analysis to competing risks is not trivial since only one event (e.g. one cause of death) is observed and hence, the incidence of an event of interest is often obscured by other related competing events. This leads to the nonidentifiability of the event times’ distribution parameters, which makes the problem significantly more challenging. In this work we introduce Siamese Survival Prognosis Network, a novel Siamese Deep Neural Network architecture that is able to effectively learn from data in the presence of multiple adverse events. The Siamese Survival Network is especially crafted to issue pairwise concordant time-dependent risks, in which longer event times are assigned lower risks. Furthermore, our architecture is able to directly optimize an approximation to the C-discrimination index, rather than relying on well-known metrics of cross-entropy etc., and which are not able to capture the unique requirements of survival analysis with competing risks. Our results show consistent performance improvements on a number of publicly available medical datasets over both statistical and deep learning state-of-the-art methods.", "target": ["In dieser Arbeit stellen wir eine neuartige Siamese Deep Neural Network Architektur vor, die in der Lage ist, effektiv aus Daten zu lernen, wenn mehrere unerwünschte Ereignisse vorliegen.", "In dieser Arbeit werden Siamesische Neuronale Netze in den Rahmen der konkurrierenden Risiken eingeführt, indem direkt für den c-Index optimiert wird.", "Die Autoren befassen sich mit Fragen der Risikoabschätzung in einem Umfeld der Überlebensanalyse mit konkurrierenden Risiken und schlagen vor, den zeitabhängigen Diskriminierungsindex unter Verwendung eines siamesischen Überlebensnetzwerks direkt zu optimieren"]} +{"source": "The digitization of data has resulted in making datasets available to millions of users in the form of relational databases and spreadsheet tables. However, a majority of these users come from diverse backgrounds and lack the programming expertise to query and analyze such tables. We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query. 
We use a deep sequence to sequence model in which the decoder uses a simple type system of SQL expressions to structure the output prediction. Based on the type, the decoder either copies an output token from the input question using an attention-based copying mechanism or generates it from a fixed vocabulary. We also introduce a value-based loss function that transforms a distribution over locations to copy from into a distribution over the set of input tokens to improve training of our model. We evaluate our model on the recently released WikiSQL dataset and show that our model trained using only supervised learning significantly outperforms the current state-of-the-art Seq2SQL model that uses reinforcement learning.", "target": ["Wir stellen ein typbasiertes Zeigernetzmodell zusammen mit einer wertbasierten Verlustmethode vor, um ein neuronales Modell zur Übersetzung natürlicher Sprache in SQL effektiv zu trainieren.", "Die Arbeit behauptet, eine neuartige Methode zu entwickeln, um Abfragen in natürlicher Sprache auf SQL abzubilden, indem eine Grammatik zur Anleitung der Dekodierung und eine neue Verlustfunktion für Zeiger/Kopiermechanismen verwendet wird."]} +{"source": "To backpropagate the gradients through stochastic binary layers, we propose the augment-REINFORCE-merge (ARM) estimator that is unbiased, exhibits low variance, and has low computational complexity. Exploiting variable augmentation, REINFORCE, and reparameterization, the ARM estimator achieves adaptive variance reduction for Monte Carlo integration by merging two expectations via common random numbers. The variance-reduction mechanism of the ARM estimator can also be attributed to either antithetic sampling in an augmented space, or the use of an optimal anti-symmetric \"self-control\" baseline function together with the REINFORCE estimator in that augmented space. Experimental results show the ARM estimator provides state-of-the-art performance in auto-encoding variational inference and maximum likelihood estimation, for discrete latent variable models with one or multiple stochastic binary layers. Python code for reproducible research is publicly available.", "target": ["Ein unverzerrter und varianzarmer Gradientenschätzer für diskrete latente Variablenmodelle.", "Vorschlag einer neuen Technik zur Varianzreduzierung, die bei der Berechnung eines erwarteten Verlustgradienten verwendet werden kann, wenn die Erwartung in Bezug auf unabhängige binäre Zufallsvariablen besteht.", "Ein Algorithmus, der Rao-Blackwellization und gemeinsame Zufallszahlen kombiniert, um die Varianz des Score-Funktions-Gradientenschätzers im speziellen Fall stochastischer binärer Netzwerke zu verringern.", "Ein unvoreingenommener und varianzarmer Augment-REINFORCE-Merge (ARM)-Schätzer für die Berechnung und das Backpropagating von Gradienten in binären neuronalen Netzen."]} +{"source": "Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed training. The scheme can reach a linear speed-up with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large network delays and bandwidth limits. To overcome this communication bottleneck, recent works propose to reduce the communication frequency. An algorithm of this type is local SGD that runs SGD independently in parallel on different workers and averages the sequences only once in a while. 
This scheme shows promising results in practice, but has eluded thorough theoretical analysis.\n \n We prove concise convergence rates for local SGD on convex problems and show that it converges at the same rate as mini-batch SGD in terms of the number of evaluated gradients, that is, the scheme achieves linear speed-up in the number of workers and mini-batch size. The number of communication rounds can be reduced by up to a factor of T^{1/2}---where T denotes the number of total steps---compared to mini-batch SGD. This also holds for asynchronous implementations.\n\n Local SGD can also be used for large scale training of deep learning models. The results shown here aim to serve as a guideline to further explore the theoretical and practical aspects of local SGD in these applications.", "target": ["Wir beweisen, dass paralleles lokales SGD eine lineare Beschleunigung mit viel weniger Kommunikation als paralleles Mini-Batch SGD erreicht.", "Liefert einen Konvergenzbeweis für lokale SGD und beweist, dass lokale SGD die gleichen Geschwindigkeitsgewinne wie Minibatch bieten kann, aber möglicherweise in der Lage ist, deutlich weniger zu kommunizieren.", "Diese Arbeit stellt eine Analyse der lokalen SGD und Grenzen auf, wie häufig die Schätzer, die durch die Ausführung von SGD erhalten werden, gemittelt werden müssen, um lineare Parallelisierung Geschwindigkeitssteigerungen zu erhalten.", "Die Autoren analysieren den lokalen SGD-Algorithmus, bei dem K parallele SGD-Ketten ausgeführt werden und die Iterate gelegentlich über Maschinen hinweg durch Mittelwertbildung synchronisiert werden."]} +{"source": "Extracting relevant information, causally inferring and predicting the future states with high accuracy is a crucial task for modeling complex systems. The endeavor to address these tasks is made even more challenging when we have to deal with high-dimensional heterogeneous data streams. Such data streams often have higher-order inter-dependencies across spatial and temporal dimensions. We propose to perform a soft-clustering of the data and learn its dynamics to produce a compact dynamical model while still ensuring the original objectives of causal inference and accurate predictions. To efficiently and rigorously process the dynamics of soft-clustering, we advocate for an information theory inspired approach that incorporates stochastic calculus and seeks to determine a trade-off between the predictive accuracy and compactness of the mathematical representation. We cast the model construction as a maximization of the compression of the state variables such that the predictive ability and causal interdependence (relatedness) constraints between the original data streams and the compact model are closely bounded. We provide theoretical guarantees concerning the convergence of the proposed learning algorithm. To further test the proposed framework, we consider a high-dimensional Gaussian case study and describe an iterative scheme for updating the new model parameters. Using numerical experiments, we demonstrate the benefits on compression and prediction accuracy for a class of dynamical systems. 
Finally, we apply the proposed algorithm to the real-world dataset of multimodal sentiment intensity and show improvements in prediction with reduced dimensions.", "target": ["Kompakte Wahrnehmung von dynamischen Prozessen.", "Untersucht das Problem der kompakten Darstellung des Modells eines komplexen dynamischen Systems bei gleichzeitiger Erhaltung der Informationen durch die Verwendung einer Informationsengpassmethode.", "In diesem Beitrag wurde die Gaußsche lineare Dynamik untersucht und ein Algorithmus zur Berechnung der Informationsengpasshierarchie (IBH) vorgeschlagen."]} +{"source": "We propose the dense RNN, which has full connections from each hidden state directly to multiple preceding hidden states of all layers. As the density of the connection increases, the number of paths through which the gradient flows can be increased. It increases the magnitude of gradients, which helps to prevent the vanishing gradient problem in time. Larger gradients, however, can also cause the exploding gradient problem. To complement the trade-off between the two problems, we propose an attention gate, which controls the amounts of gradient flows. We describe the relation between the attention gate and the gradient flows by approximation. The experiment on language modeling using the Penn Treebank corpus shows that dense connections with the attention gate improve the model’s performance.", "target": ["Dichtes RNN, das von jedem versteckten Zustand direkt zu mehreren vorhergehenden versteckten Zuständen aller Schichten vollständige Verbindungen hat.", "Schlägt eine neue RNN-Architektur vor, die langfristige Abhängigkeiten besser modelliert, eine multiskalige Darstellung von sequentiellen Daten erlernen kann und das Gradientenproblem durch die Verwendung parametrisierter Gating-Einheiten umgeht.", "In diesem Beitrag wird eine vollständig vernetzte dichte RNN-Architektur mit Gated-Verbindungen zu jeder Schicht und Verbindungen zu den vorangehenden Schichten vorgeschlagen und die Ergebnisse auf PTB-Charakterebene modelliert."]} +{"source": "We propose a new algorithm for training generative adversarial networks to jointly learn latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). In practice, this means that by fixing the identity portion of latent codes, we can generate diverse images of the same subject, and by fixing the observation portion we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce images that are both photorealistic, distinct, and appear to depict the same person. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to accommodate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs.", "target": ["SD-GANs entwirren latente Codes anhand bekannter Gemeinsamkeiten in einem Datensatz (z. B. 
Fotos, die dieselbe Person abbilden).", "In diesem Beitrag wird das Problem der kontrollierten Bilderzeugung untersucht und ein Algorithmus vorgeschlagen, der ein Bildpaar mit derselben Identität erzeugt.", "Diese Arbeit schlägt SD-GAN vor, eine Methode zum Training von GANs, um die Identitäts- und Nicht-Identitätsinformationen im latenten Eingangsvektor Z zu entflechten."]} +{"source": "The goal of unpaired cross-domain translation is to learn useful mappings between two domains, given unpaired sets of datapoints from these domains. While this formulation is highly underconstrained, recent work has shown that it is possible to learn mappings useful for downstream tasks by encouraging approximate cycle consistency in the mappings between the two domains [Zhu et al., 2017]. In this work, we propose AlignFlow, a framework for unpaired cross-domain translation that ensures exact cycle consistency in the learned mappings. Our framework uses a normalizing flow model to specify a single invertible mapping between the two domains. In contrast to prior works in cycle-consistent translations, we can learn AlignFlow via adversarial training, maximum likelihood estimation, or a hybrid of the two methods. Theoretically, we derive consistency results for AlignFlow which guarantee recovery of desirable mappings under suitable assumptions. Empirically, AlignFlow demonstrates significant improvements over relevant baselines on image-to-image translation and unsupervised domain adaptation tasks on benchmark datasets.", "target": ["Wir schlagen einen Lernrahmen für bereichsübergreifende Übersetzungen vor, der exakt zykluskonsistent ist und über adversariales Training, Maximum-Likelihood-Schätzung oder eine Mischform gelernt werden kann.", "Schlägt AlignFlow vor, eine effiziente Methode zur Umsetzung des Prinzips der Zykluskonsistenz unter Verwendung invertierbarer Flüsse.", "Flussmodelle für ungepaarte Bild-zu-Bild-Übersetzung."]} +{"source": "Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, that maps the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, it is commonly formulated as a constraint satisfaction problem, where input-output examples are expressed as constraints, and solved with a constraint solver. A key challenge of this formulation is that of scalability: While constraint solvers work well with few well-chosen examples, constraining the entire set of examples constitutes a significant overhead in both time and memory. In this paper we address this challenge by constructing a representative subset of examples that is both small and able to constrain the solver sufficiently. We build the subset one example at a time, using a trained discriminator to predict the probability of unchosen input-output examples conditioned on the chosen input-output examples, adding the least probable example to the subset. 
Experiments on a diagram drawing domain show that our approach produces subsets of examples that are small and representative for the constraint solver.", "target": ["In einem Programmsynthesekontext, in dem die Eingabe eine Menge von Beispielen ist, reduzieren wir die Kosten, indem wir eine Teilmenge von repräsentativen Beispielen berechnen.", "Vorschlagen einer Methode zur Identifizierung repräsentativer Beispiele für die Programmsynthese, um die Skalierbarkeit bestehender Constraint-Programming Lösungen zu erhöhen.", "Eine Methode zur Auswahl einer Teilmenge von Beispielen, auf die ein Constraint Solver angewendet wird, um Probleme der Programmsynthese zu lösen.", "In dieser Arbeit wird eine Methode zur Beschleunigung von Mehrzweck-Programmsynthesizern vorgeschlagen."]} +{"source": "Humans possess an ability to abstractly reason about objects and their interactions, an ability not shared with state-of-the-art deep learning models. Relational networks, introduced by Santoro et al. (2017), add the capacity for relational reasoning to deep neural networks, but are limited in the complexity of the reasoning tasks they can address. We introduce recurrent relational networks which increase the suite of solvable tasks to those that require an order of magnitude more steps of relational reasoning. We use recurrent relational networks to solve Sudoku puzzles and achieve state-of-the-art results by solving 96.6% of the hardest Sudoku puzzles, where relational networks fail to solve any. We also apply our model to the BaBi textual QA dataset solving 19/20 tasks which is competitive with state-of-the-art sparse differentiable neural computers. The recurrent relational network is a general purpose module that can augment any neural network model with the capacity to do many-step relational reasoning.", "target": ["Wir stellen Recurrent Relational Networks vor, ein leistungsfähiges und allgemeines neuronales Netzmodul für relationales Denken, und verwenden es, um 96,6% der schwierigsten Sudokus und 19/20 BaBi-Aufgaben zu lösen.", "Einführung rekurrenter relationaler Netze (RRNs), die zu beliebigen neuronalen Netzen hinzugefügt werden können, um die Fähigkeit zur relationalen Schlussfolgerung zu erhöhen.", "Einführung eines tiefen neuronalen Netzes für strukturierte Vorhersagen, das bei Sudoku-Rätseln und der BaBi-Aufgabe die beste Leistung erzielt.", "In diesem Beitrag wird eine Methode beschrieben, die als relationales Netzwerk bezeichnet wird, um tiefe neuronale Netze mit relationalen Schlussfolgerungen zu versehen."]} +{"source": "Empirical risk minimization (ERM), with proper loss function and regularization, is the common practice of supervised classification. In this paper, we study training an arbitrary (from linear to deep) binary classifier from only unlabeled (U) data by ERM. We prove that it is impossible to estimate the risk of an arbitrary binary classifier in an unbiased manner given a single set of U data, but it becomes possible given two sets of U data with different class priors. These two facts answer a fundamental question---what the minimal supervision is for training any binary classifier from only U data. Following these findings, we propose an ERM-based learning method from two sets of U data, and then prove it is consistent. 
Experiments demonstrate the proposed method could train deep models and outperform state-of-the-art methods for learning from two sets of U data.", "target": ["Drei Klassenprioritäten sind alles, was Sie brauchen, um tiefe Modelle aus nur U-Daten zu trainieren, während zwei nicht ausreichen sollten.", "Schlägt einen unvoreingenommenen Schätzer vor, der das Training von Modellen mit schwacher Überwachung auf zwei unbeschrifteten Datensätzen mit bekannten Klassenprioritäten ermöglicht, und erörtert die theoretischen Eigenschaften der Schätzer.", "Eine Methodik für das Training eines beliebigen binären Klassifikators aus nur unbeschrifteten Daten und eine empirische Risikominimierungsmethode für zwei Sätze unbeschrifteter Daten, bei denen Klassenprioritäten gegeben sind."]} +{"source": "Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions. We here introduce a high-performance DNN architecture on ImageNet whose decisions are considerably easier to explain. Our model, a simple variant of the ResNet-50 architecture called BagNet, classifies an image based on the occurrences of small local image features without taking into account their spatial ordering. This strategy is closely related to the bag-of-feature (BoF) models popular before the onset of deep learning and reaches a surprisingly high accuracy on ImageNet (87.6% top-5 for 32 x 32 px features and Alexnet performance for 16 x16 px features). The constraint on local features makes it straight-forward to analyse how exactly each part of the image influences the classification. Furthermore, the BagNets behave similar to state-of-the art deep neural networks such as VGG-16, ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategies.", "target": ["Die Aggregation von Klassenevidenz aus vielen kleinen Bildfeldern reicht aus, um ImageNet zu lösen, führt zu besser interpretierbaren Modellen und kann Aspekte der Entscheidungsfindung von populären DNNs erklären.", "In diesem Beitrag wird eine neuartige und kompakte neuronale Netzarchitektur vorgeschlagen, die die Informationen in Bag-of-Words-Merkmalen nutzt. Der vorgeschlagene Algorithmus verwendet nur die Patch-Informationen unabhängig und führt eine Mehrheitsabstimmung mit unabhängig klassifizierten Patches durch."]} +{"source": "Somatic cancer mutation detection at ultra-low variant allele frequencies (VAFs) is an unmet challenge that is intractable with current state-of-the-art mutation calling methods. Specifically, the limit of VAF detection is closely related to the depth of coverage, due to the requirement of multiple supporting reads in extant methods, precluding the detection of mutations at VAFs that are orders of magnitude lower than the depth of coverage. Nevertheless, the ability to detect cancer-associated mutations in ultra low VAFs is a fundamental requirement for low-tumor burden cancer diagnostics applications such as early detection, monitoring, and therapy nomination using liquid biopsy methods (cell-free DNA). Here we defined a spatial representation of sequencing information adapted for convolutional architecture that enables variant detection at VAFs, in a manner independent of the depth of sequencing. 
This method enables the detection of cancer mutations even in VAFs as low as 10^-4, >2 orders of magnitude below the current state-of-the-art. We validated our method on both simulated plasma and on clinical cfDNA plasma samples from cancer patients and non-cancer controls. This method introduces a new domain within bioinformatics and personalized medicine – somatic whole genome mutation calling for liquid biopsy.", "target": ["Aktuelle somatische Mutationsmethoden funktionieren nicht mit Flüssigbiopsien (d.h. Sequenzierung mit geringer Abdeckung). Wir wenden eine CNN-Architektur auf eine eindeutige Darstellung einer Lesung und deren Bewertung an und zeigen eine signifikante Verbesserung gegenüber früheren Methoden in der Niedrigfrequenz-Einstellung.", "Vorschlagen einer CNN-basierten Lösung namens Kittyhawk für somatische Mutationserkennung bei extrem niedrigen Allelfrequenzen.", "Ein neuer Algorithmus zur Erkennung von Krebsmutationen aus der Sequenzierung zellfreier DNA, der den Sequenzkontext identifiziert, der Sequenzierungsfehler von echten Mutationen unterscheidet.", "In dieser Arbeit wird ein Deep Learning Framework zur Vorhersage von somatischen Mutationen bei extrem niedrigen Frequenzen vorgeschlagen, die bei der Erkennung von Tumoren aus zellfreier DNA auftreten."]} +{"source": "This paper presents the formal release of {\\em MedMentions}, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.", "target": ["Die Arbeit stellt ein neues Gold-Standard-Korpus der biomedizinischen wissenschaftlichen Literatur vor, das manuell mit UMLS Konzept Erwähnungen annotiert wurde.", "Einzelheiten zum Aufbau eines manuell annotierten Datensatzes für biomedizinische Konzepte, der größer ist und von einer umfangreicheren Ontologie abgedeckt wird als bisherige Datensätze.", "Dieses Papier verwendet MedMentions, ein TaggerOne Semi-Markov-Modell für die Ende-zu-Ende Konzepterkennung und -Verknüpfung auf einem Satz von Pubmed-Zusammenfassungen, um Arbeiten mit biomedizinischen Konzepten/Entitäten zu kennzeichnen"]} +{"source": "In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm. It is based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network. Unlike other deep clustering algorithms, no regularization term is needed to avoid data collapsing to a single point. 
Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "target": ["Wir schlagen eine tiefe Clustermethode vor, bei der jedes Cluster anstelle eines Centroids durch einen Autoencoder repräsentiert wird.", "Stellt ein tiefes Clustering vor, das auf einer Mischung von Autoencodern basiert, wobei die Datenpunkte einem Cluster auf der Grundlage des Darstellungsfehlers zugeordnet werden, wenn das Autoencodernetzwerk zu ihrer Darstellung verwendet wurde.", "Ein Deep-Clustering-Ansatz, der ein Autoencoder-Framework verwendet, um eine niedrigdimensionale Einbettung der Daten zu erlernen und gleichzeitig Daten mithilfe eines tiefen neuronalen Netzwerks zu clustern.", "Eine tiefe Clustering-Methode, bei der jedes Cluster mit verschiedenen Autoencodern dargestellt wird, funktioniert durchgängig und kann auch zum Clustern neu eingehender Daten verwendet werden, ohne dass das gesamte Clustering-Verfahren erneut durchgeführt werden muss."]} +{"source": "We propose a new Integral Probability Metric (IPM) between distributions: the Sobolev IPM. The Sobolev IPM compares the mean discrepancy of two distributions for functions (critic) restricted to a Sobolev ball defined with respect to a dominant measure mu. We show that the Sobolev IPM compares two distributions in high dimensions based on weighted conditional Cumulative Distribution Functions (CDF) of each coordinate on a leave one out basis. The Dominant measure mu plays a crucial role as it defines the support on which conditional CDFs are compared. Sobolev IPM can be seen as an extension of the one dimensional Von-Mises Cramer statistics to high dimensional distributions. We show how Sobolev IPM can be used to train Generative Adversarial Networks (GANs). We then exploit the intrinsic conditioning implied by Sobolev IPM in text generation. Finally we show that a variant of Sobolev GAN achieves competitive results in semi-supervised learning on CIFAR-10, thanks to the smoothness enforced on the critic by Sobolev GAN which relates to Laplacian regularization.", "target": ["Wir definieren eine neue Integrale Wahrscheinlichkeitsmetrik (Sobolev IPM) und zeigen, wie sie für das Training von GANs zur Texterzeugung und zum halbüberwachten Lernen verwendet werden kann.", "Schlägt ein neuartiges Regularisierungsschema für GANs vor, das auf einer Sobolev-Norm basiert und Abweichungen zwischen L2-Normen von Ableitungen misst.", "Die Autoren stellen einen anderen GAN-Typ vor, der den typischen Aufbau eines GANs verwendet, aber eine andere Funktionsklasse hat, und geben ein Rezept für das Training von GANs mit dieser Art von Funktionsklasse.", "In dem Beitrag wird eine andere Gradientenstrafe für GAN-Kritiker vorgeschlagen, die die erwartete quadratische Norm des Gradienten auf 1 setzt."]} +{"source": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. 
Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.", "target": ["Wir schlagen einen neuen Ansatz vor, um GANs mit einer Mischung von Generatoren zu trainieren, um das Problem des Zusammenbrechens der Modi zu überwinden.", "Bewältigung des Problems des Modus-Kollapses in GANs unter Verwendung einer eingeschränkten Mischungsverteilung für den Generator und eines Hilfsklassifikators, der die Quellmischungskomponente vorhersagt.", "In der Arbeit wird eine Mischung von Generatoren vorgeschlagen, um GANs ohne zusätzliche Rechenkosten zu trainieren", "Die Autoren zeigen, dass die Verwendung von MGAN, das darauf abzielt, das Problem des Modellzusammenbruchs durch Mischungsgeneratoren zu überwinden, hochmoderne Ergebnisse erzielt."]} +{"source": "Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons. Though, given the lack of sample efficiency in current learning methods, reaching this goal may require substantial research efforts. We introduce the BabyAI research platform, with the goal of supporting investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. Each level gradually leads the agent towards acquiring a combinatorially rich synthetic language, which is a proper subset of English. The platform also provides a hand-crafted bot agent, which simulates a human teacher. We report estimated amount of supervision required for training neural reinforcement and behavioral-cloning agents on some BabyAI levels. 
We put forward strong evidence that current deep learning methods are not yet sufficiently sample-efficient in the context of learning a language with compositional properties.", "target": ["Wir präsentieren die BabyAI-Plattform zur Untersuchung der Dateneffizienz beim Sprachenlernen mit einem Menschen in der Schleife.", "Stellt eine Forschungsplattform mit einem Bot in der Schleife vor, der lernt, Sprachbefehle auszuführen, wobei die Sprache kompositorische Strukturen aufweist.", "Stellt eine Plattform für das Erlernen von Sprachen vor, die jeden Menschen in der Lernschleife durch einen heuristischen Lehrer ersetzt und eine synthetische Sprache verwendet, die einer 2D-Gitterwelt zugeordnet ist."]} +{"source": "Recently, there has been growing interest in methods that perform neural network compression, namely techniques that attempt to substantially reduce the size of a neural network without significant reduction in performance. However, most existing methods are post-processing approaches in that they take a learned neural network as input and output a compressed network by either forcing several parameters to take the same value (parameter tying via quantization) or pruning irrelevant edges (pruning) or both. In this paper, we propose a novel algorithm that jointly learns and compresses a neural network. The key idea in our approach is to change the optimization criteria by adding $k$ independent Gaussian priors over the parameters and a sparsity penalty. We show that our approach is easy to implement using existing neural network libraries, generalizes L1 and L2 regularization and elegantly enforces parameter tying as well as pruning constraints. Experimentally, we demonstrate that our new algorithm yields state-of-the-art compression on several standard benchmarks with minimal loss in accuracy while requiring little to no hyperparameter tuning as compared with related, competing approaches.", "target": ["Ein k-means-Prior in Kombination mit einer L1-Regularisierung führt zu hochmodernen Kompressionsergebnissen.", "Dieses Papier untersucht die weiche Parameterbindung und Kompression von DNNs/CNNs."]} +{"source": "The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. The applicability of these techniques to the hard non-convex optimization problems encountered during training of modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fail, and explore why.", "target": ["Die SVRG-Methode versagt bei modernen Deep-Learning-Problemen.", "In diesem Beitrag wird eine Analyse von SVRG-Methoden vorgestellt, die zeigt, dass Dropout, Batch-Norm und Datenerweiterung (zufällige Ernte/Rotation/Translation) dazu neigen, die Verzerrung und/oder Varianz der Aktualisierungen zu erhöhen.", "Diese Arbeit untersucht die Anwendbarkeit von SVGD auf moderne neuronale Netze und zeigt, dass die naive Anwendung von SVGD in der Regel scheitert."]} +{"source": "The ground-breaking performance obtained by deep convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to extend it for 3D geometric tasks. One of the main challenge in applying CNNs to 3D shape analysis is how to define a natural convolution operator on non-euclidean surfaces. In this paper, we present a method for applying deep learning to 3D surfaces using their spherical descriptors and alt-az anisotropic convolution on 2-sphere. 
A cascade set of geodesic disk filters rotate on the 2-sphere and collect spherical patterns and so to extract geometric features for various 3D shape analysis tasks. We demonstrate theoretically and experimentally that our proposed method has the possibility to bridge the gap between 2D images and 3D shapes with the desired rotation equivariance/invariance, and its effectiveness is evaluated in applications of non-rigid/ rigid shape classification and shape retrieval.", "target": ["Eine Methode zur Anwendung von Deep Learning auf 3D-Oberflächen unter Verwendung ihrer sphärischen Deskriptoren und alt-az anisotroper Convolution auf der 2-Sphäre.", "Stellt ein polares anisotropes Convolution Schema auf einer Einheitskugel vor, bei dem die Filtertranslation durch Filterrotation ersetzt wird.", "Diese Arbeit untersucht tiefes Lernen von 3D-Formen mit alt-az anisotroper 2-Sphären-Faltung."]} +{"source": "Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary or ternary weights, thus removing the need to calculate multiplications and significantly reducing memory size. In this work, we introduce LR-nets (Local reparameterization networks), a new method for training neural networks with discrete weights using stochastic parameters. We show how a simple modification to the local reparameterization trick, previously used to train Gaussian distributed weights, enables the training of discrete weights. Using the proposed training we test both binary and ternary models on MNIST, CIFAR-10 and ImageNet benchmarks and reach state-of-the-art results on most experiments.", "target": ["Training von binären/alternären Netzen durch lokale Umparametrisierung mit der CLT-Approximation.", "Trainiert binäre und ternäre Gewichtsverteilungsnetze unter Verwendung von Backpropagation, um Neuronenvoraktivierungen mit einem Reparametrisierungstrick zu testen.", "In diesem Beitrag wird vorgeschlagen, stochastische Parameter in Kombination mit dem Trick der lokalen Reparametrisierung zu verwenden, um neuronale Netze mit binären oder ternären Gewichten zu trainieren, was zu Ergebnissen auf dem neuesten Stand der Technik führt."]} +{"source": "We present Optimal Completion Distillation (OCD), a training procedure for optimizing sequence to sequence models based on edit distance. OCD is efficient, has no hyper-parameters of its own, and does not require pre-training or joint optimization with conditional log-likelihood. Given a partial sequence generated by the model, we first identify the set of optimal suffixes that minimize the total edit distance, using an efficient dynamic programming algorithm. Then, for each position of the generated sequence, we use a target distribution which puts equal probability on the first token of all the optimal suffixes. 
OCD achieves state-of-the-art performance on end-to-end speech recognition, on both Wall Street Journal and Librispeech datasets, achieving $9.3\\%$ WER and $4.5\\%$ WER, respectively.", "target": ["Optimal Completion Distillation (OCD) ist ein Trainingsverfahren zur Optimierung von Sequenz-zu-Sequenz-Modellen auf der Basis von Edit-Distanz, das bei Ende-zu-Ende Spracherkennungsaufgaben den Stand der Technik erreicht.", "Alternativer Ansatz für das Training von seq2seq-Modellen unter Verwendung eines dynamischen Programms zur Berechnung optimaler Fortsetzungen von vorhergesagten Präfixen.", "Ein Trainingsalgorithmus für autoregressive Modelle, der kein MLE-Vortraining benötigt und direkt aus dem Sampling optimieren kann.", "Die Arbeit geht auf einen Mangel von Sequenz-zu-Sequenz-Modellen ein, die mit Hilfe von Maximum-Likelihood-Schätzungen trainiert werden, und schlägt einen Ansatz vor, der auf Edit-Distanzen und der impliziten Verwendung vorgegebener Label-Sequenzen während des Trainings basiert."]} +{"source": "As an emerging field, federated learning has recently attracted considerable attention.\n Compared to distributed learning in the datacenter setting, federated learning\n has more strict constraints on compute efficiency of the learned model and communication\n cost during the training process. In this work, we propose an efficient\n federated learning framework based on variational dropout. Our approach is able\n to jointly learn a sparse model while reducing the amount of gradients exchanged\n during the iterative training process. We demonstrate the superior performance\n of our approach on achieving significant model compression and communication\n reduction ratios with no accuracy loss.", "target": ["Eine gemeinsame Modell- und Gradientensparsamkeitsmethode für föderiertes Lernen.", "Wendet Variational Dropout an, um die Kommunikationskosten beim verteilten Training neuronaler Netze zu reduzieren, und führt Experimente mit den Datensätzen mnist, cifar10 und svhn durch. ", "Die Autoren schlagen einen Algorithmus vor, der die Kommunikationskosten beim föderierten Lernen reduziert, indem spärliche Gradienten vom Gerät zum Server und zurück gesendet werden.", "Kombiniert einen verteilten Optimierungsalgorithmus mit Variational Dropout, um die von den lokalen Lernern an den Master-Server gesendeten Gradienten zu strecken."]} +{"source": "We prove a multiclass boosting theory for the ResNet architectures which simultaneously creates a new technique for multiclass boosting and provides a new algorithm for ResNet-style architectures. Our proposed training algorithm, BoostResNet, is particularly suitable in non-differentiable architectures. Our method only requires the relatively inexpensive sequential training of T \"shallow ResNets\". We prove that the training error decays exponentially with the depth T if the weak module classifiers that we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under the weak learning condition. 
A generalization error bound based on margin theory is proved and suggests that ResNet could be resistant to overfitting using a network with l_1 norm bounded weights.", "target": ["Wir beweisen eine Multiklassen-Boosting-Theorie für die ResNet-Architekturen, die gleichzeitig eine neue Technik für Multiklassen-Boosting schafft und einen neuen Algorithmus für ResNet-artige Architekturen bietet.", "Präsentiert einen Boosting-Algorithmus für das Training von tiefen Residual Networks, eine Konvergenzanalyse für Trainingsfehler und eine Analyse der Generalisierungsfähigkeit.", "Eine Lernmethode für ResNet unter Verwendung des Boosting-Frameworks, die das Lernen komplexer Netzwerke zerlegt und weniger Rechenaufwand erfordert.", "Die Autoren schlagen das tiefe ResNet als Boosting-Algorithmus vor und behaupten, dieser sei effizienter als die standardmäßige Ende-zu-Ende Backpropagation."]} +{"source": "We consider the problem of learning a one-hidden-layer neural network: we assume the input x is from Gaussian distribution and the label $y = a \\sigma(Bx) + \\xi$, where a is a nonnegative vector and $B$ is a full-rank weight matrix, and $\\xi$ is a noise vector. We first give an analytic formula for the population risk of the standard squared loss and demonstrate that it implicitly attempts to decompose a sequence of low-rank tensors simultaneously. \n\t\n Inspired by the formula, we design a non-convex objective function $G$ whose landscape is guaranteed to have the following properties:\t\n\n1. All local minima of $G$ are also global minima.\n 2. All global minima of $G$ correspond to the ground truth parameters.\n 3. The value and gradient of $G$ can be estimated using samples.\n\t\n With these properties, stochastic gradient descent on $G$ provably converges to the global minimum and learn the ground-truth parameters. We also prove finite sample complexity results and validate the results by simulations.", "target": ["Die Arbeit analysiert die Optimierungslandschaft von einschichtigen neuronalen Netzen und entwirft ein neues Ziel, das nachweislich kein ungewolltes lokales Minimum aufweist. ", "Dieses Papier untersucht das Problem des Lernens von neuronalen Netzen mit einer verborgenen Schicht, stellt eine Verbindung zwischen dem Least squares population loss und dem Hermite-Polynomen her und schlägt eine neue Verlustfunktion vor.", "Eine Tensor-Faktorisierungs-Methode zur Verschlankung eines neuronalen Netzes mit einer versteckten Schicht."]} +{"source": "Open information extraction (OIE) systems extract relations and their\n arguments from natural language text in an unsupervised manner. The resulting\n extractions are a valuable resource for downstream tasks such as knowledge\n base construction, open question answering, or event schema induction. In this\n paper, we release, describe, and analyze an OIE corpus called OPIEC, which was\n extracted from the text of English Wikipedia. OPIEC complements the available\n OIE resources: It is the largest OIE corpus publicly available to date (over\n 340M triples) and contains valuable metadata such as provenance information,\n confidence scores, linguistic annotations, and semantic annotations including\n spatial and temporal information. We analyze the OPIEC corpus by comparing its\n content with knowledge bases such as DBpedia or YAGO, which are also based on\n Wikipedia. 
We found that most of the facts between entities present in OPIEC\n cannot be found in DBpedia and/or YAGO, that OIE facts \n often differ in the level of specificity compared to knowledge base facts, and\n that OIE open relations are generally highly polysemous. We believe that the\n OPIEC corpus is a valuable resource for future research on automated knowledge\n base construction.", "target": ["Ein offener Korpus zur Informationsextraktion und seine eingehende Analyse.", "Erstellt ein neues Korpus für die Informationsextraktion, das größer ist als die bisherigen öffentlichen Korpora und Informationen enthält, die in den bisherigen Korpora nicht vorhanden sind.", "Präsentiert einen Datensatz von Open-IE-Triples, die mit Hilfe eines neuen Extraktionssystems aus Wikipedia gesammelt wurden. ", "Die Arbeit beschreibt die Erstellung eines Open IE-Korpus über die englische Wikipedia durch eine automatische Methode"]} +{"source": "The process of designing neural architectures requires expert knowledge and extensive trial and error.\n While automated architecture search may simplify these requirements, the recurrent neural network (RNN) architectures generated by existing methods are limited in both flexibility and components.\n We propose a domain-specific language (DSL) for use in automated architecture search which can produce novel RNNs of arbitrary depth and width.\n The DSL is flexible enough to define standard architectures such as the Gated Recurrent Unit and Long Short Term Memory and allows the introduction of non-standard RNN components such as trigonometric curves and layer normalization. Using two different candidate generation techniques, random search with a ranking function and reinforcement learning, \nwe explore the novel architectures produced by the RNN DSL for language modeling and machine translation domains.\n The resulting architectures do not follow human intuition yet perform well on their targeted tasks, suggesting the space of usable RNN architectures is far larger than previously assumed.", "target": ["Wir definieren eine flexible DSL für die Generierung von RNN-Architekturen, die RNNs unterschiedlicher Größe und Komplexität zulässt, und schlagen eine Ranking-Funktion vor, die RNNs als rekursive neuronale Netze darstellt und ihre Leistung simuliert, um die vielversprechendsten Architekturen auszuwählen.", "Es wird eine neue Methode zur Erzeugung von RNN-Architekturen vorgestellt, die eine domänenspezifische Sprache für zwei Arten von Generatoren (zufällig und RL-basiert) zusammen mit einer Ranking-Funktion und einem Evaluator verwendet.", "In dieser Arbeit wird die Suche nach guten RNN Cell-Architekturen als Black-Box-Optimierungsproblem dargestellt, bei dem Beispiele als Operatorbaum dargestellt und auf der Grundlage gelernter Funktionen bewertet oder von einem RL-Agenten erzeugt werden.", "In diesem Beitrag wird eine Meta-Lernstrategie für die automatische Architektursuche im Kontext von RNN untersucht, indem ein DSL verwendet wird, die rekurrente RNN-Operationen spezifiziert."]} +{"source": "Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. 
In this work, we develop a new method termed \"WAGE\" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.", "target": ["Wir trainieren und schlussfolgern nur mit ganzen Zahlen mit geringer Bitbreite in DNNs.", "Eine Methode namens WAGE, die alle Operanden und Operatoren in einem neuronalen Netz quantisiert, um die Anzahl der Bits für die Darstellung in einem Netz zu reduzieren.", "Die Autoren schlagen diskrete Gewichte, Aktivierungen, Gradienten und Fehler sowohl beim Training als auch beim Testen von neuronalen Netzen vor."]} +{"source": "Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.", "target": ["In diesem Beitrag entwickeln wir schnelle, umschulungsfreie Sparsifizierungsmethoden, die für die fliegende Sparsifizierung von CNNs in vielen industriellen Kontexten eingesetzt werden können.", "In diesem Papier werden Ansätze für das Pruning von CNNs ohne erneutes Training vorgeschlagen, indem drei Schemata zur Bestimmung der Schwellenwerte für die Pruning-Gewichte eingeführt werden.", "Diese Arbeit beschreibt eine Methode zur Sparsifizierung von CNNs ohne Neutraining."]} +{"source": "Curriculum learning and Self paced learning are popular topics in machine learning that suggest putting the training samples in order by considering their difficulty levels. Studies in these topics show that starting with a small training set and adding new samples according to difficulty levels improves the learning performance. In this paper we show experimentally that we can also obtain good results by adding the samples randomly without a meaningful order. We compared our method with classical training, Curriculum learning, Self paced learning and their reverse ordered versions. Results of the statistical tests show that the proposed method is better than the classical method and similar to the others. 
These results point to a new training regime that removes the process of difficulty level determination in Curriculum and Self paced learning and is as successful as these methods.", "target": ["Wir schlagen vor, dass das Training mit stufenweise wachsenden Mengen eine Optimierung für neuronale Netze darstellt.", "Die Autoren vergleichen das Curriculum Learning mit dem Lernen in einer zufälligen Reihenfolge mit Phasen, die eine neue Stichprobe von Beispielen zu der zuvor zufällig zusammengestellten Menge hinzufügen.", "In diesem Beitrag wird der Einfluss der Reihenfolge im Curriculum und beim selbstgesteuerten Lernen untersucht, und es wird gezeigt, dass die Reihenfolge der Trainingsinstanzen bis zu einem gewissen Grad nicht wichtig ist."]} +{"source": "We study the problem of learning to map, in an unsupervised way, between domains $A$ and $B$, such that the samples $\\vb \\in B$ contain all the information that exists in samples $\\va\\in A$ and some additional information. For example, ignoring occlusions, $B$ can be people with glasses, $A$ people without, and the glasses, would be the added information. When mapping a sample $\\va$ from the first domain to the other domain, the missing information is replicated from an independent reference sample $\\vb\\in B$. Thus, in the above example, we can create, for every person without glasses a version with the glasses observed in any face image. \n\n Our solution employs a single two-pathway encoder and a single decoder for both domains. The common part of the two domains and the separate part are encoded as two vectors, and the separate part is fixed at zero for domain $A$. The loss terms are minimal and involve reconstruction losses for the two domains and a domain confusion term. Our analysis shows that under mild assumptions, this architecture, which is much simpler than the literature guided-translation methods, is enough to ensure disentanglement between the two domains. We present convincing results in a few visual domains, such as no-glasses to glasses, adding facial hair based on a reference image, etc.", "target": ["Ein Bild-zu-Bild-Übersetzungsverfahren, das einem Bild den Inhalt eines anderen Bildes hinzufügt und so ein neues Bild erzeugt.", "Diese Arbeit befasst sich mit der Aufgabe der Übertragung von Inhalten, wobei die Neuigkeit im Verlust liegt."]} +{"source": "Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar systems, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. 
Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.\n", "target": ["Ein Datensatz zum Testen des mathematischen Denkens (und der algebraischen Verallgemeinerung) sowie Ergebnisse zu aktuellen Sequenz-zu-Sequenz-Modellen.", "Es wird ein neuer synthetischer Datensatz zur Bewertung der mathematischen Argumentationsfähigkeit von Sequenz-zu-Sequenz-Modellen vorgestellt und zur Bewertung verschiedener Modelle verwendet.", "Modell zum Lösen grundlegender mathematischer Probleme."]} +{"source": "Convolutional Neural Networks (CNNs) filter the input data using a series of spatial convolution operators with compactly supported stencils and point-wise nonlinearities.\n Commonly, the convolution operators couple features from all channels.\n For wide networks, this leads to immense computational cost in the training of and prediction with CNNs.\n In this paper, we present novel ways to parameterize the convolution more efficiently, aiming to decrease the number of parameters in CNNs and their computational complexity.\n We propose new architectures that use a sparser coupling between the channels and thereby reduce both the number of trainable weights and the computational cost of the CNN.\n Our architectures arise as new types of residual neural network (ResNet) that can be seen as discretizations of a Partial Differential Equations (PDEs) and thus have predictable theoretical properties. Our first architecture involves a convolution operator with a special sparsity structure, and is applicable to a large class of CNNs. Next, we present an architecture that can be seen as a discretization of a diffusion reaction PDE, and use it with three different convolution operators. We outline in our experiments that the proposed architectures, although considerably reducing the number of trainable weights, yield comparable accuracy to existing CNNs that are fully coupled in the channel dimension.\n", "target": ["In diesem Beitrag werden effiziente und ökonomische Parametrisierungen von Convolutional Neural Networks vorgestellt, die durch partielle Differentialgleichungen motiviert sind.", "Es werden vier \"kostengünstige\" Alternativen zur Standard Convolution Operation vorgestellt, die anstelle der Standard Convolution Operation verwendet werden können, um deren Rechenaufwand zu verringern.", "In diesem Beitrag werden Methoden zur Reduzierung der Rechenkosten von CNN-Implementierungen vorgestellt und neue Parametrisierungen von CNN-ähnlichen Architekturen eingeführt, die die Parameterkopplung begrenzen.", "Die Arbeit schlägt eine PDE-basierte Perspektive zum Verständnis und zur Parametrisierung von CNNs vor."]} +{"source": "In this article we use rate-distortion theory, a branch of information theory devoted to the problem of lossy compression, to shed light on an important problem in latent variable modeling of data: is there room to improve the model? One way to address this question is to find an upper bound on the probability (equivalently a lower bound on the negative log likelihood) that the model can assign to some data as one varies the prior and/or the likelihood function in a latent variable model. 
The core of our contribution is to formally show that the problem of optimizing priors in latent variable models is exactly an instance of the variational optimization problem that information theorists solve when computing rate-distortion functions, and then to use this to derive a lower bound on negative log likelihood. Moreover, we will show that if changing the prior can improve the log likelihood, then there is a way to change the likelihood function instead and attain the same log likelihood, and thus rate-distortion theory is of relevance to both optimizing priors as well as optimizing likelihood functions. We will experimentally argue for the usefulness of quantities derived from rate-distortion theory in latent variable modeling by applying them to a problem in image modeling.", "target": ["Verwendung der Theorie der Ratenverzerrung, um zu bestimmen, wie stark ein Modell mit latenten Variablen verbessert werden kann.", "Befasst sich mit Problemen der Optimierung des Priors im Modell der latenten Variablen und der Auswahl der Likelihood-Funktion, indem Kriterien vorgeschlagen werden, die auf einer Untergrenze für die negative Log-Likelihood basieren.", "Stellt ein Theorem vor, das eine untere Schranke für die negative logarithmische Wahrscheinlichkeit der Ratenverzerrung bei der Modellierung latenter Variablen liefert.", "Die Autoren argumentieren, dass die Theorie der Ratenverzerrung für verlustbehaftete Kompression ein natürliches Instrumentarium für die Untersuchung von Modellen mit latenten Variablen bietet und schlagen eine untere Grenze vor."]} +{"source": "Backprop is the primary learning algorithm used in many machine learning algorithms. In practice, however, Backprop in deep neural networks is a highly sensitive learning algorithm and successful learning depends on numerous conditions and constraints. One set of constraints is to avoid weights that lead to saturated units. The motivation for avoiding unit saturation is that gradients vanish and as a result learning comes to a halt. Careful weight initialization and re-scaling schemes such as batch normalization ensure that input activity to the neuron is within the linear regime where gradients are not vanished and can flow. Here we investigate backpropagating error terms only linearly. That is, we ignore the saturation that arise by ensuring gradients always flow. We refer to this learning rule as Linear Backprop since in the backward pass the network appears to be linear. In addition to ensuring persistent gradient flow, Linear Backprop is also favorable when computation is expensive since gradients are never computed. Our early results suggest that learning with Linear Backprop is competitive with Backprop and saves expensive gradient computations.", "target": ["Wir ignorieren Nichtlinearitäten und berechnen keine Gradienten im Rückwärtsdurchlauf, um Berechnungen zu sparen und sicherzustellen, dass Gradienten immer fließen. ", "Der Autor schlug lineare Backprop-Algorithmen vor, um den Gradientenfluss für alle Teile während der Backpropagation zu gewährleisten."]} +{"source": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. 
Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.\n", "target": ["Systematisches Verständnis des diskreten Autoencoders VQ-VAE unter Verwendung von EM und dessen Verwendung zum Entwurf eines nicht-autoregressiven Übersetzungsmodells, das einer starken autoregressiven Basislinie entspricht.", "In diesem Beitrag wird eine neue Art der Interpretation des VQ-VAE vorgestellt und ein neuer Trainingsalgorithmus auf der Grundlage des Soft EM Clustering vorgeschlagen.", "Die Arbeit präsentiert eine alternative Sichtweise auf das Trainingsverfahren für den VQ-VAE unter Verwendung des Soft EM Algorithmus."]} +{"source": "Recent research about margin theory has proved that maximizing the minimum margin like support vector machines does not necessarily lead to better performance, and instead, it is crucial to optimize the margin distribution. In the meantime, margin theory has been used to explain the empirical success of deep networks in recent studies. In this paper, we present ODN (the Optimal margin Distribution Network), a network which embeds a loss function in regard to the optimal margin distribution. We give a theoretical analysis for our method using the PAC-Bayesian framework, which confirms the significance of the margin distribution for classification within the framework of deep networks. In addition, empirical results show that the ODN model always outperforms the baseline cross-entropy loss model consistently across different regularization situations. And our ODN\n model also outperforms the cross-entropy loss (Xent), hinge loss and soft hinge loss model in generalization tasks with limited training data.", "target": ["In diesem Beitrag wird ein tiefes neuronales Netz vorgestellt, in das eine Verlustfunktion in Bezug auf die optimale Randverteilung eingebettet ist, die das Overfitting-Problem theoretisch und empirisch entschärft.", "Präsentiert eine PAC-Bayes'sche Grenze für einen Margenverlust."]} +{"source": "Deep network compression seeks to reduce the number of parameters in the network while maintaining a certain level of performance. Deep network distillation seeks to train a smaller network that matches soft-max performance of a larger network. While both regimes have led to impressive performance for their respective goals, neither provides insight into the importance of a given layer in the original model, which is useful if we are to improve our understanding of these highly parameterized models. In this paper, we present the concept of deep net triage, which individually assesses small blocks of convolution layers to understand their collective contribution to the overall performance, which we call \\emph{criticality}. We call it triage because we assess this criticality by answering the question: what is the impact to the health of the overall network if we compress a block of layers into a single layer.\n We propose a suite of triage methods and compare them on problem spaces of varying complexity. 
We ultimately show that, across these problem spaces, deep net triage is able to indicate the relative importance of different layers. Surprisingly, our local structural compression technique also leads to an improvement in overall accuracy when the final model is fine-tuned globally.", "target": ["Wir versuchen, gelernte Repräsentationen in komprimierten Netzen durch ein experimentelles System zu verstehen, das wir Deep Net Triage nennen.", "Vergleich verschiedener Initialisierungs- und Trainingsmethoden zur Übertragung von Wissen von einem VGG-Netz auf ein kleineres Studentennetz durch Ersetzen von Blöcken von Schichten durch einzelne Schichten.", "In dieser Arbeit werden fünf Methoden für das Triaging oder die Komprimierung von Blockschichten für tiefe Netze vorgestellt.", "Die Arbeit schlägt eine Methode zur Komprimierung eines Blocks von Schichten in einem NN vor, bei der mehrere verschiedene Teilansätze bewertet werden."]} +{"source": "In this paper, we show a phenomenon, which we named ``super-convergence'', where residual networks can be trained using an order of magnitude fewer iterations than is used with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with cyclical learning rates and a large maximum learning rate. Furthermore, we present evidence that training with large learning rates improves performance by regularizing the network. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. The architectures to replicate this work will be made available upon publication.\n", "target": ["Der empirische Nachweis eines neuen Phänomens erfordert neue theoretische Erkenntnisse und ist für die aktive Diskussion in der Literatur über SGD und das Verständnis von Generalisierung von Bedeutung.", "In der Arbeit wird ein Phänomen erörtert, bei dem das Training neuronaler Netze in sehr spezifischen Situationen von einem Zeitplan mit hohen Lernraten stark profitieren kann.", "Die Autoren analysieren das Training von Residual Networks mit großen zyklischen Lernraten und zeigen schnelle Konvergenz mit zyklischen Lernraten und Beweise für große Lernraten, die als Regularisierung wirken."]} +{"source": "Infinite-width neural networks have been extensively used to study the theoretical properties underlying the extraordinary empirical success of standard, finite-width neural networks. Nevertheless, until now, infinite-width networks have been limited to at most two hidden layers. To address this shortcoming, we study the initialisation requirements of these networks and show that the main challenge for constructing them is defining the appropriate sampling distributions for the weights. Based on these observations, we propose a principled approach to weight initialisation that correctly accounts for the functional nature of the hidden layer activations and facilitates the construction of arbitrarily many infinite-width layers, thus enabling the construction of arbitrarily deep infinite-width networks. 
The main idea of our approach is to iteratively reparametrise the hidden-layer activations into appropriately defined reproducing kernel Hilbert spaces and use the canonical way of constructing probability distributions over these spaces for specifying the required weight distributions in a principled way. Furthermore, we examine the practical implications of this construction for standard, finite-width networks. In particular, we derive a novel weight initialisation scheme for standard, finite-width networks that takes into account the structure of the data and information about the task at hand. We demonstrate the effectiveness of this weight initialisation approach on the MNIST, CIFAR-10 and Year Prediction MSD datasets.", "target": ["Wir schlagen eine Methode für die Konstruktion beliebig tiefer Netze mit unendlicher Breite vor, auf deren Grundlage wir ein neuartiges Gewichtungsinitialisierungsschema für Netze mit endlicher Breite ableiten und dessen konkurrenzfähige Leistung demonstrieren.", "Schlägt einen Ansatz zur Initialisierung von Gewichten vor, um unendlich tiefe und unendlich breite Netzwerke zu ermöglichen, mit experimentellen Ergebnissen auf kleinen Datensätzen.", "Vorschlagen tiefer neuronaler Netze von unendlicher Breite."]} +{"source": "Working memory requires information about external stimuli to be represented in the brain even after those stimuli go away. This information is encoded in the activities of neurons, and neural activities change over timescales of tens of milliseconds. Information in working memory, however, is retained for tens of seconds, suggesting the question of how time-varying neural activities maintain stable representations. Prior work shows that, if the neural dynamics are in the ` null space' of the representation - so that changes to neural activity do not affect the downstream read-out of stimulus information - then information can be retained for periods much longer than the time-scale of individual-neuronal activities. The prior work, however, requires precisely constructed synaptic connectivity matrices, without explaining how this would arise in a biological neural network. To identify mechanisms through which biological networks can self-organize to learn memory function, we derived biologically plausible synaptic plasticity rules that dynamically modify the connectivity matrix to enable information storing. Networks implementing this plasticity rule can successfully learn to form memory representations even if only 10% of the synapses are plastic, they are robust to synaptic noise, and they can represent information about multiple stimuli.", "target": ["Wir haben biologisch plausible Lernregeln für die synaptische Plastizität eines rekurrenten neuronalen Netzes zur Speicherung von Reizrepräsentationen abgeleitet. ", "Ein neuronales Netzmodell, das aus rekurrent verbundenen Neuronen und einem oder mehreren Redouts besteht und darauf abzielt, eine bestimmte Ausgabe über die Zeit hinweg beizubehalten.", "In diesem Beitrag wird ein selbstorganisierender Speichermechanismus in einem neuronalen Modell vorgestellt und eine Zielfunktion eingeführt, die die Änderungen des zu speichernden Signals minimiert."]} +{"source": "Generative Adversarial Networks (GANs) have been proposed as an approach to learning generative models. While GANs have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, neither in theory nor in practice. 
In particular, the work in this domain has been focused so far only on understanding the properties of the stationary solutions that this dynamics might converge to, and of the behavior of that dynamics in these solutions' immediate neighborhood.\n\n To address this issue, in this work we take a first step towards a principled study of the GAN dynamics itself. To this end, we propose a model that, on one hand, exhibits several of the common problematic convergence behaviors (e.g., vanishing gradient, mode collapse, diverging or oscillatory behavior), but on the other hand, is sufficiently simple to enable rigorous convergence analysis.\n\n This methodology enables us to exhibit an interesting phenomenon: a GAN with an optimal discriminator provably converges, while guiding the GAN training using only a first order approximation of the discriminator leads to unstable GAN dynamics and mode collapse. This suggests that such usage of the first order approximation of the discriminator, which is a de-facto standard in all the existing GAN dynamics, might be one of the factors that makes GAN training so challenging in practice. Additionally, our convergence result constitutes the first rigorous analysis of a dynamics of a concrete parametric GAN.", "target": ["Um das GAN-Training zu verstehen, definieren wir eine einfache GAN-Dynamik und zeigen die quantitativen Unterschiede zwischen optimalen Updates und Updates erster Ordnung in diesem Modell.", "Die Autoren untersuchen die Auswirkungen von GANs in Situationen, in denen bei jeder Iteration der Diskriminator bis zur Konvergenz trainiert und der Generator mit Gradientenschritten aktualisiert wird, oder in denen einige wenige Gradientenschritte für den Diskriminator und den Generator durchgeführt werden.", "In diesem Beitrag wird die Dynamik des gegnerischen Trainings von GANs auf einem Gaußschen Mischmodell untersucht."]} +{"source": "The machine learning and computer vision community is witnessing an unprecedented rate of new tasks being proposed and addressed, thanks to the power of deep convolutional networks to find complex mappings from X to Y. The advent of each task often accompanies the release of a large-scale human-labeled dataset, for supervised training of the deep network. However, it is expensive and time-consuming to manually label a sufficient amount of training data. Therefore, it is important to develop algorithms that can leverage off-the-shelf labeled datasets to learn useful knowledge for the target task. While previous works mostly focus on transfer learning from a single source, we study multi-source transfer across domains and tasks (MS-DTT), in a semi-supervised setting. We propose GradMix, a model-agnostic method applicable to any model trained with gradient-based learning rule. GradMix transfers knowledge via gradient descent, by weighting and mixing the gradients from all sources during training. Our method follows a meta-learning objective, by assigning layer-wise weights to the source gradients, such that the combined gradient follows the direction that can minimize the loss for a small set of samples from the target dataset. In addition, we propose to adaptively adjust the learning rate for each mini-batch based on its importance to the target task, and a pseudo-labeling method to leverage the unlabeled samples in the target domain. 
We perform experiments on two MS-DTT tasks: digit recognition and action recognition, and demonstrate the advantageous performance of the proposed method against multiple baselines.", "target": ["Wir schlagen eine gradientenbasierte Methode vor, um Wissen aus verschiedenen Quellen über unterschiedliche Bereiche und Aufgaben hinweg zu übertragen.", "In diesem Beitrag wird vorgeschlagen, die Gradienten der Ausgangsdomänen zu kombinieren, um das Lernen in der Zieldomäne zu unterstützen. "]} +{"source": "Bayesian phylogenetic inference is currently done via Markov chain Monte Carlo with simple mechanisms for proposing new states, which hinders exploration efficiency and often requires long runs to deliver accurate posterior estimates. In this paper we present an alternative approach: a variational framework for Bayesian phylogenetic analysis. We approximate the true posterior using an expressive graphical model for tree distributions, called a subsplit Bayesian network, together with appropriate branch length distributions. We train the variational approximation via stochastic gradient ascent and adopt multi-sample based gradient estimators for different latent variables separately to handle the composite latent space of phylogenetic models. We show that our structured variational approximations are flexible enough to provide comparable posterior estimation to MCMC, while requiring less computation due to a more efficient tree exploration mechanism enabled by variational inference. Moreover, the variational approximations can be readily used for further statistical analysis such as marginal likelihood estimation for model comparison via importance sampling. Experiments on both synthetic data and real data Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.", "target": ["Die erste Variational-Bayes-Formulierung der phylogenetischen Inferenz, ein anspruchsvolles Inferenzproblem über Strukturen mit verflochtenen diskreten und kontinuierlichen Komponenten.", "Erforscht eine Näherungslösung für das Problem der Bayes'schen Inferenz von phylogenetischen Bäumen durch die Nutzung kürzlich vorgeschlagener subsplit Bayes'scher Netzwerke und moderner Gradientenschätzer für VI.", "Vorschlagen eines Variationsansatz für die Bayes'sche Posterior-Inferenz in phylogenetischen Bäumen."]} +{"source": "This paper introduces HybridNet, a hybrid neural network to speed-up autoregressive\n models for raw audio waveform generation. As an example, we propose\n a hybrid model that combines an autoregressive network named WaveNet and a\n conventional LSTM model to address speech synthesis. Instead of generating\n one sample per time-step, the proposed HybridNet generates multiple samples per\n time-step by exploiting the long-term memory utilization property of LSTMs. In\n the evaluation, when applied to text-to-speech, HybridNet yields state-of-art performance.\n HybridNet achieves a 3.83 subjective 5-scale mean opinion score on\n US English, largely outperforming the same size WaveNet in terms of naturalness\n and provide 2x speed up at inference.", "target": ["Es handelt sich um eine hybride neuronale Architektur zur Beschleunigung des autoregressiven Modells. 
", "Die Schlussfolgerung lautet, dass ein Modell, das mehrere Zeitschritte gleichzeitig vorhersagt, verwendet werden sollte, um die Modellgröße zu erhöhen, ohne die Inferenzzeit für die sequenzielle Vorhersage zu verlängern.", "In diesem Beitrag wird HybridNet vorgestellt, ein neuronales Sprachsynthese- und Audiosynthesesystem, das das WaveNet-Modell mit einem LSTM kombiniert, mit dem Ziel, ein Modell mit schnellerer Inferenzzeit für die Audioerzeugung anzubieten."]} +{"source": "Visual Interpretation and explanation of deep models is critical towards wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation as well as explanation in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower , a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST , ILSVRC 12, Fashion 144k and an8Flower datasets show that our method produces detailed explanations with good coverage of relevant features of the classes of interest.", "target": ["Interpretation durch Identifizierung der vom Modell gelernten Merkmale, die als Indikatoren für die interessierende Aufgabe dienen. Erklären von Modellentscheidungen durch Hervorheben der Reaktion dieser Merkmale in Testdaten. Objektive Evaluierung der Erklärungen anhand eines kontrollierten Datensatzes.", "In diesem Beitrag wird eine Methode zur Erstellung visueller Erklärungen für die Ergebnisse tiefer neuronaler Netze vorgeschlagen und ein neuer synthetischer Datensatz veröffentlicht.", "Eine Methode für tiefe neuronale Netze, die automatisch relevante Merkmale des Klassensatzes identifiziert und die Interpretation und Erklärung unterstützt, ohne auf zusätzliche Annotationen angewiesen zu sein."]} +{"source": "In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. 
We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.", "target": ["Ein Rahmenwerk zum effizienten Erlernen hochwertiger Satzrepräsentationen.", "Schlägt einen schnelleren Algorithmus für das Lernen von Satzrepräsentationen im Stil von SkipThought aus Korpora geordneter Sätze vor, der den Decoder auf Wortebene durch einen kontrastiven Klassifikationsverlust ersetzt.", "Diese Arbeit schlägt einen Rahmen für das unbeaufsichtigte Lernen von Satzrepräsentationen vor, indem ein Modell der Wahrscheinlichkeit wahrer Kontextsätze relativ zu zufälligen Kandidatensätzen maximiert wird."]} +{"source": "Many regularization methods have been proposed to prevent overfitting in neural networks. Recently, a regularization method has been proposed to optimize the variational lower bound of the Information Bottleneck Lagrangian. However, this method cannot be generalized to regular neural network architectures. We present the activation norm penalty that is derived from the information bottleneck principle and is theoretically grounded in a variational dropout framework. Unlike in previous literature, it can be applied to any general neural network. We demonstrate that this penalty can give consistent improvements to different state of the art architectures both in language modeling and image classification. We present analyses on the properties of this penalty and compare it to other methods that also reduce mutual information.", "target": ["Aus der Perspektive des Informationsengpasses leiten wir eine Normstrafe für den Ausgang des neuronalen Netzes ab.", "Setzt eine Aktivierungsnorm-Strafe ein, eine Regularisierung vom Typ L_2 auf die Aktivierungen, die sich aus dem Prinzip des Informationsengpasses ableitet.", "In diesem Beitrag wird eine Zuordnung zwischen Aktivierungsnorm Strafen und Informationsengpass Frameworks unter Verwendung des Variational Dropout Frameworks erstellt."]} +{"source": "Unsupervised learning of timeseries data is a challenging problem in machine learning. Here, \nwe propose a novel algorithm, Deep Temporal Clustering (DTC), a fully unsupervised method, to naturally integrate dimensionality reduction and temporal clustering into a single end to end learning framework. The algorithm starts with initial cluster estimates using an autoencoder for dimensionality reduction and a novel temporal clustering layer for cluster assignment. Then it jointly optimizes the clustering objective and the dimensionality reduction objective. Based on requirement and application, the temporal clustering layer can be customized with any temporal similarity metric. Several similarity metrics are considered and compared. To gain insight into features that the network has learned for its clustering, we apply a visualization method that generates a heat map of regions of interest in the timeseries. The viability of the algorithm is demonstrated using timeseries data from diverse domains, ranging from earthquakes to sensor data from spacecraft. In each case, we show that our algorithm outperforms traditional methods. 
This performance is attributed to fully integrated temporal dimensionality reduction and clustering criterion.", "target": ["Eine vollständig unbeaufsichtigte Methode, die Dimensionalitätsreduktion und zeitliches Clustering auf natürliche Weise in ein einziges Ende-zu-Ende Lernsystem integriert.", "Schlägt einen Algorithmus vor, der Autoencoder mit Zeitreihendaten-Clustering unter Verwendung einer Netzwerkstruktur integriert, die für Zeitreihendaten geeignet ist.", "Ein Algorithmus für die gemeinsame Durchführung von Dimensionalitätsreduktion und zeitlichem Clustering in einem Deep-Learning-Kontext, der einen Autoencoder und ein Clustering-Ziel verwendet.", "Die Autoren schlugen eine unbeaufsichtigte Zeitreihen-Clustermethode vor, die auf tiefen neuronalen Netzen aufbaut und mit einem Encoder-Decoder und einem Clustermodus ausgestattet ist, um die Zeitreihen zu verkürzen, lokale zeitliche Merkmale zu extrahieren und die kodierten Darstellungen zu erhalten."]} +{"source": "We study many-class few-shot (MCFS) problem in both supervised learning and meta-learning scenarios. Compared to the well-studied many-class many-shot and few-class few-shot problems, MCFS problem commonly occurs in practical applications but is rarely studied. MCFS brings new challenges because it needs to distinguish between many classes, but only a few samples per class are available for training. In this paper, we propose ``memory-augmented hierarchical-classification network (MahiNet)'' for MCFS learning. It addresses the ``many-class'' problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain. MahiNet uses a convolutional neural network (CNN) to extract features, and integrates a memory-augmented attention module with a multi-layer perceptron (MLP) to produce the probabilities over coarse and fine classes. While the MLP extends the linear classifier, the attention module extends a KNN classifier, both together targeting the ''`few-shot'' problem. We design different training strategies of MahiNet for supervised learning and meta-learning. Moreover, we propose two novel benchmark datasets ''mcfsImageNet'' (as a subset of ImageNet) and ''mcfsOmniglot'' (re-splitted Omniglot) specifically for MCFS problem. In experiments, we show that MahiNet outperforms several state-of-the-art models on MCFS classification tasks in both supervised learning and meta-learning scenarios.", "target": ["Ein gedächtniserweitertes neuronales Netzwerk, das das Many-Class Few-Shot Problem angeht, indem es die Klassenhierarchie sowohl beim überwachten Lernen als auch beim Meta-Lernen nutzt.", "In diesem Beitrag werden Methoden vorgestellt, mit denen ein Klassifikator durch grobe bis feine Vorhersagen entlang einer Klassenhierarchie und durch das Lernen eines speicherbasierten KNN-Klassifikators, der während des Lernens falsch beschriftete Instanzen verfolgt, induktiv beeinflusst werden kann.", "In diesem Beitrag wird das Problem der Many-Class Few-Shot Klassifizierung aus der Perspektive des überwachten Lernens und des Meta-Lernens formuliert."]} +{"source": "Learning a better representation with neural networks is a challenging problem, which has been tackled from different perspectives in the past few years. In this work, we focus on learning a representation that would be useful in a clustering task. 
We introduce two novel loss components that substantially improve the quality of produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, supervised and unsupervised, and evaluate the proposed loss components on two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that the approach we propose consistently improves the quality of KMeans clustering in terms of mutual information scores and outperforms previously proposed methods.", "target": ["Eine neuartige Verlustkomponente, die das Netz dazu zwingt, während des Trainings für eine Klassifizierungsaufgabe eine Repräsentation zu erlernen, die für die Clusterbildung gut geeignet ist.", "Dieses Arbeit schlägt zwei Regularisierungsbedingungen vor, die auf einem zusammengesetzten Scharnierverlust über die KL-Divergenz zwischen zwei softmax-normalisierten Eingangsargumenten basieren, um das Lernen von entkoppelten Repräsentationen zu fördern.", "Vorschlag für zwei Regularisierer, die dafür sorgen sollen, dass die in der vorletzten Schicht eines Klassifikators gelernten Repräsentationen besser mit der inhärenten Struktur der Daten übereinstimmen."]} +{"source": "In high dimensions, the performance of nearest neighbor algorithms depends crucially on structure in the data.\n While traditional nearest neighbor datasets consisted mostly of hand-crafted feature vectors, an increasing number of datasets comes from representations learned with neural networks.\n We study the interaction between nearest neighbor algorithms and neural networks in more detail.\n We find that the network architecture can significantly influence the efficacy of nearest neighbor algorithms even when the classification accuracy is unchanged.\n Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms.\n Our modifications lead to learned representations that can accelerate nearest neighbor queries by 5x.", "target": ["Wir zeigen, wie man gute Darstellungen aus der Sicht der Ähnlichkeitssuche erhält.", "Untersucht die Auswirkungen eines Wechsels des Bildklassifikationsteils über dem DNN auf die Fähigkeit, die Deskriptoren mit einem LSH- oder einem kd-Baum-Algorithmus zu indizieren.", "Es wird vorgeschlagen, den Softmax-Kreuzentropieverlust zu verwenden, um ein Netzwerk zu lernen, das versucht, die Winkel zwischen den Eingaben und den entsprechenden Klassenvektoren in einem überwachten Rahmen zu reduzieren."]} +{"source": "Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. 
We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.", "target": ["Wir stellen eine Technik vor, die ein gradientenbasiertes Training von quantisierten neuronalen Netzen ermöglicht.", "Schlägt eine einheitliche und allgemeine Methode für das Training neuronaler Netze mit quantisierten synaptischen Gewichten und Aktivierungen reduzierter Präzision vor.", "Ein neuer Ansatz zur Quantisierung von Aktivierungen, der bei mehreren realen Bildproblemen Stand der Technik oder wettbewerbsfähig ist.", "Verfahren zum Lernen neuronaler Netze mit quantisierten Gewichten und Aktivierungen durch stochastische Quantisierung von Werten und Ersetzen der resultierenden kategorischen Verteilung durch eine kontinuierliche Entspannung."]} +{"source": "In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over recent state-of-the-art.", "target": ["Wir zeigen, dass das Problem des Modus-Kollapses in GANs durch einen Mangel an Informationsaustausch zwischen Beobachtungen in einem Trainingsstapel erklärt werden kann, und schlagen einen verteilungsbasierten Rahmen für den globalen Informationsaustausch zwischen Gradienten vor, der zu einem stabileren und effektiveren adversen Training führt.", "Es wird vorgeschlagen, Diskriminatoren, die nur eine einzige Probe berücksichtigen, durch Diskriminatoren zu ersetzen, die explizit mit Verteilungen von Beispielen arbeiten.", "Theorie über Two-Sample Tests und MMD und wie sie vorteilhaft in den GAN-Rahmen integriert werden können."]} +{"source": "Chemical information extraction is to convert chemical knowledge in text into true chemical database, which is a text processing task heavily relying on chemical compound name identification and standardization. Once a systematic name for a chemical compound is given, it will naturally and much simply convert the name into the eventually required molecular formula. However, for many chemical substances, they have been shown in many other names besides their systematic names which poses a great challenge for this task. In this paper, we propose a framework to do the auto standardization from the non-systematic names to the corresponding systematic names by using the spelling error correction, byte pair encoding tokenization and neural sequence to sequence model. Our framework is trained end to end and is fully data-driven. 
Our standardization accuracy on the test dataset achieves 54.04% which has a great improvement compared to previous state-of-the-art result.", "target": ["Wir haben ein Ende-zu-Ende Framework entwickelt, das ein Sequenz zu Sequenz Modell für die Standardisierung chemischer Namen verwendet.", "Standardisiert nicht systematische Namen in der chemischen Informationsextraktion durch Erstellung eines parallelen Korpus von nicht systematischen und systematischen Namen und Aufbau eines seq2seq-Modells.", "In dieser Arbeit wird eine Methode vorgestellt, mit der nicht systematische Namen chemischer Verbindungen durch eine Kombination von Mechanismen in ihre systematischen Äquivalente übersetzt werden können."]} +{"source": "The training of deep neural networks with Stochastic Gradient Descent (SGD) with a large learning rate or a small batch-size typically ends in flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. This was found to correlate with a good final generalization performance. In this paper we extend previous work by investigating the curvature of the loss surface along the whole training trajectory, rather than only at the endpoint. We find that initially SGD visits increasingly sharp regions, reaching a maximum sharpness determined by both the learning rate and the batch-size of SGD. At this peak value SGD starts to fail to minimize the loss along directions in the loss surface corresponding to the largest curvature (sharpest directions). To further investigate the effect of these dynamics in the training process, we study a variant of SGD using a reduced learning rate along the sharpest directions which we show can improve training speed while finding both sharper and better generalizing solution, compared to vanilla SGD. Overall, our results show that the SGD dynamics in the subspace of the sharpest directions influence the regions that SGD steers to (where larger learning rate or smaller batch size result in wider regions visited), the overall training speed, and the generalization ability of the final model.", "target": ["Die SGD wird zu Beginn des Trainings in einen Bereich gelenkt, in dem ihr Schritt im Vergleich zur Krümmung zu groß ist, was sich auf den Rest des Trainings auswirkt. ", "Analysiert die Beziehung zwischen der Konvergenz/Generalisierung und der Aktualisierung der größten Eigenvektoren der Hessian der empirischen Verluste von DNNs.", "In dieser Arbeit wird die Beziehung zwischen der SGD-Schrittgröße und der Krümmung der Verlustfläche untersucht"]} +{"source": "We introduce a new approach to estimate continuous actions using actor-critic algorithms for reinforcement learning problems. Policy gradient methods usually predict one continuous action estimate or parameters of a presumed distribution (most commonly Gaussian) for any given state which might not be optimal as it may not capture the complete description of the target distribution. Our approach instead predicts M actions with the policy network (actor) and then uniformly sample one action during training as well as testing at each state. This allows the agent to learn a simple stochastic policy that has an easy to compute expected return. 
In all experiments, this facilitates better exploration of the state space during training and converges to a better policy.", "target": ["Wir stellen einen neuartigen Reinforcement Learning Algorithmus vor, der mehrere Handlungen vorhersagt und daraus Proben zieht.", "In dieser Arbeit wird eine einheitliche Mischung aus deterministischen Richtlinien eingeführt, und es wird festgestellt, dass diese Parametrisierung stochastischer Richtlinien DDPG bei mehreren OpenAI-Gym-Benchmarks übertrifft.", "Die Autoren untersuchen eine Methode zur Verbesserung der Leistung von Netzwerken, die mit DDPG trainiert wurden, und zeigen eine verbesserte Leistung bei einer großen Anzahl von standardmäßigen kontinuierlichen Kontrollumgebungen."]} +{"source": "Recently convolutional neural networks (CNNs) achieve great accuracy in visual recognition tasks. DenseNet becomes one of the most popular CNN models due to its effectiveness in feature-reuse. However, like other CNN models, DenseNets also face overfitting problem if not severer. Existing dropout method can be applied but not as effective due to the introduced nonlinear connections. In particular, the property of feature-reuse in DenseNet will be impeded, and the dropout effect will be weakened by the spatial correlation inside feature maps. To address these problems, we craft the design of a specialized dropout method from three aspects, dropout location, dropout granularity, and dropout probability. The insights attained here could potentially be applied as a general approach for boosting the accuracy of other CNN models with similar nonlinear connections. Experimental results show that DenseNets with our specialized dropout method yield better accuracy compared to vanilla DenseNet and state-of-the-art CNN models, and such accuracy boost increases with the model depth.", "target": ["Da wir die Nachteile bei der Anwendung des ursprünglichen Dropout-Verfahrens auf DenseNet erkannt haben, haben wir die Dropout-Methode unter drei Aspekten entwickelt, die auch auf andere CNN-Modelle angewendet werden können.", "Anwendung verschiedener binärer Dropout-Strukturen und Zeitpläne mit dem spezifischen Ziel, die DenseNet-Architektur zu regulieren.", "Vorschlagen einer Pre-Dropout-Technik für densenet, die den Dropout vor der nichtlinearen Aktivierungsfunktion implementiert."]} +{"source": "While extremely successful in several applications, especially with low-level representations; sparse, noisy samples and structured domains (with multiple objects and interactions) are some of the open challenges in most deep models. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice/knowledge for better learning with noisy/sparse samples. 
Our experiments demonstrate how our approach leads to either superior overall performance or faster convergence.", "target": ["Beziehungsbewusste tiefe Modelle zu besserem Lernen mit menschlichem Wissen anleiten.", "In dieser Arbeit wird eine Variante des Säulennetzes vorgeschlagen, die auf der Injektion menschlicher Führung durch Änderung der Berechnungen im Netz beruht.", "Eine Methode zur Einbeziehung menschlicher Ratschläge in das Deep Learning durch die Erweiterung von Column Network, einem graphischen neuronalen Netz für kollektive Klassifizierung."]} +{"source": "Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of work to explain why one can effectively capture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated good classification performance with BNNs, our work explains why these BNNs work in terms of HD geometry. Furthermore, the results and analysis used on BNNs are shown to generalize to neural networks with ternary weights and activations. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks.", "target": ["Die jüngsten Erfolge binärer neuronaler Netze lassen sich auf der Grundlage der Geometrie hochdimensionaler binärer Vektoren verstehen.", "Untersucht numerisch und theoretisch die Gründe für den empirischen Erfolg von binarisierten neuronalen Netzen.", "In diesem Beitrag wird die Wirksamkeit binärer neuronaler Netze analysiert und erläutert, warum die Binarisierung die Leistung des Modells erhalten kann."]} +{"source": "In recent years Convolutional Neural Networks (CNN) have been used extensively for Superresolution (SR). In this paper, we use inverse problem and sparse representation solutions to form a mathematical basis for CNN operations. We show how a single neuron is able to provide the optimum solution for inverse problem, given a low resolution image dictionary as an operator. Introducing a new concept called Representation Dictionary Duality, we show that CNN elements (filters) are trained to be representation vectors and then, during reconstruction, used as dictionaries. 
In the light of theoretical work, we propose a new algorithm which uses two networks with different structures that are separately trained with low and high coherency image patches and show that it performs faster compared to the state-of-the-art algorithms while not sacrificing from performance.", "target": ["Nachdem wir bewiesen haben, dass ein Neuron als inverser Problemlöser für die Superauflösung fungiert und ein Netzwerk von Neuronen garantiert eine Lösung liefert, haben wir eine doppelte Netzwerkarchitektur vorgeschlagen, die schneller als der Stand der Technik ist.", "Erörtert die Verwendung neuronaler Netze für die Superauflösung.", "Eine neue Architektur zur Lösung von Bild-Superauflösungsaufgaben und eine Analyse, die darauf abzielt, eine Verbindung zwischen CNNs zur Lösung von Superauflösung und zur Lösung von spärlichen regularisierten inversen Problemen herzustellen."]} +{"source": "We consider the learning of algorithmic tasks by mere observation of input-output\n pairs. Rather than studying this as a black-box discrete regression problem with\n no assumption whatsoever on the input-output mapping, we concentrate on tasks\n that are amenable to the principle of divide and conquer, and study what are its\n implications in terms of learning.\n This principle creates a powerful inductive bias that we leverage with neural\n architectures that are defined recursively and dynamically, by learning two scale-\n invariant atomic operations: how to split a given input into smaller sets, and how\n to merge two partially solved tasks into a larger partial solution. Our model can be\n trained in weakly supervised environments, namely by just observing input-output\n pairs, and in even weaker environments, using a non-differentiable reward signal.\n Moreover, thanks to the dynamic aspect of our architecture, we can incorporate\n the computational complexity as a regularization term that can be optimized by\n backpropagation. We demonstrate the flexibility and efficiency of the Divide-\n and-Conquer Network on several combinatorial and geometric tasks: convex hull,\n clustering, knapsack and euclidean TSP. Thanks to the dynamic programming\n nature of our model, we show significant improvements in terms of generalization\n error and computational complexity.", "target": ["Dynamisches Modell, das durch schwache Überwachung Teilungs- und Eroberungsstrategien erlernt.", "Schlägt vor, der Architektur eines neuronalen Netzes eine neue induktive Verzerrung hinzuzufügen, indem eine Strategie des Teilens und Eroberns angewendet wird.", "Diese Arbeit untersucht Probleme, die mit einem dynamischen Programmieransatz gelöst werden können, und schlägt eine neuronale Netzwerkarchitektur zur Lösung solcher Probleme vor, die Sequenz-zu-Sequenz Baselines übertrifft.", "Die Arbeit schlägt eine einzigartige Netzwerkarchitektur vor, die Divide-and-Conquer Strategien zur Lösung algorithmischer Aufgaben erlernen kann."]} +{"source": "Within many machine learning algorithms, a fundamental problem concerns efficient calculation of an unbiased gradient wrt parameters $\\boldsymbol{\\gamma}$ for expectation-based objectives $\\mathbb{E}_{q_{\\boldsymbol{\\gamma}} (\\boldsymbol{y})} [f (\\boldsymbol{y}) ]$. Most existing methods either ($i$) suffer from high variance, seeking help from (often) complicated variance-reduction techniques; or ($ii$) they only apply to reparameterizable continuous random variables and employ a reparameterization trick. 
To address these limitations, we propose a General and One-sample (GO) gradient that ($i$) applies to many distributions associated with non-reparameterizable continuous {\\em or} discrete random variables, and ($ii$) has the same low-variance as the reparameterization trick. We find that the GO gradient often works well in practice based on only one Monte Carlo sample (although one can of course use more samples if desired). Alongside the GO gradient, we develop a means of propagating the chain rule through distributions, yielding statistical back-propagation, coupling neural networks to common random variables.", "target": ["Ein Rep-ähnlicher Gradient für nicht reparametrisierbare kontinuierliche/diskrete Verteilungen; weiter verallgemeinert auf tiefe probabilistische Modelle, was zu statistischer Backpropagation führt.", "Stellt einen Gradientenschätzer für erwartungsbasierte Ziele vor, der unvoreingenommen ist, eine geringe Varianz aufweist und sowohl für kontinuierliche als auch für diskrete Zufallsvariablen gilt.", "Eine verbesserte Methode zur Berechnung von Ableitungen des Erwartungswerts und ein neuer Gradientenschätzer mit geringer Varianz, der das Training von generativen Modellen ermöglicht, bei denen Beobachtungen oder latente Variablen diskret sind.", "Entwirft einen Gradienten mit geringer Varianz für Verteilungen im Zusammenhang mit kontinuierlichen oder diskreten Zufallsvariablen."]} +{"source": "Quantum computers promise significant advantages over classical computers for a number of different applications. We show that the complete loss function landscape of a neural network can be represented as the quantum state output by a quantum computer. We demonstrate this explicitly for a binary neural network and, further, show how a quantum computer can train the network by manipulating this state using a well-known algorithm known as quantum amplitude amplification. We further show that with minor adaptation, this method can also represent the meta-loss landscape of a number of neural network architectures simultaneously. We search this meta-loss landscape with the same method to simultaneously train and design a binary neural network.", "target": ["Wir zeigen, dass NN-Parameter- und Hyperparameter-Kostenlandschaften als Quantenzustände mit einem einzigen Quantenschaltkreis erzeugt werden können und dass diese für Training und Meta-Training verwendet werden können.", "Beschreibt eine Methode, bei der ein Rahmen für tiefes Lernen quantisiert werden kann, indem die Zweizustandsform einer Bloch-Kugel/eines Qubits berücksichtigt und ein binäres neuronales Quantennetzwerk erstellt wird.", "In diesem Beitrag wird die Quantenamplitudenverstärkung vorgeschlagen, ein neuer Algorithmus für das Training und die Modellauswahl in binären neuronalen Netzen.", "Schlägt eine neuartige Idee zur Ausgabe eines Quantenzustands vor, der eine vollständige Kostenlandschaft aller Parameter für ein gegebenes binäres neuronales Netz darstellt, indem ein binäres neuronales Quantennetz (QBNN) konstruiert wird."]} +{"source": "Several recent works have developed methods for training classifiers that are certifiably robust against norm-bounded adversarial perturbations. These methods assume that all the adversarial transformations are equally important, which is seldom the case in real-world applications. We advocate for cost-sensitive robustness as the criteria for measuring the classifier's performance for tasks where some adversarial transformation are more important than others. 
We encode the potential harm of each adversarial transformation in a cost matrix, and propose a general objective function to adapt the robust training method of Wong & Kolter (2018) to optimize for cost-sensitive robustness. Our experiments on simple MNIST and CIFAR10 models with a variety of cost matrices show that the proposed approach can produce models with substantially reduced cost-sensitive robust error, while maintaining classification accuracy.", "target": ["Ein allgemeines Verfahren zur Ausbildung eines zertifizierten, kostensensitiven und robusten Klassifizierers gegen negative Einflüsse.", "Berechnet und fügt die Kosten eines adversarial Angriffs in das Optimierungsziel ein, um ein Modell zu erhalten, das kostensensitiv gegen feindliche Angriffe robust ist. ", "Baut auf der seminalen Arbeit von Dalvi et al. auf und erweitert den Ansatz zur zertifizierbaren Robustheit um eine Kostenmatrix, die für jedes Paar von Quelle-Ziel-Klassen angibt, ob das Modell gegenüber gegnerischen Beispielen robust sein sollte."]} +{"source": "Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce normal patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected that produces a neural response as close as possible to the desired response. This requires a technique for computing a distance between the desired response and the achievable response that is meaningful in terms of the visual signal being conveyed. Here we propose a method to learn such a metric on neural responses, directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of the visual input. Using data from electrical stimulation experiments, we demonstrate that this metric may improve the performance of a prosthesis.", "target": ["Verwendung von Triplets zum Erlernen einer Metrik für den Vergleich neuronaler Reaktionen und zur Verbesserung der Leistung einer Prothese.", "Die Autoren entwickeln neue Spike-Train-Abstandsmetriken, einschließlich neuronaler Netze und quadratischer Metriken. Diese Metriken übertreffen nachweislich die naive Hamming-Distanz-Metrik und erfassen implizit einige Strukturen im neuronalen Code.", "Mit Blick auf die Verbesserung neuronaler Prothesen im Auge schlagen die Autoren vor, eine Metrik zwischen neuronalen Antworten entweder durch die Optimierung einer quadratischen Form oder durch ein tiefes neuronales Netz zu lernen."]} +{"source": "We introduce a novel workflow, QCue, for providing textual stimulation during mind-mapping. Mind-mapping is a powerful tool whose intent is to allow one to externalize ideas and their relationships surrounding a central problem. The key challenge in mind-mapping is the difficulty in balancing the exploration of different aspects of the problem (breadth) with a detailed exploration of each of those aspects (depth). Our idea behind QCue is based on two mechanisms: (1) computer-generated automatic cues to stimulate the user to explore the breadth of topics based on the temporal and topological evolution of a mind-map and (2) user-elicited queries for helping the user explore the depth for a given topic.
We present a two-phase study wherein the first phase provided insights that led to the development of our work-flow for stimulating the user through cues and queries. In the second phase, we present a between-subjects evaluation comparing QCue with a digital mind-mapping work-flow without computer intervention. Finally, we present an expert rater evaluation of the mind-maps created by users in conjunction with user feedback.", "target": ["In diesem Beitrag wird eine Methode vorgestellt, mit der Fragen (Hinweise) und Abfragen (Vorschläge) generiert werden können, um den Benutzer beim Mind-Mapping zu unterstützen.", "Stellt ein Werkzeug vor, das das Mind-Mapping durch vorgeschlagenen Kontext in Bezug auf bestehende Knotenpunkte und durch Fragen, die weniger entwickelte Zweige erweitern, unterstützt.", "In diesem Beitrag wird ein Ansatz zur Unterstützung von Menschen bei Mindmapping-Aufgaben vorgestellt, eine Schnittstelle und algorithmische Funktionen zur Unterstützung von Mindmapping entwickelt und eine Evaluierungsstudie durchgeführt."]} +{"source": "The ability to detect when an input sample was not drawn from the training distribution is an important desirable property of deep neural networks. In this paper, we show that a simple ensembling of first and second order deep feature statistics can be exploited to effectively differentiate in-distribution and out-of-distribution samples. Specifically, we observe that the mean and standard deviation within feature maps differs greatly between in-distribution and out-of-distribution samples. Based on this observation, we propose a simple and efficient plug-and-play detection procedure that does not require re-training, pre-processing or changes to the model. The proposed method outperforms the state-of-the-art by a large margin in all standard benchmarking tasks, while being much simpler to implement and execute. Notably, our method improves the true negative rate from 39.6% to 95.3% when 95% of in-distribution (CIFAR-100) are correctly detected using a DenseNet and the out-of-distribution dataset is TinyImageNet resize. The source code of our method will be made publicly available.", "target": ["Erkennung von Beispielen außerhalb der Verteilung durch Verwendung von Merkmalsstatistiken niedriger Ordnung, ohne dass eine Änderung des zugrunde liegenden DNN erforderlich ist.", "Es wird ein Algorithmus zur Erkennung von Beispielen außerhalb der Verteilung vorgestellt, der die laufende Schätzung von Mittelwert und Varianz innerhalb von BatchNorm-Schichten verwendet, um Merkmalsdarstellungen zu konstruieren, die später in einen linearen Klassifikator eingegeben werden.", "Ein Ansatz zur Erkennung von Beispielen außerhalb der Verteilung, bei dem die Autoren vorschlagen, eine logistische Regression über einfache Statistiken jeder Batch-Normalisierungsschicht von CNN zu verwenden.", "In dem Beitrag wird vorgeschlagen, Z-Scores für den Vergleich von ID- und OOD-Stichproben zu verwenden, um zu bewerten, was Deep Nets zu tun versuchen."]} +{"source": "Due to the sharp increase in the severity of the threat imposed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security. However, most of the work in binary code vulnerability detection has relied on handcrafted features which are manually chosen by a select few, knowledgeable domain experts. 
In this paper, we attempt to alleviate this severe binary vulnerability detection bottleneck by leveraging recent advances in deep learning representations and propose the Maximal Divergence Sequential Auto-Encoder. In particular, latent codes representing vulnerable and non-vulnerable binaries are encouraged to be maximally divergent, while still being able to maintain crucial information from the original binaries. We conducted extensive experiments to compare and contrast our proposed methods with the baselines, and the results show that our proposed methods outperform the baselines in all performance measures of interest.", "target": ["Wir schlagen eine neue Methode namens Maximal Divergence Sequential AutoEncoder vor, die die variationale AutoEncoder-Darstellung für die Erkennung von Schwachstellen im Binärcode nutzt.", "In diesem Beitrag wird eine auf einem variationalen Autoencoder basierende Architektur für Code-Einbettungen zur Erkennung von Schwachstellen in binärer Software vorgeschlagen, wobei gelernte Einbettungen effektiver zwischen anfälligem und nicht anfälligem Binärcode unterscheiden können als Basisprogramme.", "In diesem Beitrag wird ein Modell zur automatischen Extraktion von Merkmalen für die Erkennung von Schwachstellen mithilfe von Deep Learning Techniken vorgeschlagen. "]} +{"source": "Modern neural architectures critically rely on attention for mapping structured inputs to sequences. In this paper we show that prevalent attention architectures do not adequately model the dependence among the attention and output tokens across a predicted sequence.\n We present an alternative architecture called Posterior Attention Models that after a principled factorization of the full joint distribution of the attention and output variables, proposes two major changes. First, the position where attention is marginalized is changed from the input to the output. Second, the attention propagated to the next decoding stage is a posterior attention distribution conditioned on the output. Empirically on five translation and two morphological inflection tasks the proposed posterior attention models yield better BLEU score and alignment accuracy than existing attention models.", "target": ["Die Berechnung der Aufmerksamkeit auf der Grundlage der posterioren Verteilung führt zu sinnvollerer Aufmerksamkeit und besserer Leistung.", "Diese Arbeit schlägt ein Sequenz-zu-Sequenz Modell vor, bei dem die Aufmerksamkeit als latente Variable behandelt wird, und leitet neuartige Inferenzverfahren für dieses Modell ab, mit denen Verbesserungen in der maschinellen Übersetzung und bei der Generierung morphologischer Flexionen erzielt werden.", "Diese Arbeit stellt ein neuartiges posteriores Aufmerksamkeitsmodell für seq2seq-Probleme vor."]} +{"source": "The growing interest to implement Deep Neural Networks (DNNs) on resource-bound hardware has motivated innovation of compression algorithms. Using these algorithms, DNN model sizes can be substantially reduced, with little to no accuracy degradation. This is achieved by either eliminating components from the model, or penalizing complexity during training. While both approaches demonstrate considerable compressions, the former often ignores the loss function during compression while the later produces unpredictable compressions. In this paper, we propose a technique that directly minimizes both the model complexity and the changes in the loss function. 
In this technique, we formulate compression as a constrained optimization problem, and then present a solution for it. We will show that using this technique, we can achieve competitive results.", "target": ["Komprimierung trainierter DNN-Modelle durch Minimierung ihrer Komplexität bei gleichzeitiger Begrenzung ihres Verlustes.", "In dieser Arbeit wird eine Methode zur Komprimierung von tiefen neuronalen Netzen mit Genauigkeitseinschränkungen vorgeschlagen.", "In diesem Beitrag wird eine verlustwertbeschränkte k-means Kodierungsmethode für die Netzwerkkompression vorgestellt und ein iterativer Algorithmus zur Modelloptimierung entwickelt."]} +{"source": "Deep neural networks are able to solve tasks across a variety of domains and modalities of data. Despite many empirical successes, we lack the ability to clearly understand and interpret the learned mechanisms that contribute to such effective behaviors and more critically, failure modes. In this work, we present a general method for visualizing an arbitrary neural network's inner mechanisms and their power and limitations. Our dataset-centric method produces visualizations of how a trained network attends to components of its inputs. The computed \"attention masks\" support improved interpretability by highlighting which input attributes are critical in determining output. We demonstrate the effectiveness of our framework on a variety of deep neural network architectures in domains from computer vision and natural language processing. The primary contribution of our approach is an interpretable visualization of attention that provides unique insights into the network's underlying decision-making process irrespective of the data modality.", "target": ["Wir entwickeln eine Technik zur Visualisierung von Aufmerksamkeitsmechanismen in beliebigen neuronalen Netzen. ", "Schlägt vor, ein latentes Aufmerksamkeitsnetz zu erlernen, das helfen kann, die innere Struktur eines tiefen neuronalen Netzes zu visualisieren.", "Die Autoren dieser Arbeit schlagen ein datengesteuertes Black-Box Visualisierungsschema vor. "]} +{"source": "The design of small molecules with bespoke properties is of central importance to drug discovery. However significant challenges yet remain for computational methods, despite recent advances such as deep recurrent networks and reinforcement learning strategies for sequence generation, and it can be difficult to compare results across different works. This work proposes 19 benchmarks selected by subject experts, expands smaller datasets previously used to approximately 1.1 million training molecules, and explores how to apply new reinforcement learning techniques effectively for molecular design. The benchmarks here, built as OpenAI Gym environments, will be open-sourced to encourage innovation in molecular design algorithms and to enable usage by those without a background in chemistry. 
Finally, this work explores recent development in reinforcement-learning methods with excellent sample complexity (the A2C and PPO algorithms) and investigates their behavior in molecular generation, demonstrating significant performance gains compared to standard reinforcement learning techniques.", "target": ["Wir untersuchen eine Vielzahl von RL-Algorithmen für die Molekülgenerierung und definieren neue Benchmarks (die als OpenAI Gym veröffentlicht werden), wobei wir feststellen, dass PPO und ein Hill-Climbing MLE-Algorithmus am besten funktionieren.", "Betrachtet die Modellevaluation für die Molekülgenerierung, indem 19 Benchmarks vorgeschlagen werden, kleine Datensätze zu einem großen, standardisierten Datensatz erweitert werden und untersucht wird, wie RL-Techniken für das Moleküldesign angewendet werden können.", "Dieser Beitrag zeigt, dass die anspruchsvollsten RL-Methoden bei der Modellierung und Synthese von Molekülen weniger effektiv sind als die einfache Hill-Climbing-Technik, mit PPO als Ausnahme."]} +{"source": "Analogical reasoning has been a principal focus of various waves of AI research. Analogy is particularly challenging for machines because it requires relational structures to be represented such that they can be flexibly applied across diverse domains of experience. Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data. We find that the critical factor for inducing such a capacity is not an elaborate architecture, but rather, careful attention to the choice of data and the manner in which it is presented to the model. The most robust capacity for analogical reasoning is induced when networks learn analogies by contrasting abstract relational structures in their input domains, a training method that uses only the input data to force models to learn about important abstract features. Using this technique we demonstrate capacities for complex, visual and symbolic analogy making and generalisation in even the simplest neural network architectures.", "target": ["Die robusteste Fähigkeit zu analogem Denken entsteht, wenn Netze Analogien lernen, indem sie abstrakte relationale Strukturen in ihren Eingabedomänen gegenüberstellen.", "Die Arbeit untersucht die Fähigkeit eines neuronalen Netzes, Analogien zu lernen, und zeigt, dass ein einfaches neuronales Netz in der Lage ist, bestimmte Analogieprobleme zu lösen.", "In diesem Beitrag wird ein Ansatz zum Training neuronaler Netze für analoge Schlussfolgerungen beschrieben, der insbesondere visuelle Analogien und symbolische Analogien berücksichtigt."]} +{"source": "Building chatbots that can accomplish goals such as booking a flight ticket is an unsolved problem in natural language understanding. Much progress has been made to build conversation models using techniques such as sequence2sequence modeling. One challenge in applying such techniques to building goal-oriented conversation models is that maximum likelihood-based models are not optimized toward accomplishing goals. Recently, many methods have been proposed to address this issue by optimizing a reward that contains task status or outcome. However, adding the reward optimization on the fly usually provides little guidance for language construction and the conversation model soon becomes decoupled from the language model. 
In this paper, we propose a new setting in goal-oriented dialogue system to tighten the gap between these two aspects by enforcing model level information isolation on individual models between two agents. Language construction now becomes an important part in reward optimization since it is the only way information can be exchanged. We experimented our models using self-play and results showed that our method not only beat the baseline sequence2sequence model in rewards but can also generate human-readable meaningful conversations of comparable quality.", "target": ["Ein zielorientiertes neuronales Konversationsmodell durch Selbstspiel.", "Ein Selbstspielmodell zur zielorientierten Dialoggenerierung, das eine stärkere Kopplung zwischen der Aufgabenbelohnung und dem Sprachmodell erzwingen soll.", "Dieser Beitrag beschreibt eine Methode zur Verbesserung eines zielorientierten Dialogsystems durch Selbstspiel. "]} +{"source": "Search engine users nowadays heavily depend on query completion and correction to shape their queries. Typically, the completion is done by database lookup which does not understand the context and cannot generalize to prefixes not in the database . In the paper, we propose to use unsupervised deep language models to complete and correct the queries given an arbitrary prefix . We show how to address two main challenges that renders this method practical for large-scale deployment : 1) we propose a method for integrating error correction into the language model completion via a edit-distance potential and a variant of beam search that can exploit these potential functions; and 2) we show how to efficiently perform CPU-based computation to complete the queries, with error correction, in real time (generating top 10 completions within 16 ms). Experiments show that the method substantially increases hit rate over standard approaches, and is capable of handling tail queries.\n", "target": ["Vervollständigung von Suchanfragen in Echtzeit mit LSTM-Sprachmodellen auf Zeichenebene.", "In diesem Papier werden Methoden zur Abfragevervollständigung vorgestellt, die eine Präfixkorrektur und einige technische Details zur Erfüllung bestimmter Latenzanforderungen auf einer CPU umfassen.", "Die Autoren schlagen einen Algorithmus zur Lösung des Problems der Abfragevervollständigung mit Fehlerkorrektur vor und übernehmen die RNN-basierte Modellierung auf Zeichenebene und optimieren den Inferenzteil, um Ziele in Echtzeit zu erreichen."]} +{"source": "RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical convergence properties have remained unclear. Further, recent work has seemed to suggest that these algorithms have worse generalization properties when compared to carefully tuned stochastic gradient descent or its momentum variants. In this work, we make progress towards a deeper understanding of ADAM and RMSProp in two ways. First, we provide proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives, and we give bounds on the running time.\n\n Next we design experiments to empirically study the convergence and generalization properties of RMSProp and ADAM against Nesterov's Accelerated Gradient method on a variety of common autoencoder setups and on VGG-9 with CIFAR-10. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter \\beta_1. 
We show that at very high values of the momentum parameter (\\beta_1 = 0.99) ADAM outperforms a carefully tuned NAG on most of our experiments, in terms of getting lower training and test losses. On the other hand, NAG can sometimes do better when ADAM's \\beta_1 is set to the most commonly used value: \\beta_1 = 0.9, indicating the importance of tuning the hyperparameters of ADAM to get better generalization performance.\n\n We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms, and it also produces iterates which exhibit an increasing trend for the minimum eigenvalue of the Hessian of the loss function at the iterates.", "target": ["In dieser Arbeit beweisen wir die Konvergenz zur Kritikalität von (stochastischem und deterministischem) RMSProp und deterministischem ADAM für glatte, nicht-konvexe Ziele und wir demonstrieren eine interessante beta_1-Sensitivität für ADAM auf Autoencodern. ", "In diesem Beitrag wird eine Konvergenzanalyse von RMSProp und ADAM für glatte, nicht-konvexe Funktionen vorgestellt."]} +{"source": "Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2 algorithm. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples.", "target": ["Die Entwicklung unüberwachter Verteidigungsmechanismen gegen gegnerische Angriffe ist entscheidend, um die Verallgemeinerbarkeit der Verteidigung zu gewährleisten. ", "In dieser Arbeit wird eine Methode zur Erkennung von adversarial Beispielen in einer Deep Learning Klassifizierungsumgebung vorgestellt.", "In diesem Beitrag wird eine unbeaufsichtigte Methode zur Erkennung von adversarial Beispielen in neuronalen Netzen vorgestellt."]} +{"source": "Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10 4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). 
Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.", "target": ["Proxy-less neuronale Architektur Suche nach direkt lernenden Architekturen auf großen Zielaufgaben (ImageNet) bei gleichzeitiger Reduzierung der Kosten auf das gleiche Niveau des normalen Trainings.", "Diese Arbeit befasst sich mit dem Problem der Architektursuche und versucht insbesondere, dies zu tun, ohne auf \"Proxy\"-Aufgaben trainieren zu müssen, bei denen das Problem durch eine geringere Optimierung, architektonische Komplexität oder Datenmenge vereinfacht ist."]} +{"source": "With the recently rapid development in deep learning, deep neural networks have been widely adopted in many real-life applications. However, deep neural networks are also known to have very little control over its uncertainty for test examples, which potentially causes very harmful and annoying consequences in practical scenarios. In this paper, we are particularly interested in designing a higher-order uncertainty metric for deep neural networks and investigate its performance on the out-of-distribution detection task proposed by~\\cite{hendrycks2016baseline}. Our method first assumes there exists a underlying higher-order distribution $\\mathcal{P}(z)$ , which generated label-wise distribution $\\mathcal{P}(y)$ over classes on the K-dimension simplex, and then approximate such higher-order distribution via parameterized posterior function $p_{\\theta}(z|x)$ under variational inference framework, finally we use the entropy of learned posterior distribution $p_{\\theta}(z|x)$ as uncertainty measure to detect out-of-distribution examples. However , we identify the overwhelming over-concentration issue in such a framework, which greatly hinders the detection performance. Therefore , we further design a log-smoothing function to alleviate such issue to greatly increase the robustness of the proposed entropy-based uncertainty measure. 
Through comprehensive experiments on various datasets and architectures, our proposed variational Dirichlet framework with entropy-based uncertainty measure is consistently observed to yield significant improvements over many baseline systems.", "target": ["Ein neuer Rahmen auf der Grundlage der Variationsinferenz für die Erkennung von Verteilungsabweichungen.", "Beschreibt einen probabilistischen Ansatz zur Quantifizierung der Unsicherheit bei DNN-Klassifizierungsaufgaben, der andere SOTA-Methoden bei der Erkennung von Abweichungen von der Verteilung übertrifft.", "Ein neues Framework für die Erkennung von Out-of-Distribution, basierend auf variabler Inferenz und einer priorisierten Dirichlet-Verteilung, der den aktuellen Stand der Technik anhand verschiedener Datensätze darstellt.", "Out-of-Distribution Erkennung durch eine neue Methode zur Annäherung an die Konfidenzverteilung der Klassifizierungswahrscheinlichkeit unter Verwendung der Variationsinferenz der Dirichlet-Verteilung."]} +{"source": "Intelligent agents can learn to represent the action spaces of other agents simply by observing them act. Such representations help agents quickly learn to predict the effects of their own actions on the environment and to plan complex action sequences. In this work, we address the problem of learning an agent’s action space purely from visual observation. We use stochastic video prediction to learn a latent variable that captures the scene's dynamics while being minimally sensitive to the scene's static content. We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions. We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP). We show the applicability of our method to synthetic settings and its potential to capture action spaces in complex, realistic visual settings. When used in a semi-supervised setting, our learned representations perform comparably to existing fully supervised methods on tasks such as action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action labels. Project website: https://daniilidis-group.github.io/learned_action_spaces", "target": ["Wir lernen eine Repräsentation des Handlungsraums eines Agenten aus rein visuellen Beobachtungen. Wir verwenden einen rekurrenten latenten Variablenansatz mit einem neuartigen Kompositionsverlust.", "Schlägt ein kompositionelles latentes Variablenmodell vor, um Modelle zu erlernen, die vorhersagen, was als Nächstes in Szenarien geschieht, in denen Handlungskennzeichen nicht in großer Zahl verfügbar sind.", "Ein auf variationalem IB basierender Ansatz zum Erlernen von Handlungsrepräsentationen direkt aus Videos von ausgeführten Handlungen, der eine bessere Effizienz nachfolgender Lernmethoden erzielt und gleichzeitig eine geringere Menge an Videos mit Handlungskennzeichnungen erfordert.", "In diesem Beitrag wird ein Ansatz zur Videovorhersage vorgeschlagen, der autonom einen Aktionsraum findet, der Unterschiede zwischen aufeinanderfolgenden Bildern kodiert."]} +{"source": "When autonomous agents interact in the same environment, they must often cooperate to achieve their goals. One way for agents to cooperate effectively is to form a team, make a binding agreement on a joint plan, and execute it. 
However, when agents are self-interested, the gains from team formation must be allocated appropriately to incentivize agreement. Various approaches for multi-agent negotiation have been proposed, but typically only work for particular negotiation protocols. More general methods usually require human input or domain-specific data, and so do not scale. To address this, we propose a framework for training agents to negotiate and form teams using deep reinforcement learning. Importantly, our method makes no assumptions about the specific negotiation protocol, and is instead completely experience driven. We evaluate our approach on both non-spatial and spatially extended team-formation negotiation environments, demonstrating that our agents beat hand-crafted bots and reach negotiation outcomes consistent with fair solutions predicted by cooperative game theory. Additionally, we investigate how the physical location of agents influences negotiation outcomes.", "target": ["Reinforcement Learning kann verwendet werden, um Agenten zu trainieren, die Teambildung über viele Verhandlungsprotokolle hinweg zu verhandeln.", "Diese Arbeit untersucht tiefes Multi-Agenten-RL in Umgebungen, in denen alle Agenten kooperieren müssen, um eine Aufgabe zu erfüllen (z.B. Suche und Rettung, Multiplayer-Videospiele), und verwendet einfache kooperative gewichtete Abstimmungsspiele, um die Wirksamkeit von tiefem RL zu untersuchen und um Lösungen, die durch tiefes RL gefunden wurden, mit einer fairen Lösung zu vergleichen.", "Ein Ansatz des Reinforcement Learnings für die Aushandlung von Koalitionen in kooperativen spieltheoretischen Kontexten, der in Fällen verwendet werden kann, in denen unbegrenzte Trainingssimulationen verfügbar sind."]} +{"source": "Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.", "target": ["Unüberwachte Methoden zum Auffinden, Analysieren und Kontrollieren wichtiger Neuronen in der NMT.", "In diesem Beitrag werden unbeaufsichtigte Ansätze zur Entdeckung wichtiger Neuronen in neuronalen maschinellen Übersetzungssystemen vorgestellt und die von diesen Neuronen kontrollierten linguistischen Eigenschaften analysiert.", "Unüberwachte Methoden zur Einstufung von Neuronen in der maschinellen Übersetzung, bei denen wichtige Neuronen identifiziert und zur Steuerung der MÜ-Ausgabe verwendet werden."]} +{"source": "Recent state-of-the-art reinforcement learning algorithms are trained under the goal of excelling in one specific task. Hence, both environment and task specific knowledge are entangled into one framework. However, there are often scenarios where the environment (e.g. the physical world) is fixed while only the target task changes. Hence, borrowing the idea from hierarchical reinforcement learning, we propose a framework that disentangles task and environment specific knowledge by separating them into two units. 
The environment-specific unit handles how to move from one state to the target state; and the task-specific unit plans for the next target state given a specific task. The extensive results in simulators indicate that our method can efficiently separate and learn two independent units, and also adapt to a new task more efficiently than the state-of-the-art methods.", "target": ["Wir schlagen einen DRL-Rahmen vor, der aufgaben- und umgebungsspezifisches Wissen voneinander trennt.", "Die Autoren schlagen vor, das Reinforcement Learning in eine PATH-Funktion und eine GOAL-Funktion zu zerlegen.", "Eine modulare Architektur mit dem Ziel, umweltspezifisches Wissen und aufgabenspezifisches Wissen in verschiedene Module aufzuteilen, die dem Standard-A3C für ein breites Spektrum von Aufgaben entsprechen."]} +{"source": "Modelling 3D scenes from 2D images is a long-standing problem in computer vision with implications in, e.g., simulation and robotics. We propose pix2scene, a deep generative-based approach that implicitly models the geometric properties of a scene from images. Our method learns the depth and orientation of scene points visible in images. Our model can then predict the structure of a scene from various, previously unseen view points. It relies on a bi-directional adversarial learning mechanism to generate scene representations from a latent code, inferring the 3D representation of the underlying scene geometry. We showcase a novel differentiable renderer to train the 3D model in an end-to-end fashion, using only images. We demonstrate the generative ability of our model qualitatively on both a custom dataset and on ShapeNet. Finally, we evaluate the effectiveness of the learned 3D scene representation in supporting a 3D spatial reasoning.", "target": ["pix2scene: ein tiefgreifender generativer Ansatz zur impliziten Modellierung der geometrischen Eigenschaften einer 3D-Szene aus Bildern.", "Untersucht die Erklärung von Szenen mit Surfels in einem neuronalen Erkennungsmodell und demonstriert Ergebnisse zur Bildrekonstruktion, Synthese und mentalen Formrotation.", "Die Autoren stellen eine Methode zur Erstellung eines 3D-Szenenmodells anhand eines 2D-Bildes und einer Kameraposition unter Verwendung eines selbst-superfizierten Modells vor."]} +{"source": "Identifying the relations that connect words is an important step towards understanding human languages and is useful for various NLP tasks such as knowledge base completion and analogical reasoning. Simple unsupervised operators such as vector offset between two-word embeddings have shown to recover some specific relationships between those words, if any. Despite this, how to accurately learn generic relation representations from word representations remains unclear. We model relation representation as a supervised learning problem and learn parametrised operators that map pre-trained word embeddings to relation representations. We propose a method for learning relation representations using a feed-forward neural network that performs relation prediction. Our evaluations on two benchmark datasets reveal that the penultimate layer of the trained neural network-based relational predictor acts as a good representation for the relations between words.", "target": ["Die Identifizierung der Beziehungen, die Wörter verbinden, ist für verschiedene NLP-Aufgaben wichtig. 
Wir modellieren die Darstellung von Beziehungen als überwachtes Lernproblem und lernen parametrisierte Operatoren, die vortrainierte Worteinbettungen auf Beziehungsrepräsentationen abbilden.", "In diesem Beitrag wird eine neuartige Methode zur Darstellung lexikalischer Beziehungen als Vektoren vorgestellt, bei der nur vortrainierte Worteinbettungen und eine neuartige Verlustfunktion für Wortpaare verwendet werden.", "Eine neuartige Lösung für das Problem der Beziehungskomposition, wenn Sie bereits trainierte Wort-/Entitätseinbettungen haben und nur daran interessiert sind, zu lernen, Beziehungsrepräsentationen zu komponieren."]} +{"source": "Recurrent neural networks (RNNs) are important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "target": ["Wir schlagen vor, zwei identische Kopien eines rekurrenten neuronalen Netzes (mit gemeinsamen Parametern) mit unterschiedlichen Dropout Masken zu trainieren und dabei die Differenz zwischen ihren (pre-softmax) Vorhersagen zu minimieren.", "Präsentiert Fraternal Dropout als Verbesserung gegenüber Expectation-linear Dropout in Bezug auf Konvergenz und demonstriert die Nützlichkeit von Fraternal Dropout an einer Reihe von Aufgaben und Datensätzen."]} +{"source": "We propose a novel approach for deformation-aware neural networks that learn the weighting and synthesis of dense volumetric deformation fields. Our method specifically targets the space-time representation of physical surfaces from liquid simulations. Liquids exhibit highly complex, non-linear behavior under changing simulation conditions such as different initial conditions. Our algorithm captures these complex phenomena in two stages: a first neural network computes a weighting function for a set of pre-computed deformations, while a second network directly generates a deformation field for refining the surface. Key for successful training runs in this setting is a suitable loss function that encodes the effect of the deformations, and a robust calculation of the corresponding gradients. To demonstrate the effectiveness of our approach, we showcase our method with several complex examples of flowing liquids with topology changes. Our representation makes it possible to rapidly generate the desired implicit surfaces. 
We have implemented a mobile application to demonstrate that real-time interactions with complex liquid effects are possible with our approach.", "target": ["Gewichtung und Verformung von Raum-Zeit-Datensätzen für hocheffiziente Annäherungen an das Flüssigkeitsverhalten lernen.", "Ein auf einem neuronalen Netz basierendes Modell wird zur Interpolation von Simulationen für neuartige Szenenbedingungen aus dicht registrierten impliziten 4D-Oberflächen für eine strukturierte Szene verwendet.", "In diesem Beitrag wird ein gekoppelter Deep-Learning Ansatz zur Erzeugung realistischer Flüssigkeitssimulationsdaten vorgestellt, die für Echtzeit-Entscheidungsunterstützungsanwendungen nützlich sein können.", "In diesem Beitrag wird ein Deep-Learning Ansatz für physikalische Simulationen vorgestellt, der zwei Netzwerke zur Synthese von 4D-Daten kombiniert, die physikalische 3D-Simulationen darstellen."]} +{"source": "This is an empirical paper which constructs color invariant networks and evaluates their performances on a realistic data set. The paper studies the simplest possible case of color invariance: invariance under pixel-wise permutation of the color channels. Thus the network is aware not of the specific color object, but its colorfulness. The data set introduced in the paper consists of images showing crashed cars from which ten classes were extracted. An additional annotation was done which labeled whether the car shown was red or non-red. The networks were evaluated by their performance on the classification task. With the color annotation we altered the color ratios in the training data and analyzed the generalization capabilities of the networks on the unaltered test data. We further split the test data in red and non-red cars and did a similar evaluation. It is shown in the paper that an pixel-wise ordering of the rgb-values of the images performs better or at least similarly for small deviations from the true color ratios. The limits of these networks are also discussed.", "target": ["Wir konstruieren und bewerten farbinvariante neuronale Netze auf einem neuen realistischen Datensatz.", "Schlägt eine Methode vor, um neuronale Netze für die Bilderkennung farbinvariant zu machen, und evaluiert sie anhand des cifar 10-Datensatzes.", "Die Autoren untersuchen eine modifizierte Eingabeschicht, die zu farbinvarianten Netzen führt, und zeigen, dass bestimmte farbinvariante Eingabeschichten die Genauigkeit bei Testbildern mit einer anderen Farbverteilung als die Trainingsbilder verbessern können.", "Die Autoren testen ein CNN auf Bildern mit Farbkanälen, die so verändert wurden, dass sie gegenüber Permutationen invariant sind, wobei die Leistung nicht allzu sehr beeinträchtigt wurde. "]} +{"source": "Expressive efficiency refers to the relation between two architectures A and B, whereby any function realized by B could be replicated by A, but there exists functions realized by A, which cannot be replicated by B unless its size grows significantly larger. For example, it is known that deep networks are exponentially efficient with respect to shallow networks, in the sense that a shallow network must grow exponentially large in order to approximate the functions represented by a deep network of polynomial size. 
In this work, we extend the study of expressive efficiency to the attribute of network connectivity and in particular to the effect of \"overlaps\" in the convolutional process, i.e., when the stride of the convolution is smaller than its filter size (receptive field).\n To theoretically analyze this aspect of network's design, we focus on a well-established surrogate for ConvNets called Convolutional Arithmetic Circuits (ConvACs), and then demonstrate empirically that our results hold for standard ConvNets as well. Specifically, our analysis shows that having overlapping local receptive fields, and more broadly denser connectivity, results in an exponential increase in the expressive capacity of neural networks. Moreover, while denser connectivity can increase the expressive capacity, we show that the most common types of modern architectures already exhibit exponential increase in expressivity, without relying on fully-connected layers.", "target": ["Wir analysieren, wie der Grad der Überlappungen zwischen den rezeptiven Feldern eines Convolutional Networks seine Ausdruckskraft beeinflusst.", "Die Arbeit untersucht die Ausdruckskraft, die durch \"Überlappung\" in Convolution Layers von DNNs bereitgestellt wird, indem lineare Aktivierungen mit Produktpooling berücksichtigt werden.", "In diesem Beitrag wird die Ausdrucksfähigkeit von Convolutional Arithmetic Circuits analysiert und gezeigt, dass eine exponentiell große Anzahl von nicht überlappenden ConvACs erforderlich ist, um den Gittertensor eines überlappenden ConvACs zu approximieren."]} +{"source": "We provide a theoretical algorithm for checking local optimality and escaping saddles at nondifferentiable points of empirical risks of two-layer ReLU networks. Our algorithm receives any parameter value and returns: local minimum, second-order stationary point, or a strict descent direction. The presence of M data points on the nondifferentiability of the ReLU divides the parameter space into at most 2^M regions, which makes analysis difficult. By exploiting polyhedral geometry, we reduce the total computation down to one convex quadratic program (QP) for each hidden node, O(M) (in)equality tests, and one (or a few) nonconvex QP. For the last QP, we show that our specific problem can be solved efficiently, in spite of nonconvexity. In the benign case, we solve one equality constrained QP, and we prove that projected gradient descent solves it exponentially fast. In the bad case, we have to solve a few more inequality constrained QPs, but we prove that the time complexity is exponential only in the number of inequality constraints. 
Our experiments show that either the benign case or the bad case with very few inequality constraints occurs, implying that our algorithm is efficient in most cases.", "target": ["Ein theoretischer Algorithmus zum Testen der lokalen Optimalität und zum Extrahieren von Abstiegsrichtungen an nicht differenzierbaren Punkten von empirischen Risiken von zweischichtigen ReLU-Netzen.", "Schlägt einen Algorithmus vor, um zu prüfen, ob ein gegebener Punkt ein verallgemeinerter stationärer Punkt zweiter Ordnung ist.", "Ein theoretischer Algorithmus, der die Lösung von konvexen und nicht-konvexen quadratischen Programmen beinhaltet, um die lokale Optimalität zu überprüfen und Sattelpunkte beim Training von zweischichtigen ReLU-Netzen zu vermeiden.", "Der Autor schlägt eine Methode vor, mit der geprüft werden kann, ob ein Punkt ein stationärer Punkt ist oder nicht, und klassifiziert stationäre Punkte dann entweder als lokale Minimalpunkte oder als stationäre Punkte zweiter Ordnung."]} +{"source": "We present a new technique for learning visual-semantic embeddings for cross-modal retrieval. Inspired by the use of hard negatives in structured prediction, and ranking loss functions used in retrieval, we introduce a simple change to common loss functions used to learn multi-modal embeddings. That, combined with fine-tuning and the use of augmented data, yields significant gains in retrieval performance. We showcase our approach, dubbed VSE++, on the MS-COCO and Flickr30K datasets, using ablation studies and comparisons with existing methods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8% in caption retrieval, and 11.3% in image retrieval (based on R@1).", "target": ["Ein neuer, auf relativ harten Negativen basierender Verlust, der die beste Leistung bei der Suche nach Bildunterschriften erzielt.", "Erlernen der gemeinsamen Einbettung von Sätzen und Bildern unter Verwendung von Tripelverlusten, die auf die härtesten Negative angewandt werden, anstatt über alle Tripel zu mitteln."]} +{"source": "We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. 
In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising when compared to those trained using traditional backpropagation techniques, both in terms of training speed, as well as feature extraction and reconstruction performance.", "target": ["Wir nutzen das Prinzip der alternierenden Minimierung, um eine effektive neue Technik zum Trainieren von Deep Autoencodern bereitzustellen.", "Alternierendes Minimierungs Framework für das Training von Autoencoder und Encoder-Decoder Netzwerken.", "Die Autoren erforschen einen alternierenden Optimierungsansatz für das Training von Autoencodern, wobei jede Schicht als verallgemeinertes lineares Modell behandelt wird, und schlagen vor, den stochastischen normalisierten GD als Minimierungsalgorithmus in jeder Phase zu verwenden."]} +{"source": "We develop new algorithms for estimating heterogeneous treatment effects, combining recent developments in transfer learning for neural networks with insights from the causal inference literature. By taking advantage of transfer learning, we are able to efficiently use different data sources that are related to the same underlying causal mechanisms. We compare our algorithms with those in the extant literature using extensive simulation studies based on large-scale voter persuasion experiments and the MNIST database. Our methods can perform an order of magnitude better than existing benchmarks while using a fraction of the data.", "target": ["Transferlernen zur Schätzung kausaler Effekte unter Verwendung neuronaler Netze.", "Entwicklung von Algorithmen zur Schätzung des bedingten durchschnittlichen Behandlungseffekts anhand von Hilfsdatensätzen in verschiedenen Umgebungen, sowohl mit als auch ohne Basis-Lerner.", "Die Autoren schlagen Methoden vor, um eine neuartige Aufgabe des Transfer-Lernens für die Schätzung der CATE-Funktion anzugehen, und bewerten sie anhand einer synthetischen Umgebung und eines realen experimentellen Datensatzes.", "Verwendung der Regression mit neuronalen Netzen und Vergleich von Transfer-Learning-Konzepten zur Schätzung eines bedingten durchschnittlichen Behandlungseffekts unter der Annahme der String-Ignorierbarkeit."]} +{"source": "Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or \"motifs\", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. 
In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.", "target": ["Wir stellen LeMoNADe vor, eine durchgängig erlernte Methode zur Motiverkennung, die direkt auf Calcium Bildgebungsvideos arbeitet.", "Dieses Papier schlägt ein VAE-ähnliches Modell zur Identifizierung von Motiven aus Calcium Bildgebungsvideos vor, das sich auf Bernoulli-Variablen stützt und einen Gumbel-Softmax-Trick zur Inferenz benötigt."]} +{"source": "A noisy and diverse demonstration set may hinder the performance of an agent aiming to acquire certain skills via imitation learning. However, state-of-the-art imitation learning algorithms often assume the optimality of the given demonstration set.\n In this paper, we address this optimality assumption by learning only from the most suitable demonstrations in a given set. The suitability of a demonstration is estimated by whether imitating it produces desirable outcomes for achieving the goals of the tasks. For more efficient demonstration suitability assessments, the learning agent should be capable of imitating a demonstration as quickly as possible, which shares a similar spirit with fast adaptation in the meta-learning regime. Our framework, thus built on top of Model-Agnostic Meta-Learning, evaluates how desirable the imitated outcomes are after adaptation to each demonstration in the set. The resulting assessments hence enable us to select suitable demonstration subsets for acquiring better imitated skills. The videos related to our experiments are available at: https://sites.google.com/view/deepdj", "target": ["Wir schlagen einen Rahmen vor, um eine gute Strategie durch Imitationslernen aus einer verrauschten Demonstrationsmenge zu erlernen, indem wir ein Meta-Training für die Bewertung der Demonstrationseignung durchführen.", "Trägt einen MAML-basierten Algorithmus zum Imitation Learning bei, der automatisch feststellt, ob die angebotenen Demonstrationen \"geeignet\" sind.", "Eine Methode zum Imitationslernen aus einer Menge von Demonstrationen, die unbrauchbares Verhalten enthält, die die nützlichen Demonstrationen durch ihre Leistungsgewinne zum Zeitpunkt des Meta-Trainings auswählt."]} +{"source": "We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn a CiGM, if the generator architecture is structured based on a given causal graph. We consider the application of conditional and interventional sampling of face images with binary feature labels, such as mustache, young. We preserve the dependency structure between the labels with a given causal graph. We devise a two-stage procedure for learning a CiGM over the labels and the image. First we train a CiGM over the binary labels using a Wasserstein GAN where the generator neural network is consistent with the causal graph between the labels. Later, we combine this with a conditional GAN to generate images conditioned on the binary labels. We propose two new conditional GAN architectures: CausalGAN and CausalBEGAN. We show that the optimal generator of the CausalGAN, given the labels, samples from the image distributions conditioned on these labels. The conditional GAN combined with a trained CiGM for the labels is then a CiGM over the labels and the generated image. 
We show that the proposed architectures can be used to sample from observational and interventional image distributions, even for interventions which do not naturally occur in the dataset.", "target": ["Wir führen kausale implizite generative Modelle ein, die aus bedingten und intervenierenden Verteilungen abfragen können, und schlagen außerdem zwei neue bedingte GANs vor, die wir für ihr Training verwenden.", "Verfahren zur Kombination eines zufälligen Graphen, der die Abhängigkeitsstruktur von Etiketten beschreibt, mit zwei bedingten GAN-Architekturen, die Bilder erzeugen, die sich auf das binäre Label beziehen.", "Die Autoren befassen sich mit der Frage des Lernens eines kausalen Modells zwischen Bildvariablen und dem Bild selbst aus Beobachtungsdaten, wenn eine kausale Struktur zwischen Bildbezeichnungen gegeben ist."]} +{"source": "Self-normalizing discriminative models approximate the normalized probability of a class without having to compute the partition function. This property is useful to computationally-intensive neural network classifiers, as the cost of computing the partition function grows linearly with the number of classes and may become prohibitive. In particular, since neural language models may deal with up to millions of classes, their self-normalization properties received notable attention. Several\n recent studies empirically found that language models, trained using Noise Contrastive Estimation (NCE), exhibit self-normalization, but could not explain why. In this study, we provide a theoretical justification to this property by viewing\n NCE as a low-rank matrix approximation. Our empirical investigation compares NCE to the alternative explicit approach for self-normalizing language models. It also uncovers a surprising negative correlation between self-normalization and\n perplexity, as well as some regularity in the observed errors that may potentially be used for improving self-normalization algorithms in the future.", "target": ["Wir beweisen, dass NCE selbstnormiert ist und demonstrieren dies an Datensätzen.", "Es wird ein Beweis für die Selbstnormalisierung der NCE als Ergebnis einer rangniedrigen Matrixapproximation der rangniedrigen Approximation der normalisierten bedingten Wahrscheinlichkeitsmatrix vorgelegt.", "In diesem Beitrag wird das Problem der selbstnormalisierenden Modelle betrachtet und der Mechanismus der Selbstnormalisierung durch Interpretation der NCE im Sinne der Matrixfaktorisierung erklärt."]} +{"source": "Learning word representations from large available corpora relies on the distributional hypothesis that words present in similar contexts tend to have similar meanings. Recent work has shown that word representations learnt in this manner lack sentiment information which, fortunately, can be leveraged using external knowledge. Our work addresses the question: can affect lexica improve the word representations learnt from a corpus? In this work, we propose techniques to incorporate affect lexica, which capture fine-grained information about a word's psycholinguistic and emotional orientation, into the training process of Word2Vec SkipGram, Word2Vec CBOW and GloVe methods using a joint learning approach. We use affect scores from Warriner's affect lexicon to regularize the vector representations learnt from an unlabelled corpus. Our proposed method outperforms previously proposed methods on standard tasks for word similarity detection, outlier detection and sentiment detection. 
We also demonstrate the usefulness of our approach for a new task related to the prediction of formality, frustration and politeness in corporate communication.", "target": ["Die Anreicherung von Worteinbettungen mit Affektinformationen verbessert deren Leistung bei Sentiment Vorhersageaufgaben.", "Es wird vorgeschlagen, Affekt-Lexika zur Verbesserung der Worteinbettung zu verwenden, um die Standardlösungen Word2vec und Glove zu übertreffen.", "In diesem Beitrag wird vorgeschlagen, Informationen aus einer semantischen Ressource, die den Affekt von Wörtern quantifiziert, in einen textbasierten Worteinbettungsalgorithmus zu integrieren, um Sprachmodelle besser auf semantische und pragmatische Phänomene abzustimmen.", "In diesem Beitrag werden Modifikationen der Verlustfunktionen word2vec und GloVe vorgestellt, um Affekt-Lexika einzubeziehen und das Lernen affektsensitiver Worteinbettungen zu erleichtern."]} +{"source": "Different kinds of representation learning techniques on graphs have shown significant effects in downstream machine learning tasks. Recently, in order to inductively learn representations for graph structures that are unobservable during training, a general framework with sampling and aggregating (GraphSAGE) was proposed by Hamilton and Ying and has been proved more efficient than transductive methods in fields like transfer learning or evolving datasets. However, GraphSAGE is incapable of selective neighbor sampling and lacks memory of known nodes that have been trained. To address these problems, we present an unsupervised method that samples neighborhood information attended by co-occurring structures and optimizes a trainable global bias as a representation expectation for each node in the given graph. Experiments show that our approach outperforms the state-of-the-art inductive and unsupervised methods for representation learning on graphs.", "target": ["Für die unbeaufsichtigte und induktive Netzwerkeinbettung schlagen wir einen neuartigen Ansatz vor, um die relevantesten Nachbarn zu erkunden und das zuvor gelernte Wissen über die Knoten zu erhalten, indem wir eine Bi-Attention-Architektur verwenden bzw. eine globale Verzerrung einführen.", "Hier wird eine Erweiterung von GraphSAGE vorgeschlagen, die eine globale Einbettungsmatrix in den lokalen Aggregationsfunktionen und eine Methode zur Auswahl interessanter Knoten verwendet."]} +{"source": "Learning distributed representations for nodes in graphs is a crucial primitive in network analysis with a wide spectrum of applications. Linear graph embedding methods learn such representations by optimizing the likelihood of both positive and negative edges while constraining the dimension of the embedding vectors. We argue that the generalization performance of these methods is not due to the dimensionality constraint as commonly believed, but rather the small norm of embedding vectors. Both theoretical and empirical evidence is provided to support this argument: (a) we prove that the generalization error of these methods can be bounded by limiting the norm of vectors, regardless of the embedding dimension; (b) we show that the generalization performance of linear graph embedding methods is correlated with the norm of embedding vectors, which is small due to the early stopping of SGD and the vanishing gradients. 
We performed extensive experiments to validate our analysis and showcased the importance of proper norm regularization in practice.", "target": ["Wir argumentieren, dass die Verallgemeinerung der linearen Grapheneinbettung nicht auf die Dimensionalitätsbeschränkung zurückzuführen ist, sondern vielmehr auf die kleine Norm der Einbettungsvektoren.", "Die Autoren zeigen, dass der Generalisierungsfehler von linearen Grapheneinbettungsmethoden durch die Norm der Einbettungsvektoren und nicht durch Dimensionalitätsbeschränkungen begrenzt wird.", "Die Autoren schlagen eine theoretische Schranke für die Generalisierungsleistung des Lernens von Grapheneinbettungen vor und argumentieren, dass die Norm der Koordinaten den Erfolg der gelernten Darstellung bestimmt."]} +{"source": "Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover. Finally, we propose a QH variant of Adam called QHAdam, and we empirically demonstrate that our algorithms lead to significantly improved training in a variety of settings, including a new state-of-the-art result on WMT16 EN-DE. We hope that these empirical results, combined with the conceptual and practical simplicity of QHM and QHAdam, will spur interest from both practitioners and researchers. Code is immediately available.", "target": ["Mischen Sie SGD und Momentum (oder machen Sie etwas Ähnliches mit Adam), um große Gewinne zu erzielen.", "Die Arbeit schlägt einfache Modifikationen von SGD und Adam vor, sogenannte QH-Varianten, die die Eltern-Methode und eine Reihe anderer Optimierungstricks wiederherstellen können.", "Eine Variante des klassischen Impulses, die ein gewichtetes Mittel aus Impuls- und Gradientenaktualisierung verwendet, sowie eine Bewertung der Beziehungen zwischen anderen Impuls-basierten Optimierungsverfahren."]} +{"source": "Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. 
We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.", "target": ["Ein neuartiger Weg zur Verallgemeinerung von Lambda-Renditen, indem der RL-Agent entscheiden kann, wie stark er jede der n-Schritt-Renditen gewichten möchte.", "Erweitert den A3C-Algorithmus mit Lambda-Rückgaben und schlägt einen Ansatz zum Lernen der Gewichte der Rückgaben vor.", "Die Autoren stellen konfidenzbasierte autodidaktische Rückgaben vor, eine Deep Learning RL-Methode zur Anpassung der Gewichte eines Eignungsvektors in TD(lambda)-ähnlichen Wertschätzungen, um stabilere Schätzungen des Zustands zu begünstigen."]} +{"source": "Current end-to-end deep learning driving models have two problems: (1) poor generalization ability to unobserved driving environments when the diversity of the training driving dataset is limited, and (2) lack of accident explanation ability when driving models don’t work as expected. To tackle these two problems, rooted in the belief that knowledge of an associated easy task is beneficial for addressing a difficult task, we proposed a new driving model which is composed of a perception module for seeing and thinking and a driving module for behaving, and trained it stepwise with multi-task perception-related basic knowledge and driving knowledge. Specifically, segmentation maps and depth maps (pixel-level understanding of images) were considered as what & where and how far knowledge for tackling easier driving-related perception problems before generating final control commands for the difficult driving task. The results of our experiments demonstrate the effectiveness of multi-task perception knowledge for better generalization and accident explanation ability. With our method, the average success rate of finishing the most difficult navigation tasks in the untrained city of the CoRL test surpassed the current benchmark method by 15 percent in trained weather and by 20 percent in untrained weather conditions.", "target": ["Wir haben ein neues selbstfahrendes Modell vorgeschlagen, das sich aus einem Wahrnehmungsmodul für das Sehen und Denken und einem Fahrmodul für das Verhalten zusammensetzt, um eine bessere Generalisierungs- und Unfallerklärungsfähigkeit zu erreichen.", "Vorgestellt wird eine Multitasking-Lernarchitektur für die Schätzung von Tiefen- und Segmentierungskarten und die Fahrvorhersage unter Verwendung eines Wahrnehmungsmoduls und eines Fahrentscheidungsmoduls.", "Eine Methode für eine modifizierte Ende-zu-Ende Architektur, die eine bessere Verallgemeinerungs- und Erklärungsfähigkeit aufweist, robuster gegenüber verschiedenen Testumgebungen ist und über eine Decoderausgabe verfügt, die bei der Fehlersuche im Modell helfen kann.", "Die Autoren stellen ein neuronales Multi-Task Convolutional Neural Network für durchgängiges Fahren vor und liefern Auswertungen mit dem Open-Source-Simulator CARLA, die eine bessere Generalisierungsleistung unter neuen Fahrbedingungen zeigen als die Grundlinien."]} +{"source": "Recently, there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. 
It then applies existing embedding methods on the coarsest graph and refines the embeddings to the original graph through a novel graph convolutional neural network that it learns. The proposed MILE framework is agnostic to the underlying graph embedding techniques and can be applied to many existing graph embedding methods without modifying them. We employ our framework on several popular graph embedding techniques and conduct embedding for real-world graphs. Experimental results on five large-scale datasets demonstrate that MILE significantly boosts the speed (order of magnitude) of graph embedding while also often generating embeddings of better quality for the task of node classification. MILE can comfortably scale to a graph with 9 million nodes and 40 million edges, on which existing methods run out of memory or take too long to compute on a modern workstation.", "target": ["Ein allgemeiner Rahmen zur Skalierung bestehender Grapheneinbettungstechniken auf große Graphen.", "In diesem Beitrag wird ein mehrstufiges Einbettungsframework vorgeschlagen, das zusätzlich zu den bestehenden Netzwerkeinbettungsmethoden angewendet werden kann, um große Netzwerke mit höherer Geschwindigkeit zu skalieren.", "Die Autoren schlagen ein dreistufiges Framework für die Einbettung großer Graphen mit verbesserter Einbettungsqualität vor."]} +{"source": "Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries. Learners such as One-Class Support Vector Machines (OCSVMs) have been successful in anomaly detection, yet their performance may degrade significantly in the presence of sophisticated adversaries, who target the algorithm itself by compromising the integrity of the training data. With the rise in the use of machine learning in mission-critical day-to-day activities where errors may have significant consequences, it is imperative that machine learning systems are made secure. To address this, we propose a defense mechanism that is based on a contraction of the data, and we test its effectiveness using OCSVMs. The proposed approach introduces a layer of uncertainty on top of the OCSVM learner, making it infeasible for the adversary to guess the specific configuration of the learner. We theoretically analyze the effects of adversarial perturbations on the separating margin of OCSVMs and provide empirical evidence on several benchmark datasets, which shows that by carefully contracting the data in low dimensional spaces, we can successfully identify adversarial samples that would not have been identifiable in the original dimensional space. The numerical results show that the proposed method improves OCSVMs' performance significantly (2-7%).", "target": ["Eine neuartige Methode zur Erhöhung der Widerstandsfähigkeit von OCSVMs gegen gezielte Integritätsangriffe durch selektive nichtlineare Transformationen von Daten in niedrigere Dimensionen.", "Die Autoren schlagen eine Verteidigung gegen Angriffe auf die Sicherheit von einklassigen SVM-basierten Anomalie-Detektoren vor.", "In diesem Papier wird untersucht, wie zufällige Projektionen verwendet werden können, um OCSVM robust gegenüber adversarially gestörten Trainingsdaten zu machen."]} +{"source": "In this paper, we present a layer-wise learning of stochastic neural networks (SNNs) from an information-theoretic perspective. 
In each layer of an SNN, the compression and the relevance are defined to quantify the amount of information that the layer contains about the input space and the target space, respectively. We jointly optimize the compression and the relevance of all parameters in an SNN to better exploit the neural network's representation. Previously, the Information Bottleneck (IB) framework (\\cite{Tishby99}) extracts relevant information for a target variable. Here, we propose Parametric Information Bottleneck (PIB) for a neural network by utilizing (only) its model parameters explicitly to approximate the compression and the relevance. We show that, as compared to the maximum likelihood estimate (MLE) principle, PIBs : (i) improve the generalization of neural networks in classification tasks, (ii) push the representation of neural networks closer to the optimal information-theoretical representation in a faster manner. ", "target": ["Lernen einer besseren Darstellung neuronaler Netze mit dem Prinzip des Informationsengpasses.", "Schlägt eine Lernmethode vor, die auf dem Informationsengpass Framework basiert, bei dem verborgene Schichten von tiefen Netzen die Eingabe X komprimieren und gleichzeitig genügend Informationen zur Vorhersage der Ausgabe Y beibehalten.", "In diesem Beitrag wird eine neue Methode für das Training stochastischer neuronaler Netze vorgestellt, die auf der Grundlage von Informationsrelevanz und -kompression arbeitet, ähnlich wie beim Information Bottleneck."]} +{"source": "The maximum mean discrepancy (MMD) between two probability measures P\n and Q is a metric that is zero if and only if all moments of the two measures\n are equal, making it an appealing statistic for two-sample tests. Given i.i.d. samples\n from P and Q, Gretton et al. (2012) show that we can construct an unbiased\n estimator for the square of the MMD between the two distributions. If P is a\n distribution of interest and Q is the distribution implied by a generative neural\n network with stochastic inputs, we can use this estimator to train our neural network.\n However, in practice we do not always have i.i.d. samples from our target\n of interest. Data sets often exhibit biases—for example, under-representation of\n certain demographics—and if we ignore this fact our machine learning algorithms\n will propagate these biases. Alternatively, it may be useful to assume our data has\n been gathered via a biased sample selection mechanism in order to manipulate\n properties of the estimating distribution Q.\n In this paper, we construct an estimator for the MMD between P and Q when we\n only have access to P via some biased sample selection mechanism, and suggest\n methods for estimating this sample selection mechanism when it is not already\n known. 
We show that this estimator can be used to train generative neural networks\n on a biased data sample, to give a simulator that reverses the effect of that\n bias.", "target": ["Wir schlagen einen Schätzer für die maximale mittlere Diskrepanz vor, der geeignet ist, wenn eine Zielverteilung nur über ein verzerrtes Stichprobenauswahlverfahren zugänglich ist, und zeigen, dass er in einem generativen Netzwerk verwendet werden kann, um diese Verzerrung zu korrigieren.", "Schlägt einen Wichtigkeits-gewichteten Schätzer der MMD vor, um die MMD zwischen Verteilungen zu schätzen, die auf Stichproben basieren, die nach einem bekannten oder geschätzten unbekannten Schema verzerrt sind.", "Die Autoren befassen sich mit dem Problem der Verzerrung der Stichprobenauswahl bei MMD-GANs und schlagen eine Schätzung der MMD zwischen zwei Verteilungen unter Verwendung der gewichteten maximalen mittleren Diskrepanz vor.", "In diesem Papier wird eine Modifikation des Ziels für das Training generativer Netze mit einem MMD-Gegner vorgestellt."]} +{"source": "We propose Bayesian Deep Q-Network (BDQN), a practical Thompson sampling based Reinforcement Learning (RL) Algorithm. Thompson sampling allows for targeted exploration in high dimensions through posterior sampling but is usually computationally expensive. We address this limitation by introducing uncertainty only at the output layer of the network through a Bayesian Linear Regression (BLR) model, which can be trained with fast closed-form updates and its samples can be drawn efficiently through the Gaussian distribution. We apply our method to a wide range of Atari Arcade Learning Environments. Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster than a key baseline, DDQN.", "target": ["Verwendung von Bayes'scher Regression zur Schätzung des Posterior über Q-Funktionen und Einsatz von Thompson Sampling als gezielte Explorationsstrategie mit effizientem Kompromiss zwischen Exploration und Exploitation.", "Die Autoren schlagen einen neuen Algorithmus für die Exploration in Deep RL vor, bei dem sie eine lineare Bayes'sche Regression mit Merkmalen aus der letzten Schicht eines DQN-Netzwerks anwenden, um die Q-Funktion für jede Aktion zu schätzen.", "Die Autoren beschreiben, wie Bayes'sche neuronale Netze mit Thompson-Sampling für eine effiziente Exploration beim q-Lernen eingesetzt werden können, und schlagen einen Ansatz vor, der die Epsilon-Greedy Explorationsansätze übertrifft."]} +{"source": "In this work, we propose the polynomial convolutional neural network (PolyCNN), as a new design of a weight-learning efficient variant of the traditional CNN. The biggest advantage of the PolyCNN is that at each convolutional layer, only one convolutional filter is needed for learning the weights, which we call the seed filter, and all the other convolutional filters are the polynomial transformations of the seed filter, which is termed as an early fan-out. Alternatively, we can also perform late fan-out on the seed filter response to create the number of response maps needed to be input into the next layer. Both early and late fan-out allow the PolyCNN to learn only one convolutional filter at each layer, which can dramatically reduce the model complexity by saving 10x to 50x parameters during learning. 
While being efficient during both training and testing, the PolyCNN does not suffer in performance, owing to the non-linear polynomial expansion, which translates to richer representational power within the convolutional layers. By allowing direct control over model complexity, PolyCNN provides a flexible trade-off between performance and efficiency. We have verified the on-par performance between the proposed PolyCNN and the standard CNN on several visual datasets, such as MNIST, CIFAR-10, SVHN, and ImageNet.", "target": ["PolyCNN muss nur einen Seed convolutional Filter auf jeder Schicht lernen. Dies ist eine effiziente Variante des traditionellen CNN mit gleichwertiger Leistung.", "Versuche, die Anzahl der Parameter des CNN-Modells zu reduzieren, indem die polynomiale Transformation von Filtern verwendet wird, um die Filterantworten zu vergrößern.", "Die Autoren schlagen eine Weight-Sharing-Architektur vor, um die Anzahl der Parameter eines Convolutional Neural Networks mit Seed-Filtern zu reduzieren."]} +{"source": "Detecting the emergence of abrupt property changes in time series is a challenging problem. The kernel two-sample test has been studied for this task, as it makes fewer assumptions on the distributions than traditional parametric approaches. However, selecting kernels is non-trivial in practice. Although kernel selection for the two-sample test has been studied, the insufficient samples in the change point detection problem hinder the success of those developed kernel selection algorithms. In this paper, we propose KL-CPD, a novel kernel learning framework for time series CPD that optimizes a lower bound of test power via an auxiliary generative model. With deep kernel parameterization, KL-CPD endows the kernel two-sample test with a data-driven kernel to detect different types of change-points in real-world applications. The proposed approach significantly outperformed other state-of-the-art methods in our comparative evaluation of benchmark datasets and simulation studies.", "target": ["In dieser Arbeit schlagen wir KL-CPD vor, ein neuartiges Kernel-Lernverfahren für Zeitreihen-CPD, das eine untere Schranke der Testleistung über ein generatives Hilfsmodell als Ersatz für die abnormale Verteilung optimiert. ", "Beschreibt einen neuartigen Ansatz zur Optimierung der Wahl des Kernels im Hinblick auf eine höhere Testleistung und zeigt, dass er Verbesserungen gegenüber Alternativen bietet."]} +{"source": "Theories in cognitive psychology postulate that humans use similarity as a basis\n for object categorization. However, work in image classification generally assumes disjoint and equally dissimilar classes to achieve super-human levels of\n performance on certain datasets. In our work, we adapt notions of similarity using\n weak labels over multiple hierarchical levels to boost classification performance.\n Instead of pitting clustering directly against classification, we use a warm-start\n based evaluation to explicitly provide value to a clustering representation by its\n ability to aid classification. We evaluate on CIFAR10 and a fine-grained classification dataset to show improvements in performance with the procedural addition\n of intermediate losses and weak labels based on multiple hierarchy levels. Furthermore, we show that pretraining AlexNet on hierarchical weak labels in conjunction with intermediate losses outperforms a classification baseline by over 17% on\n a subset of the Birdsnap dataset. 
Finally, we show improvement over AlexNet trained\n using ImageNet pre-trained weights as initializations which further supports our \n claim of the importance of similarity.", "target": ["Clustern, bevor Sie klassifizieren; Verwendung schwacher Kennzeichnungen zur Verbesserung der Klassifizierung.", "Vorschlagen der Verwendung einer auf Clustering basierenden Verlustfunktion auf mehreren Ebenen eines Deepnets, sowie die Verwendung einer hierarchischen Struktur des Beschriftungsraums, um bessere Darstellungen zu trainieren.", "In diesem Beitrag werden hierarchische Label-Informationen verwendet, um zusätzliche Verluste auf Zwischenrepräsentationen beim Training neuronaler Netze zu erheben."]} +{"source": "Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial or non-Markovian observations by using finite-length frame-history observations or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to a cumulative clipped advantage function and is robust to partially observed state. We demonstrate that on several partially observed reinforcement learning tasks, this new class of algorithms can substantially outperform strong baseline methods: on Pong with single-frame observations, and on the challenging Doom (ViZDoom) and Minecraft (Malmö) first-person navigation benchmarks.", "target": ["Advantage-basierte Regret-Minimierung ist ein neuer Deep Reinforcement Learning-Algorithmus, der besonders effektiv bei teilweise beobachtbaren Aufgaben ist, wie z.B. 1st Person Navigation in Doom und Minecraft.", "In diesem Beitrag werden die Konzepte der kontrafaktischen Bedauernsminimierung im Bereich des Deep RL und ein Algorithmus namens ARM vorgestellt, der besser mit partieller Beobachtbarkeit umgehen kann.", "Die Arbeit bietet eine spieltheoretisch inspirierte Variante des Policy-Gradienten-Algorithmus, die auf der Idee der kontrafaktischen Bedauernsminimierung basiert, und behauptet, dass der Ansatz mit dem teilweise beobachtbaren Bereich besser umgehen kann als Standardmethoden."]} +{"source": "Recent deep multi-task learning (MTL) has been witnessed its success in alleviating data scarcity of some task by utilizing domain-specific knowledge from related tasks. Nonetheless, several major issues of deep MTL, including the effectiveness of sharing mechanisms, the efficiency of model complexity and the flexibility of network architectures, still remain largely unaddressed. To this end, we propose a novel generalized latent-subspace based knowledge sharing mechanism for linking task-specific models, namely tensor ring multi-task learning (TRMTL). TRMTL has a highly compact representation, and it is very effective in transferring task-invariant knowledge while being super flexible in learning task-specific features, successfully mitigating the dilemma of both negative-transfer in lower layers and under-transfer in higher layers. Under our TRMTL, it is feasible for each task to have heterogenous input data dimensionality or distinct feature sizes at different hidden layers. 
Experiments on a variety of datasets demonstrate our model is capable of significantly improving each single task’s performance, particularly favourable in scenarios where some of the tasks have insufficient data.", "target": ["Ein tiefes Multi-Task-Lernmodell, das die Tensor-Ring-Darstellung anpasst.", "Eine Variante der Tensor-Ring-Formulierung für Multi-Task-Lernen, bei der einige TT-Kerne für das Lernen einer \"gemeinsamen Aufgabe\" gemeinsam genutzt werden, während für jede einzelne Aufgabe individuelle TT-Kerne gelernt werden."]} +{"source": "Neural Processes (NPs) (Garnelo et al., 2018) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.", "target": ["Ein Regressionsmodell, das die bedingten Verteilungen eines stochastischen Prozesses lernt, indem es die Aufmerksamkeit in neuronale Prozesse einbezieht.", "Es wird vorgeschlagen, das Problem der unzureichenden Anpassung bei der neuronalen Prozessmethode zu lösen, indem dem deterministischen Pfad ein Aufmerksamkeitsmechanismus hinzugefügt wird.", "Eine Erweiterung des Rahmens der Neuronalen Prozesse, die einen aufmerksamkeitsbasierten Konditionierungsmechanismus hinzufügt, der es dem Modell ermöglicht, Abhängigkeiten in der Konditionierungsmenge besser zu erfassen.", "Die Autoren erweitern neuronale Prozesse, indem sie die self-attention zur Anreicherung der Merkmale der Kontextpunkte und die cross-attention zur Erzeugung einer abfragespezifischen Repräsentation einbeziehen. Sie lösen das Underfitting-Problem von NPs und zeigen, dass ANPs besser und schneller konvergieren als NPs."]} +{"source": "Deconvolutional layers have been widely used in a variety of deep\n models for up-sampling, including encoder-decoder networks for\n semantic segmentation and deep generative models for unsupervised\n learning. One of the key limitations of deconvolutional operations\n is that they result in the so-called checkerboard problem. This is\n caused by the fact that no direct relationship exists among adjacent\n pixels on the output feature map. To address this problem, we\n propose the pixel deconvolutional layer (PixelDCL) to establish\n direct relationships among adjacent pixels on the up-sampled feature\n map. Our method is based on a fresh interpretation of the regular\n deconvolution operation. The resulting PixelDCL can be used to\n replace any deconvolutional layer in a plug-and-play manner without\n compromising the fully trainable capabilities of original models.\n The proposed PixelDCL may result in slight decrease in efficiency,\n but this can be overcome by an implementation trick. 
Experimental\n results on semantic segmentation demonstrate that PixelDCL can\n consider spatial features such as edges and shapes and yield more\n accurate segmentation outputs than deconvolutional layers. When used\n in image generation tasks, our PixelDCL can largely overcome the\n checkerboard problem suffered by regular deconvolution operations.", "target": ["Lösung des Schachbrettproblems in der Deconvolutional Layer durch Aufbau von Abhängigkeiten zwischen Pixeln.", "In dieser Arbeit werden Pixel Deconvolutional Layers für Convolutional Neural Networks vorgeschlagen, um den Schachbretteffekt zu mildern.", "Eine neuartige Technik zur Verallgemeinerung von Deconvolution Operationen, die in Standard CNN Architekturen verwendet werden, die eine sequentielle Vorhersage von benachbarten Pixelmerkmalen vorschlägt, was zu räumlich glatteren Ausgaben für Deconvolution Layers führt."]} +{"source": "In this paper, the preparation of a neural network for pruning and few-bit quantization is formulated as a variational inference problem. To this end, a quantizing prior that leads to a multi-modal, sparse posterior distribution over weights is introduced, and a differentiable Kullback-Leibler divergence approximation for this prior is derived. After training with Variational Network Quantization, weights can be replaced by deterministic quantization values with small to negligible loss of task accuracy (including pruning by setting weights to 0). The method does not require fine-tuning after quantization. Results are shown for ternary quantization on LeNet-5 (MNIST) and DenseNet (CIFAR-10).", "target": ["Wir quantisieren und prunen die Gewichte des neuronalen Netzes mit Hilfe von Bayes'scher Variationsinferenz mit einem multimodalen, Spärlichkeit induzierenden Prior.", "Schlägt vor, eine Mischung aus kontinuierlichem Spike-Propto 1/abs als Prior für ein Bayes'sches neuronales Netz zu verwenden und demonstriert die gute Leistung mit relativ sparsamen Convnets für MNIST und CIFAR-10.", "In diesem Beitrag wird ein variationaler Bayes'scher Ansatz vorgestellt, mit dem die Gewichte neuronaler Netze nach dem Training auf prinzipielle Weise auf ternäre Werte quantisiert werden können."]} +{"source": "Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader applications on edge computing devices. Extensive research work has been conducted on DNN model compression or pruning. However, most of the previous work took heuristic approaches. This work proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique to deal with non-convex optimization problems with potentially combinatorial constraints. Motivated by dynamic programming, the proposed method reaches extremely high pruning rates by using partial prunings with moderate pruning rates. Therefore, it resolves the accuracy degradation and long convergence time problems when pursuing extremely high pruning ratios. It achieves up to 34× pruning rate for the ImageNet dataset and 167× pruning rate for the MNIST dataset, significantly higher than those reported in the literature. Under the same number of epochs, the proposed method also achieves faster convergence and higher compression rates. 
The codes and pruned DNN models are released in the anonymous link bit.ly/2zxdlss.", "target": ["Wir implementieren einen DNN-Gewichts Pruning Ansatz, der die höchsten Pruning Raten erzielt.", "Diese Arbeit konzentriert sich auf das Pruning von Gewichten für die Kompression neuronaler Netze und erreicht eine 30-fache Kompressionsrate für AlexNet und VGG für ImageNet.", "Eine progressive Pruning Technik, die den Gewichtsparametern eine strukturelle Seltenheitsbeschränkung auferlegt und die Optimierung als ADMM Framework umschreibt, wodurch eine höhere Genauigkeit als beim projizierten Gradientenabstieg erreicht wird."]} +{"source": "In this paper, we present a new deep learning architecture for addressing the problem of supervised learning with sparse and irregularly sampled multivariate time series. The architecture is based on the use of a semi-parametric interpolation network followed by the application of a prediction network. The interpolation network allows for information to be shared across multiple dimensions of a multivariate time series during the interpolation stage, while any standard deep learning model can be used for the prediction network. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. We investigate the performance of this architecture on both classification and regression tasks, showing that our approach outperforms a range of baseline and recently proposed models.\n", "target": ["In dieser Arbeit wird eine neue Deep Learning Architektur zur Lösung des Problems des überwachten Lernens mit spärlichen und unregelmäßig abgetasteten multivariaten Zeitreihen vorgestellt.", "Schlägt einen Rahmen für die Erstellung von Vorhersagen für spärliche, unregelmäßig abgetastete Zeitreihendaten unter Verwendung eines Interpolationsmoduls vor, das die fehlenden Werte durch glatte Interpolation, nicht glatte Interpolation und Intensität modelliert. ", "Löst das Problem des überwachten Lernens mit spärlichen und unregelmäßig abgetasteten multivariaten Zeitreihen mit Hilfe eines semiparametrischen Interpolationsnetzwerks, gefolgt von einem Vorhersagenetzwerk."]} +{"source": "We introduce an analytic distance function for moderately sized point sets of known cardinality that is shown to have very desirable properties, both as a loss function as well as a regularizer for machine learning applications. We compare our novel construction to other point set distance functions and show proof of concept experiments for training neural networks end-to-end on point set prediction tasks such as object detection.", "target": ["Permutationsinvariante Verlustfunktion für die Punktmengenvorhersage.", "Schlägt einen neuen Verlust für die Punktregistrierung (Ausrichten von zwei Punktmengen) mit vorteilhafter permutationsinvarianter Eigenschaft vor. ", "In diesem Beitrag wird eine neuartige Distanzfunktion zwischen Punktmengen eingeführt, zwei weitere Permutationsdistanzen in einer durchgängigen Objekterkennungsaufgabe angewendet und gezeigt, dass in zwei Dimensionen alle lokalen Minima des holographischen Verlustes globale Minima sind.", "Vorschlagen permutationsinvarianter Verlustfunktionen, die von der Entfernung der Mengen abhängen."]} +{"source": "We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. 
Our method learns to assign graph operations to groups and to allocate those groups to available devices. The grouping and device allocations are learned jointly. The proposed method is trained with policy gradient and requires no human intervention. Experiments with widely-used\n computer vision and natural language models show that our algorithm can find optimized, non-trivial placements for TensorFlow computational graphs with over 80,000 operations. In addition, our approach outperforms placements by human\n experts as well as a previous state-of-the-art placement method based on deep reinforcement learning. Our method achieves runtime reductions of up to 60.6% per training step when applied to models such as Neural Machine Translation.", "target": ["Wir stellen ein hierarchisches Modell für die effiziente, durchgängige Platzierung von Berechnungsgraphen auf Hardwaregeräten vor.", "Es wird vorgeschlagen, Gruppen von Operatoren gemeinsam zu erlernen und auf Geräten zu platzieren, um Operationen für tiefes Lernen durch Reinforcement Learning zu verteilen.", "Die Autoren verwenden ein vollständig verbundenes Netzwerk, um den Schritt der Kollokation in einer automatischen Platzierungsmethode zu ersetzen, die zur Beschleunigung der Laufzeit eines TensorFlow-Modells vorgeschlagen wurde.", "Schlägt einen Geräteplatzierungsalgorithmus vor, um Operationen von Tensorflow auf Geräten zu platzieren."]} +{"source": "Motion is an important signal for agents in dynamic environments, but learning to represent motion from unlabeled video is a difficult and underconstrained problem. We propose a model of motion based on elementary group properties of transformations and use it to train a representation of image motion. While most methods of estimating motion are based on pixel-level constraints, we use these group properties to constrain the abstract representation of motion itself. We demonstrate that a deep neural network trained using this method captures motion in both synthetic 2D sequences and real-world sequences of vehicle motion, without requiring any labels. Networks trained to respect these constraints implicitly identify the image characteristic of motion in different sequence types. In the context of vehicle motion, this method extracts information useful for localization, tracking, and odometry. Our results demonstrate that this representation is useful for learning motion in the general setting where explicit labels are difficult to obtain.", "target": ["Wir schlagen eine Methode vor, die Gruppeneigenschaften nutzt, um eine Darstellung von Bewegung ohne Labels zu erlernen, und demonstrieren die Verwendung dieser Methode zur Darstellung von 2D- und 3D-Bewegungen.", "Schlägt vor, die starre Bewegungsgruppe aus einer latenten Repräsentation von Bildsequenzen zu lernen, ohne dass explizite Beschriftungen erforderlich sind, und demonstriert die Methode experimentell an Sequenzen von MINST-Ziffern und dem KITTI-Datensatz.", "Diese Arbeit schlägt einen Ansatz für das Erlernen von Video-Bewegungsmerkmalen in einer unbeaufsichtigten Art und Weise, unter Verwendung von Einschränkungen zur Optimierung des neuronalen Netzes, um Merkmale zu erzeugen, die zur Regression der Odometrie verwendet werden können."]} +{"source": "This paper introduces the concept of continuous convolution to neural networks and deep learning applications in general. 
Rather than directly using discretized information, input data is first projected into a high-dimensional Reproducing Kernel Hilbert Space (RKHS), where it can be modeled as a continuous function using a series of kernel bases. We then proceed to derive a closed-form solution to the continuous convolution operation between two arbitrary functions operating in different RKHS. Within this framework, convolutional filters also take the form of continuous functions, and the training procedure involves learning the RKHS to which each of these filters is projected, alongside their weight parameters. This results in much more expressive filters, that do not require spatial discretization and benefit from properties such as adaptive support and non-stationarity. Experiments on image classification are performed, using classical datasets, with results indicating that the proposed continuous convolutional neural network is able to achieve competitive accuracy rates with far fewer parameters and a faster convergence rate.", "target": ["In dieser Arbeit wird eine neuartige Convolutional Layer vorgeschlagen, die in einem kontinuierlichen Reproduzierenden Kernel-Hilbert-Raum arbeitet.", "Projektion von Beispielen in einen RK-Hilbert-Raum und Durchführung von Convolution und Filterung in diesem Raum.", "Diese Arbeit formuliert eine Variante von Convolutional Neural Networks, die beides, Aktivierungen und Filter, als kontinuierliche Funktionen modelliert, die aus Kernel-Basen zusammengesetzt sind."]} +{"source": "Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on 'Stylized-ImageNet', a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.", "target": ["ImageNet-trainierte CNNs sind auf die Textur von Objekten ausgerichtet (statt auf die Form wie beim Menschen). 
Die Überwindung dieses großen Unterschieds zwischen menschlichem und maschinellem Sehen führt zu einer verbesserten Erkennungsleistung und einer bisher nicht gekannten Robustheit gegenüber Bildverzerrungen.", "Verwendung von Bildstilisierung zur Erweiterung der Trainingsdaten für ImageNet-trainierte CNNs, um die resultierenden Netzwerke besser an die menschlichen Urteile anzugleichen.", "Diese Arbeit untersucht CNNs wie AlexNet, VGG, GoogleNet und ResNet50, zeigt, dass diese Modelle in Richtung Textur voreingenommen sind, wenn sie auf ImageNet trainiert werden, und schlägt einen neuen ImageNet-Datensatz vor."]} +{"source": "In this work, we exploited different strategies to provide prior knowledge to commonly used generative modeling approaches aiming to obtain speaker-dependent low dimensional representations from short-duration segments of speech data, making use of available information of speaker identities. Namely, convolutional variational autoencoders are employed, and statistics of its learned posterior distribution are used as low dimensional representations of fixed length short-duration utterances. In order to enforce speaker dependency in the latent layer, we introduced a variation of the commonly used prior within the variational autoencoders framework, i.e. the model is simultaneously trained for reconstruction of inputs along with a discriminative task performed on top of latent layers outputs. The effectiveness of both triplet loss minimization and speaker recognition are evaluated as implicit priors on the challenging cross-language NIST SRE 2016 setting and compared against fully supervised and unsupervised baselines.", "target": ["Wir evaluieren die Effektivität von zusätzlichen diskriminierenden Aufgaben, die zusätzlich zu den Statistiken der Posterior-Verteilung durchgeführt werden, die von variativen Autoencodern gelernt wurden, um die Sprecherabhängigkeit zu erzwingen.", "Vorschlag eines Autoencoder Modells zum Erlernen einer Repräsentation für die Sprecherverifikation unter Verwendung von Analysefenstern kurzer Dauer.", "Eine modifizierte Version des variationalen Autoencoder Modells, das das Problem der Sprechererkennung im Zusammenhang mit kurzen Segmenten angeht."]} +{"source": "The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on $K$ samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI),\n a family of bias-reduced estimators reducing the bias to $O(K^{-(m+1)})$ for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. 
We also report first results on applying JVI to learning variational autoencoders.\n\n Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference", "target": ["Die Variationsinferenz ist verzerrt, wir sollten sie entzerren.", "Einführung in die Jackknife-Variationsinferenz, eine Methode zur Entschärfung von Monte Carlo Zielen, wie z. B. dem nach Wichtigkeit gewichteten Autoencoder.", "Die Autoren analysieren die Verzerrung und Varianz der IWAE-Schranke und leiten einen Jacknife-Ansatz zur Schätzung von Momenten als eine Möglichkeit zur Entlastung von IWAE für endliche, nach Wichtigkeit gewichtete Stichproben ab."]} +{"source": "In this paper, we consider the problem of autonomous lane changing for self driving vehicles in a multi-lane, multi-agent setting. We present a framework that demonstrates a more structured and data efficient alternative to end-to-end complete policy learning on problems where the high-level policy is hard to formulate using traditional optimization or rule based methods but well designed low-level controllers are available. Our framework uses deep reinforcement learning solely to obtain a high-level policy for tactical decision making, while still maintaining a tight integration with the low-level controller, thus getting the best of both worlds. We accomplish this with Q-masking, a technique with which we are able to incorporate prior knowledge, constraints, and information from a low-level controller, directly in to the learning process thereby simplifying the reward function and making learning faster and data efficient. We provide preliminary results in a simulator and show our approach to be more efficient than a greedy baseline, and more successful and safer than human driving.", "target": ["Ein Framework, das eine Strategie für den autonomen Fahrspurwechsel bereitstellt, indem es mit Hilfe von Deep Reinforcement Learning taktische Entscheidungen auf hoher Ebene trifft und eine enge Integration mit einer Low-Level Steuerung aufrechterhält, um Aktionen auf niedriger Ebene durchzuführen.", "Betrachtet das Problem des autonomen Spurwechsels für selbstfahrende Autos in einer mehrspurigen Multi-Agenten Slotcar Umgebung und schlägt eine neue Lernstrategie Q-Masking vor, die eine definierte Steuerung auf niedriger Ebene mit taktischen Entscheidungsregeln auf hoher Ebene verbindet.", "In diesem Beitrag wird ein Deep-Q-Learning-Ansatz für das Problem des Spurwechsels mit Hilfe von \"Q-Masking\" vorgeschlagen, der den Aktionsraum entsprechend den Einschränkungen oder dem Vorwissen reduziert.", "Die Autoren schlagen eine Methode vor, bei der eine auf Q-Learning basierende High-Level Regelwerk mit einer kontextuellen Maske kombiniert wird, die aus Sicherheitsbedingungen und Low-Level Steuerungen abgeleitet wird, die bestimmte Aktionen in bestimmten Zuständen nicht auswählbar machen. "]} +{"source": "Despite the recent successes in robotic locomotion control, the design of robot relies heavily on human engineering. Automatic robot design has been a long studied subject, but the recent progress has been slowed due to the large combinatorial search space and the difficulty in evaluating the found candidates. To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space. We propose Neural Graph Evolution (NGE), which performs selection on current candidates and evolves new ones iteratively. 
Different from previous approaches, NGE uses graph neural networks to parameterize the control policies, which reduces evaluation cost on new candidates with the help of skill transfer from previously evaluated designs. In addition, NGE applies Graph Mutation with Uncertainty (GM-UC) by incorporating model uncertainty, which reduces the search space by balancing exploration and exploitation. We show that NGE significantly outperforms previous methods by an order of magnitude. As shown in experiments, NGE is the first algorithm that can automatically discover kinematically preferred robotic graph structures, such as a fish with two symmetrical flat side-fins and a tail, or a cheetah with athletic front and back legs. Instead of using thousands of cores for weeks, NGE efficiently solves searching problem within a day on a single 64 CPU-core Amazon EC2\n machine.\n", "target": ["Automatische Robotersuche mit graphischen neuronalen Netzen.", "Vorschlagen eines Ansatz für die automatische Entwicklung von Robotern auf der Grundlage der Evolution neuronaler Graphen. Die Experimente zeigen, dass die Optimierung von Steuerung und Hardware besser ist als nur die Optimierung der Steuerung.", "Die Autoren schlagen ein Schema vor, das auf einer graphischen Darstellung der Roboterstruktur und einem graphisch-neuronalen Netz als Steuerung basiert, um Roboterstrukturen in Kombination mit ihren Steuerungen zu optimieren. "]} +{"source": "Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fully-connected graph of a predefined maximum size directly at once. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of conditional molecule generation.", "target": ["Wir demonstrieren einen Autoencoder für Graphen.", "Erlernen der Erstellung von Graphen mit Hilfe von Deep-Learning-Methoden in \"one shot\", wobei die Wahrscheinlichkeit des Vorhandenseins von Knoten und Kanten sowie Knotenattributvektoren direkt ausgegeben werden.", "Ein automatischer variationaler Autoencoder zur Erzeugung von Graphen."]} +{"source": "Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. 
Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface.", "target": ["Wir schlagen einen neuen Algorithmus für das LSTM Training durch Lernen in Richtung binärwertiger Gatter vor, von dem wir zeigen, dass er viele gute Eigenschaften hat.", "Eine neue \"Gate\"-Funktion für LSTM vorschlagen, um die Werte der Gates auf 0 oder 1 zu setzen. ", "Die Arbeit zielt darauf ab, LSTM-Gatter zu binären Gattern zu machen, indem es den neuen Gumbel-Softmax Trick anwendet, um eine durchgängig trainierbare kategoriale Verteilung zu erhalten."]} +{"source": "We present a personalized recommender system using neural network for recommending\n products, such as eBooks, audio-books, Mobile Apps, Video and Music.\n It produces recommendations based on customer’s implicit feedback history such\n as purchases, listens or watches. Our key contribution is to formulate recommendation\n problem as a model that encodes historical behavior to predict the future\n behavior using soft data split, combining predictor and auto-encoder models. We\n introduce convolutional layer for learning the importance (time decay) of the purchases\n depending on their purchase date and demonstrate that the shape of the time\n decay function can be well approximated by a parametrical function. We present\n offline experimental results showing that neural networks with two hidden layers\n can capture seasonality changes, and at the same time outperform other modeling\n techniques, including our recommender in production. Most importantly, we\n demonstrate that our model can be scaled to all digital categories, and we observe\n significant improvements in an online A/B test. We also discuss key enhancements\n to the neural network model and describe our production pipeline. Finally\n we open-sourced our deep learning library which supports multi-gpu model parallel\n training. This is an important feature in building neural network based recommenders\n with large dimensionality of input and output data.", "target": ["Verbesserung der Empfehlungen durch zeitabhängige Modellierung mit neuronalen Netzen in mehreren Produktkategorien auf einer Einzelhandelswebsite.", "In dem Beitrag wird eine neue, auf einem neuronalen Netz basierende Methode für Empfehlungen vorgeschlagen.", "Die Autoren beschreiben ein Verfahren, mit dem sie ihr Empfehlungssystem von Grund auf neu aufbauen und den zeitlichen Verfall von Käufen in den Lern Frameworks integrieren."]} +{"source": "Deep Learning (DL) algorithms based on Generative Adversarial Network (GAN) have demonstrated great potentials in computer vision tasks such as image restoration. Despite the rapid development of image restoration algorithms using DL and GANs, image restoration for specific scenarios, such as medical image enhancement and super-resolved identity recognition, are still facing challenges. How to ensure visually realistic restoration while avoiding hallucination or mode- collapse? 
How to make sure the visually plausible results do not contain hallucinated features jeopardizing downstream tasks such as pathology identification and subject identification?\n Here we propose to resolve these challenges by coupling the GAN based image restoration framework with another task-specific network. With medical imaging restoration as an example, the proposed model conducts additional pathology recognition/classification task to ensure the preservation of detailed structures that are important to this task. Validated on multiple medical datasets, we demonstrate the proposed method leads to improved deep learning based image restoration while preserving the detailed structure and diagnostic features. Additionally, the trained task network show potentials to achieve super-human level performance in identifying pathology and diagnosis.\n Further validation on super-resolved identity recognition tasks also show that the proposed method can be generalized for diverse image restoration tasks.", "target": ["Kopplung des GAN-basierten Rahmens für die Bildwiederherstellung mit einem anderen aufgabenspezifischen Netzwerk, um ein realistisches Bild zu erzeugen und gleichzeitig die aufgabenspezifischen Merkmale zu erhalten.", "Eine neuartige Methode der Task-GAN Bildkopplung, die GAN und ein aufgabenspezifisches Netzwerk koppelt, um Halluzinationen oder einen Moduskollaps zu vermeiden.", "Die Autoren schlagen vor, die GAN-basierte Bildrestauration mit einem anderen aufgabenspezifischen Zweig, wie z. B. Klassifizierungsaufgaben, zu erweitern, um weitere Verbesserungen zu erzielen."]} +{"source": "Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. 
Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.", "target": ["Ein durchgängig trainiertes tiefes neuronales Netzwerk, das Gaussian Mixture Modeling nutzt, um Dichteabschätzungen und unbeaufsichtigte Anomalieerkennung in einem niedrigdimensionalen Raum durchzuführen, der von einem tiefen Autoencoder gelernt wird.", "Die Arbeit präsentiert ein gemeinsames Deep Learning Framework für Dimensionsreduktion Clustering, das zu einer wettbewerbsfähigen Anomalieerkennung führt.", "Ein neues Verfahren zur Erkennung von Anomalien, bei dem die Schritte Dimensionsreduzierung und Dichteschätzung gemeinsam optimiert werden."]} +{"source": "Generalization from limited examples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamical environments and proves to be an essential aspect of lifelong learning. In this paper, we introduce the Projective Subspace Networks (PSN), a deep learning paradigm that learns non-linear embeddings from limited supervision. In contrast to previous studies, the embedding in PSN deems samples of a given class to form an affine subspace. We will show that such modeling leads to robust solutions, yielding competitive results on supervised and semi-supervised few-shot classification. Moreover, our PSN approach has the ability of end-to-end learning. In contrast to previous works, our projective subspace can be thought of as a richer representation capturing higher-order information datapoints for modeling new concepts.", "target": ["Wir haben projektive Unterraumnetzwerke für das Few-Shot Learning und das halbüberwachte Few-Shot Learning vorgeschlagen.", "In diesem Beitrag wird ein neuer, auf Einbettung basierender Ansatz für das Problem des Few-Shot Learnings und eine Erweiterung dieses Modells auf das semiüberwachte Few-Shot Learning vorgeschlagen.", "Neue Methode zur voll- und halbüberwachten Few-Shot Klassifizierung, die auf dem Erlernen einer allgemeinen Einbettung und dem anschließenden Erlernen eines Unterraums davon für jede Klasse basiert."]} +{"source": "This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning. To investigate this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Element (ALE). In this study, we develop an attentive dynamics model (ADM) that discovers controllable elements of the observations, which are often associated with the location of the character in Atari games. The ADM is trained in a self-supervised fashion to predict the actions taken by the agent. The learned contingency information is used as a part of the state representation for exploration purposes. We demonstrate that combining actor-critic algorithm with count-based exploration using our representation achieves impressive results on a set of notoriously challenging Atari games due to sparse rewards. For example, we report a state-of-the-art score of >11,000 points on Montezuma's Revenge without using expert demonstrations, explicit high-level information (e.g., RAM states), or supervisory data. 
Our experiments confirm that contingency-awareness is indeed an extremely powerful concept for tackling exploration problems in reinforcement learning and opens up interesting research questions for further investigations.", "target": ["Wir untersuchen kontingenzbewusste und kontrollierbare Aspekte bei der Erkundung und erreichen die beste Leistung auf Montezumas Rache ohne Expertendemonstrationen.", "In diesem Beitrag wird das Problem der Extraktion einer aussagekräftigen Zustandsrepräsentation untersucht, die bei der Erkundung einer spärlichen Belohnungsaufgabe hilft, indem kontrollierbare (erlernte) Merkmale des Zustands identifiziert werden.", "In diesem Beitrag wird die neuartige Idee vorgestellt, Kontingenzbewusstsein zur Unterstützung der Exploration bei spärlich belohnten Reinforcement Learning Aufgaben zu verwenden, und es werden Ergebnisse auf dem neuesten Stand der Technik erzielt."]} +{"source": "Disentangling factors of variation has always been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, bad quality of generated images from encodings, lack of identity information, etc. In this paper, we proposed a supervised algorithm called DNA-GAN trying to disentangle different attributes of images. The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain another two different representations which could be decoded into images. In order to obtain realistic images and also disentangled representations, we introduced the discriminator for adversarial training. Experiments on Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and the advantage of overcoming limitations existing in other methods.", "target": ["Wir haben einen überwachten Algorithmus, DNA-GAN, vorgeschlagen, um mehrere Attribute von Bildern zu entwirren.", "Diese Arbeit untersucht das Problem der attributbedingten Bilderzeugung mit Hilfe von generativen adversen Netzwerken und schlägt vor, Bilder aus Attributen und latentem Code als High-Level-Repräsentation zu erzeugen.", "In diesem Beitrag wird eine neue Methode zur Entflechtung verschiedener Bildattribute unter Verwendung einer neuartigen DNA-Struktur GAN vorgeschlagen."]} +{"source": "Representations learnt through deep neural networks tend to be highly informative, but opaque in terms of what information they learn to encode. We introduce an approach to probabilistic modelling that learns to represent data with two separate deep representations: an invariant representation that encodes the information of the class from which the data belongs, and an equivariant representation that encodes the symmetry transformation defining the particular data point within the class manifold (equivariant in the sense that the representation varies naturally with symmetry transformations). This approach to representation learning is conceptually transparent, easy to implement, and in-principle generally applicable to any data comprised of discrete classes of continuous distributions (e.g. objects in images, topics in language, individuals in behavioural data). 
We demonstrate qualitatively compelling representation learning and competitive quantitative performance, in both supervised and semi-supervised settings, versus comparable modelling approaches in the literature with little fine tuning.", "target": ["In diesem Beitrag wird ein neuartiges generatives Modellierungsverfahren für latente Variablen vorgestellt, das die Darstellung globaler Informationen in einer latenten Variable und lokaler Informationen in einer anderen latenten Variable ermöglicht.", "Die Arbeit stellt eine VAE vor, die Etiketten verwendet, um die gelernte Repräsentation in einen invarianten und einen kovarianten Teil zu trennen."]} +{"source": "Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning).\n\n Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs when applied in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.\n", "target": ["Wir nähern uns dem Problem des aktiven Lernens als einem Kernmengenauswahlproblem und zeigen, dass dieser Ansatz besonders in der Batch Active Learning Umgebung nützlich ist, die beim Training von CNNs entscheidend ist.", "Die Autoren stellen einen Algorithmus-agnostischen aktiven Lernalgorithmus für die Mehrklassen-Klassifizierung vor.", "In dem Beitrag wird ein aktiver Lernalgorithmus im Batch-Modus für CNN als Kernmengenproblem vorgeschlagen, der Zufallsstichproben und Unsicherheitsstichproben übertrifft.", "Untersucht aktives Lernen für Convolutional Neural Networks und formuliert das aktive Lernproblem als Kernmengenauswahl und stellt eine neue Strategie vor."]} +{"source": "Recurrent neural networks are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to correctly solve them exist over long time scales, because EVGP prevents important gradient components from being back-propagated adequately over a large number of steps. We introduce a simple stochastic algorithm (\\textit{h}-detach) that is specific to LSTM optimization and targeted towards addressing this problem. Specifically, we show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long term dependencies (which we show empirically), their suppression can prevent LSTMs from capturing them. 
Our algorithm\\footnote{Our code is available at https://github.com/bhargav104/h-detach. } prevents gradients flowing through this path from getting suppressed, thus allowing the LSTM to capture such dependencies better. We show significant improvements over vanilla LSTM gradient based training in terms of convergence speed, robustness to seed and learning rate, and generalization using our modification of LSTM gradient on various benchmark datasets.", "target": ["Ein einfacher Algorithmus zur Verbesserung der Optimierung und Handhabung langfristiger Abhängigkeiten in LSTM.", "In diesem Beitrag wird ein einfacher stochastischer Algorithmus namens h-detach vorgestellt, der speziell für die LSTM-Optimierung entwickelt wurde und auf die Lösung dieses Problems ausgerichtet ist.", "Vorschlagen einer einfachen Änderung des LSTM-Trainingsverfahrens, um die Gradientenfortpflanzung entlang der Zellzustände oder den \"linearen zeitlichen Pfad\" zu erleichtern."]} +{"source": "Convolutional Neural Networks (CNNs) significantly improve the state-of-the-art for many applications, especially in computer vision. However, CNNs still suffer from a tendency to confidently classify out-distribution samples from unknown classes into pre-defined known classes. Further, they are also vulnerable to adversarial examples. We are relating these two issues through the tendency of CNNs to over-generalize for areas of the input space not covered well by the training set. We show that a CNN augmented with an extra output class can act as a simple yet effective end-to-end model for controlling over-generalization. As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples. To help select a representative natural out-distribution set among available ones, we propose a simple measurement to assess an out-distribution set's fitness. We also demonstrate that training such an augmented CNN with representative out-distribution natural datasets and some interpolated samples allows it to better handle a wide range of unseen out-distribution samples and black-box adversarial examples without training it on any adversaries. Finally, we show that generation of white-box adversarial attacks using our proposed augmented CNN can become harder, as the attack algorithms have to get around the rejection regions when generating actual adversaries.", "target": ["Das richtige Training von CNNs mit der Dustbin-Klasse erhöht ihre Robustheit gegenüber adversarial Angriffen und ihre Fähigkeit, mit out-of-distribution Beispielen umzugehen.", "In dieser Arbeit wird vorgeschlagen, ein zusätzliches Label zur Erkennung von OOD-Proben und adversarial Beispielen in CNN-Modellen hinzuzufügen.", "In dem Beitrag wird eine zusätzliche Klasse vorgeschlagen, die natürliche out-of-distribution Bilder und interpolierte Bilder für adversarial und Out-of-Distribution Beispiele in CNNs einbezieht."]} +{"source": "Modern deep artificial neural networks have achieved impressive results through models with very large capacity---compared to the number of training examples---that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case of stochastic gradient descent or parameter sharing in convolutional layers, or explicit. 
Most common explicit regularization techniques, such as dropout and weight decay, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have been proven successful in terms of results, they seem to waste capacity. In contrast, data augmentation techniques reduce the generalization error by increasing the number of training examples and without reducing the effective capacity. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone---without any other explicit regularization techniques---can achieve the same performance or higher as regularized models, especially when training with fewer examples.", "target": ["In einem tiefen Convolutional Neural Network, das mit einem ausreichenden Maß an Datenerweiterung trainiert und durch SGD optimiert wurde, könnten explizite Regularisierer (Gewichtsverfall und Dropout) keine zusätzliche Verbesserung der Generalisierung bewirken.", "In diesem Beitrag wird die Datenerweiterung als Alternative zu den gängigen Regularisierungstechniken vorgeschlagen. Es wird gezeigt, dass für einige wenige Referenzmodelle/Aufgaben die gleiche Generalisierungsleistung nur durch Datenerweiterung erreicht werden kann.", "In dieser Arbeit wird eine systematische Studie zur Datenerweiterung bei der Bildklassifizierung mit tiefen neuronalen Netzen vorgestellt, die zeigt, dass die Datenerweiterung einige gängige Regularisierer wie Gewichtsverfall und Dropout ersetzen kann."]} +{"source": "Text editing on mobile devices can be a tedious process. To perform various editing operations, a user must repeatedly move his or her fingers between the text input area and the keyboard, making multiple round trips and breaking the flow of typing. In this work, we present Gedit, a system of on-keyboard gestures for convenient mobile text editing. Our design includes a ring gesture and flicks for cursor control, bezel gestures for mode switching, and four gesture shortcuts for copy, paste, cut, and undo. Variations of our gestures exist for one and two hands. We conducted an experiment to compare Gedit with the de facto touch+widget based editing interactions. Our results showed that Gedit’s gestures were easy to learn, 24% and 17% faster than the de facto interactions for one- and two-handed use, respectively, and preferred by participants.", "target": ["In dieser Arbeit stellen wir Gedit vor, ein System von Gesten auf der Tastatur zur bequemen mobilen Textbearbeitung.", "Berichtet über den Entwurf und die Bewertung der Gedit-Interaktionstechniken.", "Präsentiert eine neue Reihe von Touch-Gesten für den nahtlosen Übergang zwischen Texteingabe und Textbearbeitung auf mobilen Geräten."]} +{"source": "Deep learning achieves remarkable generalization capability with overwhelming number of model parameters. Theoretical understanding of deep learning generalization receives recent attention yet remains not fully explored. This paper attempts to provide an alternative understanding from the perspective of maximum entropy. We first derive two feature conditions that softmax regression strictly apply maximum entropy principle. DNN is then regarded as approximating the feature conditions with multilayer feature learning, and proved to be a recursive solution towards maximum entropy principle. 
The connection between DNN and maximum entropy well explains why typical designs such as shortcut and regularization improves model generalization, and provides instructions for future model development.", "target": ["Wir beweisen, dass DNN eine rekursiv approximierte Lösung für das Prinzip der maximalen Entropie ist.", "Es wird eine Herleitung vorgestellt, die ein DNN mit der rekursiven Anwendung der maximalen Entropie-Modellanpassung verbindet.", "Die Arbeit zielt darauf ab, das Deep Learning aus der Perspektive des Prinzips der maximalen Entropie zu betrachten."]} +{"source": " As people learn to navigate the world, autonomic nervous system (e.g., ``fight or flight) responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.) Physiological changes are correlated with these biological preparations to protect one-self from danger. We present a novel approach to reinforcement learning that leverages a task-independent intrinsic reward function trained on peripheral pulse measurements that are correlated with human autonomic nervous system responses. Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency. We test this in a simulated driving environment and show that it can increase the speed of learning and reduce the number of collisions during the learning stage.", "target": ["Wir stellen einen neuartigen Ansatz zum Reinforcement Learning vor, der eine aufgabenunabhängige intrinsische Belohnungsfunktion nutzt, die anhand von peripheren Pulsmessungen trainiert wird, die mit den Reaktionen des menschlichen autonomen Nervensystems korreliert sind. ", "Schlägt einen Rahmen für das Reinforcement Learning vor, der auf der menschlichen emotionalen Reaktion im Kontext des autonomen Fahrens basiert.", "Die Autoren schlagen vor, Signale, wie z.B. grundlegende autonome viszerale Reaktionen, die die Entscheidungsfindung beeinflussen, innerhalb des RL-Rahmens zu verwenden, indem RL-Belohnungsfunktionen mit einem Modell ergänzt werden, das direkt aus den Reaktionen des menschlichen Nervensystems gelernt wurde.", "Schlägt vor, physiologische Signale zu nutzen, um die Leistung von Algorithmen des Reinforcement Learnings zu verbessern und eine intrinsische Belohnungsfunktion zu erstellen, die weniger spärlich ist, indem die Herzpulsamplitude gemessen wird."]} +{"source": "Deep convolutional neural networks (CNNs) are known to be robust against label noise on extensive datasets. However, at the same time, CNNs are capable of memorizing all labels even if they are random, which means they can memorize corrupted labels. Are CNNs robust or fragile to label noise? Much of researches focusing on such memorization uses class-independent label noise to simulate label corruption, but this setting is simple and unrealistic. In this paper, we investigate the behavior of CNNs under class-dependently simulated label noise, which is generated based on the conceptual distance between classes of a large dataset (i.e., ImageNet-1k). Contrary to previous knowledge, we reveal CNNs are more robust to such class-dependent label noise than class-independent label noise. 
We also demonstrate the networks under class-dependent noise situations learn similar representation to the no noise situation, compared to class-independent noise situations.", "target": ["Sind CNNs robust oder anfällig für Labelstörungen? Praktisch gesehen sind sie robust.", "Die Autoren testen die Robustheit von CNNs gegenüber Labelstörungen anhand des ImageNet 1k-Baums von WordNet.", "Eine Analyse der Leistung des Modells eines Convolutional Neural Networks, wenn klassenabhängige und klassenunabhängige Störungen eingeführt werden.", "Zeigt, dass CNNs robuster gegenüber klassenrelevantem Labelstörungen sind und argumentiert, dass reale Störungen klassenrelevant sein sollten."]} +{"source": "Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure and slow iterative sampling, while Generative Adversarial Networks (GANs), have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.\n", "target": ["Hochwertige Audiosynthese mit GANs.", "Schlägt einen Ansatz vor, der das GAN-Framework nutzt, um Audio durch die Modellierung von logarithmischen Größen und momentanen Frequenzen mit ausreichender Frequenzauflösung im Spektralbereich zu erzeugen. ", "Eine Strategie zur Erzeugung von Audio-Samples aus Rauschen mit GANs, mit Änderungen an der Architektur und der Darstellung, die notwendig sind, um überzeugende Audios zu erzeugen, die einen interpretierbaren latenten Code enthalten.", "Stellt eine einfache Idee zur besseren Darstellung von Audiodaten vor, so dass Convolutional Modelle wie generative adversarische Netze angewendet werden können."]} +{"source": "In this work we propose a novel approach for learning graph representation of the data using gradients obtained via backpropagation. Next we build a neural network architecture compatible with our optimization approach and motivated by graph filtering in the vertex domain. We demonstrate that the learned graph has richer structure than often used nearest neighbors graphs constructed based on features similarity. Our experiments demonstrate that we can improve prediction quality for several convolution on graphs architectures, while others appeared to be insensitive to the input graph.", "target": ["Graph-Optimierung mit Signalfilterung im Vertex-Bereich.", "Die Arbeit untersucht das Lernen der Adjazenzmatrix eines Graphen mit spärlich verbunden ungerichteten Graphen mit nicht-negativen Rand Gewichten unter Verwendung eines projizierten Sub-Gradient Descent Algorithmus.", "Entwicklung eines neuen Verfahrens zur Rückkopplung auf der Adjazenzmatrix eines neuronalen Netzes"]} +{"source": "The use of AR in an industrial context could help for the training of new operators. 
To be able to use an AR guidance system, we need a tool to quickly create a 3D representation of the assembly line and of its AR annotations. This tool should be very easy to use by an operator who is not an AR or VR specialist: typically the manager of the assembly line. This is why we proposed WAAT, a 3D authoring tool allowing user to quickly create 3D models of the workstations, and also test the AR guidance placement. WAAT makes on-site authoring possible, which should really help to have an accurate 3D representation of the assembly line. The verification of AR guidance should also be very useful to make sure everything is visible and doesn't interfere with technical tasks. In addition to these features, our future work will be directed in the deployment of WAAT into a real boiler assembly line to assess the usability of this solution.", "target": ["Dieses Papier beschreibt ein 3D Authoring Tool für die Bereitstellung von AR in Montagelinien der Industrie 4.0.", "Das Papier befasst sich mit der Frage, wie AR Authoring Tools das Training von Fließbandsystemen unterstützen und schlägt einen Ansatz vor.", "Ein AR-Leitsystem für industrielle Montagelinien, das die Erstellung von AR-Inhalten vor Ort ermöglicht.", "Stellt ein System vor, mit dem Fabrikarbeiter mithilfe eines Augmented-Reality-Systems effizienter geschult werden können. "]} +{"source": "Generative adversarial network (GAN) is one of the best known unsupervised learning techniques these days due to its superior ability to learn data distributions. In spite of its great success in applications, GAN is known to be notoriously hard to train. The tremendous amount of time it takes to run the training algorithm and its sensitivity to hyper-parameter tuning have been haunting researchers in this area. To resolve these issues, we need to first understand how GANs work. Herein, we take a step toward this direction by examining the dynamics of GANs. We relate a large class of GANs including the Wasserstein GANs to max-min optimization problems with the coupling term being linear over the discriminator. By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm in training GANs would in fact converge to a stationary solution with a sublinear rate. The same framework also applies to multi-task learning and distributional robust learning problems. We verify our analysis on numerical examples with both synthetic and real data sets. We hope our analysis shed light on future studies on the theoretical properties of relevant machine learning problems.", "target": ["Wir zeigen, dass der beim Training von GANs weit verbreitete iterative Algorithmus erster Ordnung bei richtiger Wahl der Schrittweite tatsächlich mit einer sublinearen Rate zu einer stationären Lösung konvergiert.", "Diese Arbeit verwendet GANs und Multi-Task-Lernen, um eine Konvergenzgarantie für primär-duale Algorithmen auf bestimmten Min-Max-Problemen zu geben.", "Analysiert die Lerndynamik von GANs durch Formulierung des Problems als primär-duales Optimierungsproblem unter Annahme einer begrenzten Klasse von Modellen."]} +{"source": "Social dilemmas, where mutual cooperation can lead to high payoffs but participants face incentives to cheat, are ubiquitous in multi-agent interaction. We wish to construct agents that cooperate with pure cooperators, avoid exploitation by pure defectors, and incentivize cooperation from the rest. 
However, often the actions taken by a partner are (partially) unobserved or the consequences of individual actions are hard to predict. We show that in a large class of games good strategies can be constructed by conditioning one's behavior solely on outcomes (ie. one's past rewards). We call this consequentialist conditional cooperation. We show how to construct such strategies using deep reinforcement learning techniques and demonstrate, both analytically and experimentally, that they are effective in social dilemmas beyond simple matrix games. We also show the limitations of relying purely on consequences and discuss the need for understanding both the consequences of and the intentions behind an action.", "target": ["Wir zeigen, wie man Deep RL nutzt, um Agenten zu konstruieren, die soziale Dilemmata jenseits von Matrixspielen lösen können.", "Lernen von Zwei-Spieler General-Summen-Spielen mit unvollkommenen Informationen.", "Spezifiziert eine Auslösestrategie (CCC) und einen entsprechenden Algorithmus, der die Konvergenz zu effizienten Ergebnissen in sozialen Dilemmas zeigt, ohne dass die Agenten die Handlungen der anderen beobachten müssen."]} +{"source": "In distributed training, the communication cost due to the transmission of gradients\n or the parameters of the deep model is a major bottleneck in scaling up the number\n of processing nodes. To address this issue, we propose dithered quantization for\n the transmission of the stochastic gradients and show that training with Dithered\n Quantized Stochastic Gradients (DQSG) is similar to the training with unquantized\n SGs perturbed by an independent bounded uniform noise, in contrast to the other\n quantization methods where the perturbation depends on the gradients and hence,\n complicating the convergence analysis. We study the convergence of training\n algorithms using DQSG and the trade off between the number of quantization\n levels and the training time. Next, we observe that there is a correlation among the\n SGs computed by workers that can be utilized to further reduce the communication\n overhead without any performance loss. Hence, we develop a simple yet effective\n quantization scheme, nested dithered quantized SG (NDQSG), that can reduce the\n communication significantly without requiring the workers communicating extra\n information to each other. We prove that although NDQSG requires significantly\n less bits, it can achieve the same quantization variance bound as DQSG. Our\n simulation results confirm the effectiveness of training using DQSG and NDQSG\n in reducing the communication bits or the convergence time compared to the\n existing methods without sacrificing the accuracy of the trained model.", "target": ["Die Arbeit schlägt zwei Quantisierungsschemata für die Kommunikation von stochastischen Gradienten beim verteilten Lernen vor und analysiert diese, um die Kommunikationskosten im Vergleich zum Stand der Technik zu reduzieren und gleichzeitig die gleiche Genauigkeit zu erhalten. 
", "Die Autoren schlagen vor, die stochastischen Gradienten, die durch den Trainingsprozess berechnet werden, mit einer Dither-Quantisierung zu versehen, um den Quantisierungsfehler zu verbessern und im Vergleich zu den Grundlinien bessere Ergebnisse zu erzielen, und schlagen ein verschachteltes Schema zur Reduzierung der Kommunikationskosten vor.", "Die Autoren stellen eine Verbindung zwischen der Reduzierung der Kommunikation bei der verteilten Optimierung und der Dither-Quantisierung her und entwickeln zwei neue verteilte Trainingsalgorithmen, bei denen der Kommunikationsaufwand erheblich reduziert wird."]} +{"source": "Deep neural networks have been shown to perform well in many classical machine learning problems, especially in image classification tasks. However, researchers have found that neural networks can be easily fooled, and they are surprisingly sensitive to small perturbations imperceptible to humans. Carefully crafted input images (adversarial examples) can force a well-trained neural network to provide arbitrary outputs. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. In this paper we propose a new defensive mechanism under the generative adversarial network~(GAN) framework. We model the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game. We show empirically that our adversarial network approach works well against black box attacks, with performance on par with state-of-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.\n", "target": ["Gemeinsames Trainieren eines Netzes zur Erzeugung von Störgeräuschen und eines Klassifizierungsnetzes, um eine bessere Robustheit gegenüber adversarial Angriffen zu erreichen.", "Eine GAN-Lösung für tiefe Klassifizierungsmodelle, die gegenüber White- und Blackbox-Angriffen robuste Modelle erzeugt. ", "Die Arbeit schlägt einen Verteidigungsmechanismus gegen adversarial Angriffe vor, der GANs verwendet, wobei generierte Störungen als adversarial Beispiele und ein Diskriminator zur Unterscheidung zwischen ihnen verwendet werden."]} +{"source": "Deep learning has become the state of the art approach in many machine learning problems such as classification. It has recently been shown that deep learning is highly vulnerable to adversarial perturbations. Taking the camera systems of self-driving cars as an example, small adversarial perturbations can cause the system to make errors in important tasks, such as classifying traffic signs or detecting pedestrians. Hence, in order to use deep learning without safety concerns a proper defense strategy is required. We propose to use ensemble methods as a defense strategy against adversarial perturbations. We find that an attack leading one model to misclassify does not imply the same for other networks performing the same task. This makes ensemble methods an attractive defense strategy against adversarial attacks. 
We empirically show for the MNIST and the CIFAR-10 data sets that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.", "target": ["Verwendung von Ensemble-Methoden zur Verteidigung gegen adversarial Störungen bei tiefen neuronalen Netzen.", "In diesem Beitrag wird vorgeschlagen, Ensembling als gegnerischen Verteidigungsmechanismus zu verwenden.", "Empirische Untersuchung der Robustheit verschiedener Ensembles tiefer neuronaler Netze gegenüber den beiden Arten von Angriffen, FGSM und BIM, auf zwei populären Datensätzen, MNIST und CIFAR10."]} +{"source": "In this paper, we propose the Associative Conversation Model that generates visual information from textual information and uses it for generating sentences in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms using images for the dialogue systems since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using context vector fusing associative visual information and sentence textual information. A comparative experiment between our proposed model and a model without association showed that our proposed model is generating useful sentences by associating visual information related to sentences. Furthermore, analysis experiment of visual association showed that our proposed model generates (associates) visual information effective for sentence generation.", "target": ["Vorschlag für eine Methode zur Erzeugung von Sätzen, die auf der Fusion von Textinformationen und visuellen Informationen, die mit den Textinformationen verbunden sind, basiert.", "Diese Arbeit beschreibt ein Deep Learning Modell für Dialogsysteme, das sich visuelle Informationen zunutze macht.", "In diesem Beitrag wird ein neuartiger Datensatz für geerdete Dialoge vorgeschlagen und eine rechnerische Beobachtung gemacht, dass er helfen könnte, auch bei textbasierten Dialogen über das Sehen nachzudenken.", "Schlägt vor, die traditionellen textbasierten Ansätze zur Satzgenerierung/Dialog durch die Einbeziehung visueller Informationen zu ergänzen, indem ein Datenpaket gesammelt wird, das sowohl aus Text als auch aus zugehörigen Bildern oder Videos besteht."]} +{"source": "Feedforward convolutional neural network has achieved a great success in many computer vision tasks. While it validly imitates the hierarchical structure of biological visual system, it still lacks one essential architectural feature: contextual recurrent connections with feedback, which widely exists in biological visual system. In this work, we designed a Contextual Recurrent Convolutional Network with this feature embedded in a standard CNN structure. We found that such feedback connections could enable lower layers to ``rethink\" about their representations given the top-down contextual information. We carefully studied the components of this network, and showed its robustness and superiority over feedforward baselines in such tasks as noise image classification, partially occluded object recognition and fine-grained image classification. 
We believed this work could be an important step to help bridge the gap between computer vision models and real biological visual system.", "target": ["Wir haben ein neuartiges kontextuelles rekurrentes Convolutional Network mit robusten Eigenschaften für visuelles Lernen vorgeschlagen.", "In diesem Beitrag wird eine Feedback Verbindung vorgestellt, um das Lernen von Merkmalen durch die Einbeziehung von Kontextinformationen zu verbessern.", "In der Arbeit wird vorgeschlagen, \"rekurrente\" Verbindungen in ein Convolution Network mit Gating-Mechanismus einzufügen."]} +{"source": "Deep neural networks have led to a series of breakthroughs, dramatically improving the state-of-the-art in many domains. The techniques driving these advances, however, lack a formal method to account for model uncertainty. While the Bayesian approach to learning provides a solid theoretical framework to handle uncertainty, inference in Bayesian-inspired deep neural networks is difficult. In this paper, we provide a practical approach to Bayesian learning that relies on a regularization technique found in nearly every modern network, batch normalization. We show that training a deep network using batch normalization is equivalent to approximate inference in Bayesian models, and we demonstrate how this finding allows us to make useful estimates of the model uncertainty. Using our approach, it is possible to make meaningful uncertainty estimates using conventional architectures without modifying the network or the training procedure. Our approach is thoroughly validated in a series of empirical experiments on different tasks and using various measures, showing it to outperform baselines on a majority of datasets with strong statistical significance.", "target": ["Wir zeigen, dass das Training eines tiefen Netzwerks unter Verwendung von Batch-Normalisierung gleichbedeutend ist mit approximativer Inferenz in Bayes'schen Modellen, und wir demonstrieren, wie diese Erkenntnis es uns ermöglicht, nützliche Schätzungen der Modellunsicherheit in konventionellen Netzwerken vorzunehmen.", "In diesem Beitrag wird vorgeschlagen, die Batch-Normalisierung zum Testzeitpunkt zu verwenden, um die Vorhersageunsicherheit zu erhalten, und es wird gezeigt, dass die Monte-Carlo Vorhersage zum Testzeitpunkt unter Verwendung der Batch-Norm besser ist als der Dropout.", "Schlägt vor, dass das Regularisierungsverfahren, das als Batch-Normalisierung bezeichnet wird, als Durchführung einer approximativen Bayes'schen Inferenz verstanden werden kann, die in Bezug auf die Schätzungen der Unsicherheit, die sie produziert, ähnlich wie MC-Dropout funktioniert."]} +{"source": "Data-parallel neural network training is network-intensive, so gradient dropping was designed to exchange only large gradients. However, gradient dropping has been shown to slow convergence. We propose to improve convergence by having each node combine its locally computed gradient with the sparse global gradient exchanged over the network. We empirically confirm with machine translation tasks that gradient dropping with local gradients approaches convergence 48% faster than non-compressed multi-node training and 28% faster compared to vanilla gradient dropping. 
We also show that gradient dropping with a local gradient update does not reduce the model's final quality.", "target": ["Wir verbessern das Gradient Dropping (eine Technik, bei der nur große Gradienten bei verteiltem Training ausgetauscht werden), indem wir lokale Gradienten bei der Aktualisierung der Parameter einbeziehen, um den Qualitätsverlust zu verringern und die Trainingszeit weiter zu verkürzen.", "In dieser Arbeit werden 3 Modi für die Kombination von lokalen und globalen Gradienten vorgeschlagen, um mehr Rechenknoten besser zu nutzen.", "Befasst sich mit dem Problem der Verringerung des Kommunikationsbedarfs bei der Umsetzung verteilter Optimierungsverfahren, insbesondere SGD."]} +{"source": " We establish the relation between Distributional RL and the Upper Confidence Bound (UCB) approach to exploration.\n In this paper we show that the density of the Q function estimated by Distributional RL can be successfully used for the estimation of UCB. This approach does not require counting and, therefore, generalizes well to the Deep RL. We also point to the asymmetry of the empirical densities estimated by the Distributional RL algorithms like QR-DQN. This observation leads to the reexamination of the variance's performance in the UCB type approach to exploration. We introduce truncated variance as an alternative estimator of the UCB and a novel algorithm based on it. We empirically show that the newly introduced algorithm achieves better performance in the multi-armed bandit setting. Finally, we extend this approach to the high-dimensional setting and test it on the Atari 2600 games. The new approach achieves better performance compared to QR-DQN in 26 of 49 games, with 13 ties.", "target": ["Exploration mit Distributional RL und trunkierter Varianz.", "Stellt eine RL-Methode vor, um mit Hilfe von UCB-Techniken Kompromisse zwischen Entdeckung und Ausnutzen zu verwalten.", "Eine Methode zur Verwendung der durch die Quantile Regression DQN gelernten Verteilung für die Entdeckung, anstelle der üblichen Epsilon-Greedy-Strategie.", "Schlägt neue Algorithmen (QUCB und QUCB+) vor, um den Kompromiss der Entdeckung bei mehrarmigen Banditen und allgemeiner beim Reinforcement Learning zu bewältigen."]} +{"source": "Good representations facilitate transfer learning and few-shot learning. Motivated by theories of language and communication that explain why communities with large number of speakers have, on average, simpler languages with more regularity, we cast the representation learning problem in terms of learning to communicate. Our starting point sees traditional autoencoders as a single encoder with a fixed decoder partner that must learn to communicate. Generalizing from there, we introduce community-based autoencoders in which multiple encoders and decoders collectively learn representations by being randomly paired up on successive training iterations. 
Our experiments show that increasing community sizes reduce idiosyncrasies in the learned codes, resulting in more invariant representations with increased reusability and structure.", "target": ["Auf der Grundlage von Sprach- und Kommunikationstheorien stellen wir gemeinschaftsbasierte Autoencoder vor, bei denen mehrere Encoder und Decoder gemeinsam strukturierte und wiederverwendbare Repräsentationen lernen.", "Die Autoren befassen sich mit dem Problem des Repräsentationslernens, zielen darauf ab, wiederverwendbare und strukturierte Repräsentationen zu erstellen, argumentieren, dass die Koadaptation zwischen Encoder und Decoder in der traditionellen AE zu einer schlechten Repräsentation führt, und führen gemeinschaftsbasierte Auto-Encoder ein.", "In diesem Beitrag wird ein gemeinschaftsbasierter Autoencoder vorgestellt, der sich mit der Koadaptation von Encodern und Decodern befasst und darauf abzielt, bessere Repräsentationen zu erstellen."]} +{"source": "Humans are experts at high-fidelity imitation -- closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task.\n The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions.", "target": ["Wir stellen MetaMimic vor, einen Algorithmus, der als Input einen Demonstrationsdatensatz nimmt und (i) eine One-Shot High-Fidelity Imitationsstrategie und (ii) eine unbedingte Aufgabenregel ausgibt.", "Die Arbeit befasst sich mit dem Problem der One-Shot-Imitation mit hoher Imitationsgenauigkeit, indem es DDPGfD so erweitert, dass nur Zustandstrajektorien verwendet werden.", "In diesem Papier wird ein Ansatz für eine One-Shot Imitation mit hoher Genauigkeit vorgeschlagen, der das allgemeine Problem der Exploration beim Imitationslernen angeht.", "Präsentiert eine RL-Methode zum Lernen aus Videodemonstrationen ohne Zugang zu Expertenhandlungen."]} +{"source": "Normalization methods are a central building block in the deep learning toolbox. They accelerate and stabilize training, while decreasing the dependence on manually tuned learning rate schedules. When learning from multi-modal distributions, the effectiveness of batch normalization (BN), arguably the most prominent normalization method, is reduced. As a remedy, we propose a more flexible approach: by extending the normalization to more than a single mean and variance, we detect modes of data on-the-fly, jointly normalizing samples that share common features. 
We demonstrate that our method outperforms BN and other widely used normalization techniques in several experiments, including single and multi-task datasets.", "target": ["Wir stellen eine neuartige Normalisierungsmethode für tiefe neuronale Netze vor, die robust gegenüber Multimodalitäten in den dazwischenliegenden Merkmalsverteilungen ist.", "Normalisierungsmethode, die eine multimodale Verteilung im Merkmalsraum lernt.", "Vorschlagen einer Verallgemeinerung der Batch-Normalisierung unter der Annahme, dass die Statistik der Aktivierungen der Einheiten über die Batches und über die räumlichen Dimensionen nicht unimodal ist."]} +{"source": "Multilingual machine translation, which translates multiple languages with a single model, has attracted much attention due to its efficiency of offline training and online serving. However, traditional multilingual translation usually yields inferior accuracy compared with the counterpart using individual models for each language pair, due to language diversity and model capacity limitations. In this paper, we propose a distillation-based approach to boost the accuracy of multilingual machine translation. Specifically, individual models are first trained and regarded as teachers, and then the multilingual model is trained to fit the training data and match the outputs of individual models simultaneously through knowledge distillation. Experiments on IWSLT, WMT and Ted talk translation datasets demonstrate the effectiveness of our method. Particularly, we show that one model is enough to handle multiple languages (up to 44 languages in our experiment), with comparable or even better accuracy than individual models.", "target": ["Wir haben eine auf Wissensdestillation basierende Methode vorgeschlagen, um die Genauigkeit der mehrsprachigen neuronalen maschinellen Übersetzung zu erhöhen.", "Ein mehrsprachiges neuronales maschinelles Übersetzungsmodell, das zunächst separate Modelle für jedes Sprachpaar trainiert und dann eine Destillation durchführt.", "Ziel der Arbeit ist es, ein maschinelles Übersetzungsmodell zu trainieren, indem der standardmäßige Cross-Entropie-Verlust durch eine Destillationskomponente ergänzt wird, die auf individuellen Lehrermodellen (für einzelne Sprachpaare) basiert."]} +{"source": "What makes humans so good at solving seemingly complex video games? Unlike computers, humans bring in a great deal of prior knowledge about the world, enabling efficient decision making. This paper investigates the role of human priors for solving video games. Given a sample game, we conduct a series of ablation studies to quantify the importance of various priors. We do this by modifying the video game environment to systematically mask different types of visual information that could be used by humans as priors. We find that removal of some prior knowledge causes a drastic degradation in the speed with which human players solve the game, e.g. from 2 minutes to over 20 minutes. 
Furthermore, our results indicate that general priors, such as the importance of objects and visual consistency, are critical for efficient game-play.", "target": ["Wir untersuchen die verschiedenen Arten von Vorwissen, die das menschliche Lernen unterstützen, und stellen fest, dass allgemeine Vorannahmen über Objekte die wichtigste Rolle bei der Steuerung des menschlichen Spielverhaltens spielen.", "Die Autoren untersuchen experimentell, welche Aspekte der menschlichen Vorurteile für das Reinforcement Learning in Videospielen wichtig sind.", "Die Autoren stellen eine Studie über die von Menschen beim Spielen von Videospielen verwendeten Vorurteile vor und zeigen, dass es eine Taxonomie von Merkmalen gibt, die sich in unterschiedlichem Maße auf die Fähigkeit auswirken, Aufgaben im Spiel zu erledigen."]} +{"source": "Driven by the need for parallelizable hyperparameter optimization methods, this paper studies \\emph{open loop} search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions.\n In particular, we propose the use of $k$-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a $k$-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a $k$-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from $k$-DPPs defined over spaces with a mixture of discrete and continuous dimensions. Our experiments show significant benefits over uniform random search in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel.", "target": ["Aufgrund des Bedarfs an parallelisierbaren, offenen Hyperparameter-Optimierungsverfahren schlagen wir die Verwendung von k-determinanten Punktprozessen in der Hyperparameter-Optimierung mittels Zufallssuche vor.", "Schlägt die Verwendung des k-DPP zur Auswahl von Kandidatenpunkten bei der Suche nach Hyperparametern vor.", "Die Autoren schlagen k-DPP als Open-Loop-Methode für die Hyperparameter-Optimierung vor und bieten eine empirische Studie und einen Vergleich mit anderen Methoden.", "Betrachtet die nicht-sequenzielle und uninformierte Hyperparametersuche unter Verwendung determinanter Punktprozesse, die Wahrscheinlichkeitsverteilungen über Teilmengen einer Grundmenge sind, mit der Eigenschaft, dass Teilmengen mit \"vielfältigeren\" Elementen eine höhere Wahrscheinlichkeit haben."]} +{"source": "In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch.\n When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task.\n However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task.\n In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model.\n We eventually recommend a simple $L^2$ penalty using the pre-trained model as a reference, and we show that this approach 
behaves much better than the standard scheme using weight decay on a partially frozen network.", "target": ["Beim induktiven Transferlernen übertrifft die Feinabstimmung von vortrainierten Convolutional Networks das Training von Grund auf erheblich.", "Befasst sich mit dem Problem des Transferlernens in tiefen Netzwerken und schlägt einen Regularisierungsterm vor, der die Abweichung von der Initialisierung bestraft.", "schlägt eine Analyse verschiedener adaptiver Regularisierungstechniken für tiefes Transferlernen vor und konzentriert sich dabei auf die Verwendung einer L2-SP-Bedingung."]} +{"source": "Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We make contributions to this issue by considering a modified version of the fully connected layer we call a block diagonal inner product layer. These modified layers have weight matrices that are block diagonal, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to the fully connected layers. Block diagonal inner product layers can be achieved by either initializing a purely block diagonal weight matrix or by iteratively pruning off diagonal block entries. This method condenses network storage and speeds up the run time without significant adverse effect on the testing accuracy, thus offering a new approach to improve network computation efficiency.", "target": ["Wir betrachten neuronale Netze mit blockdiagonalen inneren Produktschichten aus Gründen der Effizienz.", "In diesem Beitrag wird vorgeschlagen, die inneren Schichten in einem neuronalen Netz blockdiagonal zu gestalten, und es wird erörtert, dass blockdiagonale Matrizen effizienter sind als Pruning und dass blockdiagonale Schichten zu effizienteren Netzen führen.", "Ersetzen von vollständig zusammenhängenden Schichten durch blockdiagonale vollständig zusammenhängende Schichten."]} +{"source": "One of the challenges in the study of generative adversarial networks is the instability of its training. \n In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator.\n Our new normalization technique is computationally light and easy to incorporate into existing implementations. \n We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.", "target": ["Wir schlagen eine neue Technik zur Gewichtsnormalisierung vor, die als spektrale Normalisierung bezeichnet wird, um das Training des Diskriminators von GANs zu stabilisieren.", "In dieser Arbeit wird die spektrale Regularisierung zur Normalisierung von GAN-Zielen verwendet, und das daraus resultierende GAN, SN-GAN genannt, gewährleistet im Wesentlichen die Lipschitz-Eigenschaft des Diskriminators.", "In dieser Arbeit wird eine \"spektrale Normalisierung\" vorgeschlagen, die einen großen Schritt nach vorne bei der Verbesserung des Trainings von GANs darstellt."]} +{"source": "Humans acquire complex skills by exploiting previously learned skills and making transitions between them. 
To empower machines with this ability, we propose a method that can learn transition policies which effectively connect primitive skills to perform sequential tasks without handcrafted rewards. To efficiently train our transition policies, we introduce proximity predictors which induce rewards gauging proximity to suitable initial states for the next skill. The proposed method is evaluated on a set of complex continuous control tasks in bipedal locomotion and robotic arm manipulation which traditional policy gradient methods struggle at. We demonstrate that transition policies enable us to effectively compose complex skills with existing primitive skills. The proposed induced rewards computed using the proximity predictor further improve training efficiency by providing more dense information than the sparse rewards from the environments. We make our environments, primitive skills, and code public for further research at https://youngwoon.github.io/transition .", "target": ["Übergangsstrategien ermöglichen es den Agenten, komplexe Fertigkeiten zusammenzustellen, indem sie zuvor erworbene primitive Fertigkeiten nahtlos miteinander verbinden.", "Schlägt ein Schema für den Übergang zu günstigen Strategiezuständen für die Ausführung gegebener Optionen in kontinuierlichen Domänen vor. Dabei kommen zwei gleichzeitig ablaufende Lernprozesse zum Einsatz.", "Vorgestellt wird eine Methode zum Erlernen von Strategien für den Übergang von einer Aufgabe zu einer anderen mit dem Ziel, komplexe Aufgaben unter Verwendung eines Schätzers für die Zustandsnähe zur Belohnung für die Übergangsstrategie abzuschließen.", "Schlägt ein neues Trainingsschema mit einer erlernten Hilfsbelohnungsfunktion zur Optimierung von Übergangsstrategien vor, die den Endzustand einer vorherigen Makroaktion/Option mit guten Anfangszuständen der folgenden Makroaktion/Option verbinden."]} +{"source": "Gated recurrent units (GRUs) were inspired by the common gated recurrent unit, long short-term memory (LSTM), as a means of capturing temporal structure with less complex memory unit architecture. Despite their incredible success in tasks such as natural and artificial language processing, speech, video, and polyphonic music, very little is understood about the specific dynamic features representable in a GRU network. As a result, it is difficult to know a priori how successful a GRU-RNN will perform on a given data set. In this paper, we develop a new theoretical framework to analyze one and two dimensional GRUs as a continuous dynamical system, and classify the dynamical features obtainable with such system.\n We found rich repertoire that includes stable limit cycles over time (nonlinear oscillations), multi-stable state transitions with various topologies, and homoclinic orbits. In addition, we show that any finite dimensional GRU cannot precisely replicate the dynamics of a ring attractor, or more generally, any continuous attractor, and is limited to finitely many isolated fixed points in theory. These findings were then experimentally verified in two dimensions by means of time series prediction.", "target": ["Wir klassifizieren die dynamischen Merkmale, die eine und zwei GRU-Zellen in kontinuierlicher Zeit erfassen können und die nicht erfasst werden können, und verifizieren unsere Ergebnisse experimentell mit k-schrittigen Zeitreihenvorhersagen. 
", "Die Autoren analysieren GRUs mit versteckten Größen von eins und zwei als zeitkontinuierliche dynamische Systeme und behaupten, dass die Ausdruckskraft der versteckten Zustandsdarstellung Vorwissen darüber liefern kann, wie gut eine GRU bei einem bestimmten Datensatz abschneiden wird.", "In diesem Beitrag werden GRUs aus der Perspektive dynamischer Systeme analysiert und es wird gezeigt, dass 2d-GRUs so trainiert werden können, dass sie eine Vielzahl von Fixpunkten annehmen und Linienattraktoren annähern können, aber keinen Ringattraktor nachahmen können.", "Konvertiert GRU-Gleichungen in kontinuierliche Zeit und nutzt Theorie und Erfahrungen, um 1- und 2-dimensionale GRU-Netzwerke zu untersuchen und jede Vielfalt der dynamischen Topologie in diesen Systemen zu zeigen."]} +{"source": "Stacked hourglass network has become an important model for Human pose estimation. The estimation of human body posture depends on the global information of the keypoints type and the local information of the keypoints location. The consistent processing of inputs and constraints makes it difficult to form differentiated and determined collaboration mechanisms for each stacked hourglass network. In this paper, we propose a Multi-Scale Stacked Hourglass (MSSH) network to high-light the differentiation capabilities of each Hourglass network for human pose estimation. The pre-processing network forms feature maps of different scales,and dispatch them to various locations of the stack hourglass network, where the small-scale features reach the front of stacked hourglass network, and large-scale features reach the rear of stacked hourglass network. And a new loss function is proposed for multi-scale stacked hourglass network. Different keypoints have different weight coefficients of loss function at different scales, and the keypoints weight coefficients are dynamically adjusted from the top-level hourglass network to the bottom-level hourglass network. Experimental results show that the pro-posed method is competitive with respect to the comparison algorithm on MPII and LSP datasets.", "target": ["Differenzierte Eingaben führen zu einer funktionalen Differenzierung des Netzes, und die Interaktion von Verlustfunktionen zwischen Netzen kann den Optimierungsprozess beeinflussen.", "Eine Abwandlung des ursprünglichen Sanduhrnetzes für die Schätzung von Einzelposen, die Verbesserungen gegenüber der ursprünglichen Basislinie bringt.", "Die Autoren erweitern ein gestapeltes Sanduhr-Netzwerk mit Inception-Resnet-A-Modulen und schlagen einen mehrskaligen Ansatz zur Schätzung der menschlichen Pose in RGB-Standbildern vor."]} +{"source": "We present a new unsupervised method for learning general-purpose sentence embeddings.\n Unlike existing methods which rely on local contexts, such as words\n inside the sentence or immediately neighboring sentences, our method selects, for\n each target sentence, influential sentences in the entire document based on a document\n structure. We identify a dependency structure of sentences using metadata\n or text styles. Furthermore, we propose a novel out-of-vocabulary word handling\n technique to model many domain-specific terms, which were mostly discarded by\n existing sentence embedding methods. 
We validate our model on several tasks\n showing 30% precision improvement in coreference resolution in a technical domain,\n and 7.5% accuracy increase in paraphrase detection compared to baselines.", "target": ["Um eine Satzeinbettung anhand von technischen Dokumenten zu trainieren, berücksichtigt unser Ansatz die Dokumentstruktur, um einen breiteren Kontext zu finden und Wörter außerhalb des Vokabulars zu behandeln.", "Präsentiert Ideen zur Verbesserung der Satzeinbettung durch die Einbeziehung von mehr Kontext.", "Lernen von Satzrepräsentationen mit Informationen über Satzabhängigkeiten.", "Erweitert die Idee der Bildung einer unüberwachten Repräsentation von Sätzen, wie sie im SkipThought-Ansatz verwendet wird, durch die Verwendung einer breiteren Reihe von Beweisen für die Bildung der Repräsentation eines Satzes."]} +{"source": "Neural network training relies on our ability to find \"good\" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood.\n\n In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple \"filter normalization\" method that helps us visualize loss function curvature, and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.", "target": ["Wir erforschen die Struktur neuronaler Verlustfunktionen und die Auswirkung von Verlustlandschaften auf die Generalisierung, indem wir eine Reihe von Visualisierungsmethoden verwenden.", "Diese Arbeit schlägt eine Methode zur Visualisierung der Verlustfunktion eines NN vor und gibt Einblicke in die Trainierbarkeit und Generalisierung von NNs.", "Untersucht die Nicht-Konvexität der Verlustfläche und der Optimierungspfade."]} +{"source": "Deep models are state-of-the-art for many computer vision tasks including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely-used, but highly-vulnerable mapping. We demonstrate that by leveraging a different output encoding, multi-way encoding, we can make models more robust. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks. We present state-of-the-art robustness results for black-box, white-box attacks, and achieve higher clean accuracy on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN when combined with adversarial training. 
The strength of our approach is also presented in the form of an attack for model watermarking, raising challenges in detecting stolen models.", "target": ["Wir zeigen, dass wir tiefe Modelle robuster gegen adversarial Angriffe machen können, indem wir eine Multi-way Output Encoding anstelle des weit verbreiteten One-Hot Encoding verwenden.", "In diesem Beitrag wird vorgeschlagen, die abschließende Cross-Entropie-Schicht, die auf One-Hot-Labels in Klassifikatoren trainiert wird, durch die Kodierung jedes Labels als hochdimensionalen Vektor zu ersetzen und den Klassifikator so zu trainieren, dass der L2-Abstand zur Kodierung der richtigen Klasse minimiert wird.", "Die Autoren schlagen eine neue Methode zur Abwehr von Angriffen vor, die im Vergleich zu den Basislinien erhebliche Vorteile bietet."]} +{"source": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.", "target": ["Wir stellen das erste NMT-Modell mit vollständig paralleler Dekodierung vor, das die Inferenzlatenz um das 10-fache reduziert.", "Diese Arbeit schlägt einen nicht-autoregressiven Decoder für den Encoder-Decoder-Rahmen vor, bei dem die Entscheidung, ein Wort zu erzeugen, nicht von der vorherigen Entscheidung der erzeugten Wörter abhängt.", "Diese Arbeit beschreibt einen Ansatz zur nicht-autoregressiven Dekodierung für neuronale maschinelle Übersetzung mit der Möglichkeit einer paralleleren Dekodierung, die zu einer erheblichen Geschwindigkeitssteigerung führen kann.", "Schlägt die Einführung einer Reihe latenter Variablen vor, die die Fruchtbarkeit jedes Ausgangswortes darstellen, um die Generierung des Zielsatzes nicht autoregressiv zu gestalten."]} +{"source": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. 
On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most $\\epsilon = 0.1$ can cause more than $35\\%$ test error.\n", "target": ["Wir demonstrieren ein zertifizierbares, trainierbares und skalierbares Verfahren zur Verteidigung gegen adversarial Beispiele.", "Schlägt eine neue Verteidigung gegen Sicherheitsangriffe auf neuronale Netze mit dem attack-Modell vor, das ein Sicherheitszertifikat für den Algorithmus ausgibt.", "Ableitung einer oberen Schranke für adversarial Störungen für neuronale Netze mit einer versteckten Schicht."]} +{"source": "We formulate an information-based optimization problem for supervised classification. For invertible neural networks, the control of these information terms is passed down to the latent features and parameter matrix in the last fully connected layer, given that mutual information is invariant under invertible map. We propose an objective function and prove that it solves the optimization problem. Our framework allows us to learn latent features in a more interpretable form while improving the classification performance. We perform extensive quantitative and qualitative experiments in comparison with the existing state-of-the-art classification models.", "target": ["Wir schlagen einen Regularisierer vor, der die Klassifizierungsleistung neuronaler Netze verbessert.", "Die Autoren schlagen vor, ein Modell unter dem Gesichtspunkt der Maximierung der gegenseitigen Information zwischen den Vorhersagen und den wahren Ausgaben zu trainieren, mit einem Regularisierungsterm, der irrelevante Informationen beim Lernen minimiert.", "Schlägt vor, die Parameter in eine invertierbare Merkmalskarte F und eine lineare Transformation w in der letzten Schicht zu zerlegen, um die gegenseitige Information I(Y, \\hat{T}) zu maximieren und gleichzeitig irrelevante Informationen einzuschränken."]} +{"source": "Powerful generative models, particularly in Natural Language Modelling, are commonly trained by maximizing a variational lower bound on the data log likelihood. These models often suffer from poor use of their latent variable, with ad-hoc annealing factors used to encourage retention of information in the latent variable. We discuss an alternative and general approach to latent variable modelling, based on an objective that encourages a perfect reconstruction by tying a stochastic autoencoder with a variational autoencoder (VAE). This ensures by design that the latent variable captures information about the observations, whilst retaining the ability to generate well. 
Interestingly, although our model is fundamentally different to a VAE, the lower bound attained is identical to the standard VAE bound but with the addition of a simple pre-factor; thus, providing a formal interpretation of the commonly used, ad-hoc pre-factors in training VAEs.", "target": ["In diesem Beitrag wird ein neuartiger generativer Modellierungsrahmen vorgestellt, der den Zusammenbruch latenter Variablen vermeidet und die Verwendung bestimmter Ad-hoc-Faktoren beim Training von variationalen Autoencodern verdeutlicht.", "Die Arbeit schlägt vor, das Problem eines variationalen Autoencoders zu lösen, der die latenten Variablen ignoriert.", "Diese Arbeit schlägt vor, einen stochastischen Autoencoder zum ursprünglichen VAE-Modell hinzuzufügen, um das Problem zu lösen, dass der LSTM Decoder eines Sprachmodells zu stark sein könnte, um die Informationen der latenten Variablen zu ignorieren.", "In diesem Beitrag wird AutoGen vorgestellt, das einen generativen variationalen Autoencoder mit einem auf Autoencoder basierenden High-Fidelity-Rekonstruktionsmodell kombiniert, um die latente Repräsentation besser zu nutzen."]} +{"source": "This paper studies the problem of domain division which aims to segment instances drawn from different probabilistic distributions. This problem exists in many previous recognition tasks, such as Open Set Learning (OSL) and Generalized Zero-Shot Learning (G-ZSL), where the testing instances come from either seen or unseen/novel classes with different probabilistic distributions. Previous works only calibrate the confident prediction of classifiers of seen classes (WSVM Scheirer et al. (2014)) or taking unseen classes as outliers Socher et al. (2013). In contrast, this paper proposes a probabilistic way of directly estimating and fine-tuning the decision boundary between seen and unseen classes. In particular, we propose a domain division algorithm to split the testing instances into known, unknown and uncertain domains, and then conduct recognition tasks in each domain. Two statistical tools, namely, bootstrapping and KolmogorovSmirnov (K-S) Test, for the first time, are introduced to uncover and fine-tune the decision boundary of each domain. Critically, the uncertain domain is newly introduced in our framework to adopt those instances whose domain labels cannot be predicted confidently. Extensive experiments demonstrate that our approach achieved the state-of-the-art performance on OSL and G-ZSL benchmarks.", "target": ["Diese Arbeit untersucht das Problem der Domänenaufteilung durch Segmentierung von Instanzen, die aus verschiedenen probabilistischen Verteilungen stammen. ", "Dieser Beitrag befasst sich mit dem Problem der Neuheitserkennung beim Lernen mit offenen Mengen und beim verallgemeinerten Zero-Shot Lernen und schlägt eine mögliche Lösung vor.", "Ein Ansatz zur Trennung von Bereichen, der auf Bootstrapping basiert, um Ähnlichkeitsschwellenwerte für bekannte Klassen zu ermitteln, gefolgt von einem Kolmogorov-Smirnoff-Test zur Verfeinerung der Bootstrapping-In-Distributionszonen.", "Schlägt vor, einen neuen Bereich, den unsicheren Bereich, einzuführen, um die Unterscheidung zwischen gesehenen und ungesehenen Bereichen beim Open-Set und verallgemeinerten Zero-Shot Lernen besser zu handhaben."]} +{"source": "Stochastic gradient descent (SGD) is widely believed to perform implicit regularization when used to train deep neural networks, but the precise manner in which this occurs has thus far been elusive. 
We prove that SGD minimizes an average potential over the posterior distribution of weights along with an entropic regularization term. This potential is however not the original loss function in general. So SGD does perform variational inference, but for a different loss than the one used to compute the gradients. Even more surprisingly, SGD does not even converge in the classical sense: we show that the most likely trajectories of SGD for deep networks do not behave like Brownian motion around critical points. Instead, they resemble closed loops with deterministic components. We prove that such out-of-equilibrium behavior is a consequence of highly non-isotropic gradient noise in SGD; the covariance matrix of mini-batch gradients for deep networks has a rank as small as 1% of its dimension. We provide extensive empirical validation of these claims, proven in the appendix.", "target": ["SGD führt implizit eine Variationsinferenz durch; das Gradientenrauschen ist in hohem Maße nicht isotrop, so dass SGD nicht einmal zu kritischen Punkten des ursprünglichen Verlusts konvergiert.", "Diese Arbeit bietet eine Variationsanalyse von SGD als Nicht-Gleichgewichtsprozess.", "Dieser Beitrag diskutiert die regulierte Zielfunktion, die durch Standard-SGD im Kontext neuronaler Netze minimiert wird, und bietet eine Perspektive der Variationsinferenz unter Verwendung der Fokker-Planck-Gleichung.", "Entwicklung einer Theorie zur Untersuchung der Auswirkungen von stochastischem Gradientenrauschen für SGD, insbesondere für tiefe neuronale Netzmodelle."]} +{"source": "The current dominant paradigm for imitation learning relies on strong supervision of expert actions to learn both 'what' and 'how' to imitate. We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after seeing just a sequence of images demonstrating the desired task. Our method is 'zero-shot' in the sense that the agent never has access to expert actions during training or for the task demonstration at inference. We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation with a Baxter robot and navigation in previously unseen office environments with a TurtleBot. Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance. 
Videos, models, and more details are available at https://pathak22.github.io/zeroshot-imitation/.", "target": ["Agenten können lernen, ausschließlich visuelle Demonstrationen (ohne Handlungen) zur Testzeit zu imitieren, nachdem sie zur Trainingszeit aus eigener Erfahrung und ohne jegliche Form der Überwachung gelernt haben.", "In diesem Beitrag wird ein Ansatz für Zero-Shot visuelles Lernen durch das Erlernen parametrischer Fähigkeitsfunktionen vorgeschlagen.", "Eine Arbeit über die Imitation einer Aufgabe, die nur während der Inferenz präsentiert wird, wobei das Lernen auf selbstüberwachte Weise erfolgt und der Agent während des Trainings verwandte, aber unterschiedliche Aufgaben erforscht.", "Schlägt eine Methode zur Umgehung des Problems der kostspieligen Expertendemonstration vor, indem die zufällige Erkundung eines Agenten genutzt wird, um verallgemeinerbare Fähigkeiten zu erlernen, die ohne spezielles Vortraining angewendet werden können."]} +{"source": "Distributional Semantics Models(DSM) derive word space from linguistic items\n in context. Meaning is obtained by defining a distance measure between vectors\n corresponding to lexical entities. Such vectors present several problems. This\n work concentrates on quality of word embeddings, improvement of word embedding\n vectors, applicability of a novel similarity metric used ‘on top’ of the\n word embeddings. In this paper we provide comparison between two methods\n for post process improvements to the baseline DSM vectors. The counter-fitting\n method which enforces antonymy and synonymy constraints into the Paragram\n vector space representations recently showed improvement in the vectors’ capability\n for judging semantic similarity. The second method is our novel RESM\n method applied to GloVe baseline vectors. By applying the hubness reduction\n method, implementing relational knowledge into the model by retrofitting synonyms\n and providing a new ranking similarity definition RESM that gives maximum\n weight to the top vector component values we equal the results for the ESL\n and TOEFL sets in comparison with our calculations using the Paragram and Paragram\n + Counter-fitting methods. For SIMLEX-999 gold standard since we cannot\n use the RESM the results using GloVe and PPDB are significantly worse compared\n to Paragram. Apparently, counter-fitting corrects hubness. The Paragram\n or our cosine retrofitting method are state-of-the-art results for the SIMLEX-999\n gold standard. They are 0.2 better for SIMLEX-999 than word2vec with sense\n de-conflation (that was announced to be state-of the-art method for less reliable\n gold standards). Apparently relational knowledge and counter-fitting is more important\n for judging semantic similarity than sense determination for words. It is to\n be mentioned, though that Paragram hyperparameters are fitted to SIMLEX-999\n results. 
The lesson is that many corrections to word embeddings are necessary\n and methods with more parameters and hyperparameters perform better.\n", "target": ["Die Arbeit beschreibt ein Verfahren zur Verbesserung von Wortvektorraummodellen mit einer Bewertung von Paragram- und GloVe-Modellen für Ähnlichkeitsbenchmarks.", "In diesem Beitrag wird ein neuer Algorithmus vorgeschlagen, der GloVe-Wortvektoren anpasst und dann eine nicht-euklidische Ähnlichkeitsfunktion zwischen ihnen verwendet.", "Die Autoren stellen Beobachtungen zu den Schwächen der bestehenden Vektorraummodelle vor und nennen einen 6-stufigen Ansatz zur Verfeinerung bestehender Wortvektoren."]} +{"source": "Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For applications on the server with large scale concurrent requests, the latency during inference can also be very critical for costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {-1,+1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by binary search tree, alternating minimization is then applied. We test the quantization for two well-known RNNs, i.e., long short term memory (LSTM) and gated recurrent unit (GRU), on the language models. Compared with the full-precision counterpart, by 2-bit quantization we can achieve ~16x memory saving and ~6x real inference acceleration on CPUs, with only a reasonable loss in the accuracy. By 3-bit quantization, we can achieve almost no loss in the accuracy or even surpass the original model, with ~10.5x memory saving and ~3x real inference acceleration. Both results beat the existing quantization works with large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.", "target": ["Wir schlagen eine neue Quantisierungsmethode vor und wenden sie zur Quantisierung von RNNs sowohl für die Kompression als auch für die Beschleunigung an.", "In dieser Arbeit wird eine Multibit Quantisierungsmethode für rekurrente neuronale Netze vorgeschlagen.", "Eine Technik zur Quantisierung von Gewichtsmatrizen neuronaler Netze und ein alternierendes Optimierungsverfahren zur Schätzung der Menge von k binären Vektoren und Koeffizienten, die den ursprünglichen Vektor am besten repräsentieren."]} +{"source": "The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. 
We demonstrate MERA-layers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers.\n", "target": ["Wir ersetzen die vollständig verknüpften Schichten eines neuronalen Netzes durch den Multiskalen-Verschränkungs-Renormalisierungssatz, eine Art Quantenoperation, die Korrelationen über große Entfernungen beschreibt. ", "In dem Beitrag schlagen die Autoren vor, die MERA-Tensorisierungstechnik zur Komprimierung neuronaler Netze zu verwenden.", "Eine neue Parametrisierung linearer Zuordnungen für den Einsatz in neuronalen Netzen, die eine hierarchische Faktorisierung der linearen Zuordnung verwendet, die die Anzahl der Parameter reduziert und gleichzeitig die Modellierung relativ komplexer Wechselwirkungen ermöglicht.", "Studien zur Komprimierung von Feed-Forward-Schichten unter Verwendung von Tensor-Zerlegungen niedrigen Ranges und Erforschung einer baumartigen Zerlegung."]} +{"source": "Deep learning models have outperformed traditional methods in many fields such\n as natural language processing and computer vision. However, despite their\n tremendous success, the methods of designing optimal Convolutional Neural Networks\n (CNNs) are still based on heuristics or grid search. The resulting networks\n obtained using these techniques are often overparametrized with huge computational\n and memory requirements. This paper focuses on a structured, explainable\n approach towards optimal model design that maximizes accuracy while keeping\n computational costs tractable. We propose a single-shot analysis of a trained CNN\n that uses Principal Component Analysis (PCA) to determine the number of filters\n that are doing significant transformations per layer, without the need for retraining.\n It can be interpreted as identifying the dimensionality of the hypothesis space\n under consideration. The proposed technique also helps estimate an optimal number\n of layers by looking at the expansion of dimensions as the model gets deeper.\n This analysis can be used to design an optimal structure of a given network on\n a dataset, or help to adapt a predesigned network on a new dataset. We demonstrate\n these techniques by optimizing VGG and AlexNet networks on CIFAR-10,\n CIFAR-100 and ImageNet datasets.", "target": ["Wir präsentieren eine Single-Shot Analyse eines trainierten neuronalen Netzes, um Redundanzen zu entfernen und die optimale Netzstruktur zu ermitteln.", "In dieser Arbeit wird eine Reihe von Heuristiken zur Identifizierung einer guten Architektur für neuronale Netze vorgeschlagen, die auf der PCA der Aktivierungen der Einheiten über den Datensatz basieren.", "In diesem Beitrag wird ein Rahmen für die Optimierung von Architekturen neuronaler Netze durch die Identifizierung redundanter Filter in verschiedenen Schichten vorgestellt."]} +{"source": "Recent work has introduced attacks that extract the architecture information of deep neural networks (DNN), as this knowledge enhances an adversary’s capability to conduct attacks on black-box networks. This paper presents the first in-depth security analysis of DNN fingerprinting attacks that exploit cache side-channels. First, we define the threat model for these attacks: our adversary does not need the ability to query the victim model; instead, she runs a co-located process on the host machine victim ’s deep learning (DL) system is running and passively monitors the accesses of the target functions in the shared framework. 
Second, we introduce DeepRecon, an attack that reconstructs the architecture of the victim network by using the internal information extracted via Flush+Reload, a cache side-channel technique. Once the attacker observes function invocations that map directly to architecture attributes of the victim network, the attacker can reconstruct the victim’s entire network architecture. In our evaluation, we demonstrate that an attacker can accurately reconstruct two complex networks (VGG19 and ResNet50) having only observed one forward propagation. Based on the extracted architecture attributes, we also demonstrate that an attacker can build a meta-model that accurately fingerprints the architecture and family of the pre-trained model in a transfer learning setting. From this meta-model, we evaluate the importance of the observed attributes in the fingerprinting process. Third, we propose and evaluate new framework-level defense techniques that obfuscate our attacker’s observations. Our empirical security analysis represents a step toward understanding the DNNs’ vulnerability to cache side-channel attacks.", "target": ["Wir führen die erste eingehende Sicherheitsanalyse von DNN-Fingerprinting Angriffen durch, die Cache Seitenkanäle ausnutzen, was einen Schritt zum Verständnis der Anfälligkeit von DNNs für Seitenkanalangriffe darstellt.", "In diesem Beitrag wird das Problem des Fingerprints von neuronalen Netzwerkarchitekturen unter Verwendung von Cache-Seitenkanälen betrachtet und es werden Verteidigungsmaßnahmen gegen Sicherheit durch Unklarheit diskutiert.", "In diesem Beitrag werden Cache-Seitenkanalangriffe durchgeführt, um Attribute eines Opfermodells zu extrahieren und auf dessen Architektur zu schließen. Außerdem wird gezeigt, dass sie eine nahezu perfekte Klassifizierungsgenauigkeit erreichen können."]} +{"source": "Learning with a primary objective, such as softmax cross entropy for classification and sequence generation, has been the norm for training deep neural networks for years. Although being a widely-adopted approach, using cross entropy as the primary objective exploits mostly the information from the ground-truth class for maximizing data likelihood, and largely ignores information from the complement (incorrect) classes. We argue that, in addition to the primary objective, training also using a complement objective that leverages information from the complement classes can be effective in improving model performance. This motivates us to study a new training paradigm that maximizes the likelihood of the ground-truth class while neutralizing the probabilities of the complement classes. We conduct extensive experiments on multiple tasks ranging from computer vision to natural language understanding. The experimental results confirm that, compared to the conventional training with just one primary objective, training also with the complement objective further improves the performance of the state-of-the-art models across all tasks. 
In addition to the accuracy improvement, we also show that models trained with both primary and complement objectives are more robust to single-step adversarial attacks.\n", "target": ["Wir schlagen Complement Objective Training (COT) vor, ein neues Trainingsparadigma, das sowohl die primären als auch die komplementären Ziele optimiert, um die Parameter neuronaler Netze effektiv zu lernen.", "Es wird erwogen, das Ziel der Cross-Entropy durch eine Maximierung des \"Komplement\"-Ziels zu ergänzen, das darauf abzielt, die vorhergesagten Wahrscheinlichkeiten von Klassen, die nicht der Grundwahrheit entsprechen, zu neutralisieren.", "Die Autoren schlagen ein sekundäres Ziel für die Softmax-Minimierung vor, das auf der Auswertung der von den falschen Klassen gesammelten Informationen beruht und zu einem neuen Trainingsansatz führt.", "Befasst sich mit dem Training neuronaler Netze für Klassifizierungs- oder Sequenzerstellungsaufgaben unter Verwendung von Cross-Entropie-Verlusten."]} +{"source": "We present a new method for uncertainty estimation and out-of-distribution detection in neural networks with softmax output. We extend softmax layer with an additional constant input. The corresponding additional output is able to represent the uncertainty of the network. The proposed method requires neither additional parameters nor multiple forward passes nor input preprocessing nor out-of-distribution datasets. We show that our method performs comparably to more computationally expensive methods and outperforms baselines on our experiments from image recognition and sentiment analysis domains.", "target": ["Unsicherheitsabschätzung in einem einzigen Vorwärtsdurchlauf ohne zusätzliche lernbare Parameter.", "Eine neue Methode zur Berechnung von Output-Unsicherheitsschätzungen in DNNs für Klassifizierungsprobleme, die mit den modernsten Methoden zur Unsicherheitsschätzung übereinstimmt und diese bei Aufgaben zur Erkennung von Abweichungen von der Verteilung übertrifft.", "Die Autoren stellen den gehemmten Softmax vor, eine Modifikation des Softmax durch Hinzufügen einer konstanten Aktivierung, die ein Maß für die Unsicherheit darstellt. "]} +{"source": "When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to our best knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. 
Our results give promising directions for privacy-preserving representation learning, and the return of data control to users.\n\n", "target": ["Wir haben ein funktionsreiches System für Deep Learning mit verschlüsselten Eingaben entwickelt, das verschlüsselte Ausgaben erzeugt und die Privatsphäre wahrt.", "Ein Framework für private Deep Learning Modellinferenz unter Verwendung von FHE-Schemata, die schnelles Bootstrapping unterstützen und somit die Rechenzeit reduzieren können.", "In dem Beitrag wird eine Möglichkeit vorgestellt, ein neuronales Netz mit Hilfe homomorpher Verschlüsselung sicher zu bewerten."]} +{"source": "In this paper, we introduce a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant. Interactive theorem provers such as Coq enable users to construct machine-checkable proofs in a step-by-step manner. Hence, they provide an opportunity to explore theorem proving with human supervision. We use GamePad to synthesize proofs for a simple algebraic rewrite problem and train baseline models for a formalization of the Feit-Thompson theorem. We address position evaluation (i.e., predict the number of proof steps left) and tactic prediction (i.e., predict the next proof step) tasks, which arise naturally in tactic-based theorem proving.", "target": ["Wir stellen ein System namens GamePad vor, um die Anwendung von Methoden des maschinellen Lernens auf das Theorembeweisen im Coq-Beweisassistenten zu untersuchen.", "Dieser Artikel beschreibt ein System zur Anwendung von maschinellem Lernen auf interaktive Theorembeweise, konzentriert sich auf die Aufgaben der Taktikvorhersage und der Positionsbewertung und zeigt, dass ein neuronales Modell ein SVM bei beiden Aufgaben übertrifft.", "Schlägt vor, dass Techniken des maschinellen Lernens bei der Erstellung von Beweisen im Theorembeweiser Coq eingesetzt werden."]} +{"source": "Deep neural networks are usually huge, which significantly limits the deployment on low-end devices. In recent years, many\n weight-quantized models have been proposed. They have small storage and fast inference, but training can still be time-consuming. This can be improved with distributed learning. To reduce the high communication cost due to worker-server synchronization, recently gradient quantization has also been proposed to train deep networks with full-precision weights. \n In this paper, we theoretically study how the combination of both weight and gradient quantization affects convergence.\n We show that (i) weight-quantized models converge to an error related to the weight quantization resolution and weight dimension; (ii) quantizing gradients slows convergence by a factor related to the gradient quantization resolution and dimension; and (iii) clipping the gradient before quantization renders this factor dimension-free, thus allowing the use of fewer bits for gradient quantization. 
Empirical experiments confirm the theoretical convergence results, and demonstrate that quantized networks can speed up training and have comparable performance as full-precision networks.", "target": ["In dieser Arbeit haben wir das effiziente Training von verlustbewussten gewichtsquantisierten Netzen mit quantisiertem Gradienten in einer verteilten Umgebung sowohl theoretisch als auch empirisch untersucht.", "Diese Arbeit untersucht die Konvergenzeigenschaften der verlustbewussten Gewichtsquantisierung mit verschiedenen Gradientenpräzisionen in einer verteilten Umgebung und bietet eine Konvergenzanalyse für die Gewichtsquantisierung mit voll-präzisen, quantisierten, und quantisierten abgeschnittenen Gradienten.", "Die Autoren schlagen eine Analyse der Auswirkungen der gleichzeitigen Quantisierung der Gewichte und Gradienten beim Training eines parametrisierten Modells in einer vollständig synchronisierten verteilten Umgebung vor."]} +{"source": "Sequential learning, also called lifelong learning, studies the problem of learning tasks in a sequence with access restricted to only the data of the current task. In this paper we look at a scenario with fixed model capacity, and postulate that the learning process should not be selfish, i.e. it should account for future tasks to be added and thus leave enough capacity for them. To achieve Selfless Sequential Learning we study different regularization strategies and activation functions. We find that\n imposing sparsity at the level of the representation (i.e. neuron activations) is more beneficial for sequential learning than encouraging parameter sparsity. In particular, we propose a novel regularizer, that encourages representation sparsity by means of neural inhibition. It results in few active neurons which in turn leaves more free neurons to be utilized by upcoming tasks. As neural inhibition over an entire layer can be too drastic, especially for complex tasks requiring strong representations,\n our regularizer only inhibits other neurons in a local neighbourhood, inspired by lateral inhibition processes in the brain. We combine our novel regularizer with state-of-the-art lifelong learning methods that penalize changes to important previously learned parts of the network. We show that our new regularizer leads to increased sparsity which translates in consistent performance improvement on diverse datasets.", "target": ["Eine Regularisierungsstrategie zur Verbesserung der Leistung des sequentiellen Lernens.", "Ein neuartiger, auf Regularisierung basierender Ansatz für das sequentielle Lernproblem unter Verwendung eines Modells fester Größe, das zusätzliche Terme zum Verlust hinzufügt, das die Seltenheit der Darstellung fördert und katastrophales Vergessen bekämpft.", "Diese Arbeit befasst sich mit dem Problem des katastrophalen Vergessens beim lebenslangen Lernen, indem es regulierte Lernstrategien vorschlägt."]} +{"source": "A Synaptic Neural Network (SynaNN) consists of synapses and neurons. Inspired by the synapse research of neuroscience, we built a synapse model with a nonlinear synapse function of excitatory and inhibitory channel probabilities. Introduced the concept of surprisal space and constructed a commutative diagram, we proved that the inhibitory probability function -log(1-exp(-x)) in surprisal space is the topologically conjugate function of the inhibitory complementary probability 1-x in probability space. 
Furthermore, we found that the derivative of the synapse over the parameter in the surprisal space is equal to the negative Bose-Einstein distribution. In addition, we constructed a fully connected synapse graph (tensor) as a synapse block of a synaptic neural network. Moreover, we proved the gradient formula of a cross-entropy loss function over parameters, so synapse learning can work with the gradient descent and backpropagation algorithms. In the proof-of-concept experiment, we performed MNIST training and testing on an MLP model with synapse networks as hidden layers.", "target": ["Ein synaptisches neuronales Netzwerk mit Synapsengraphen und Lernen, das die Eigenschaft der topologischen Konjugation und der Bose-Einstein-Verteilung im Surprisal-Raum aufweist. ", "Die Autoren schlagen ein hybrides neuronales Netzwerk vor, das aus einem Synapsengraphen besteht, der in ein standardmäßiges neuronales Netzwerk eingebettet werden kann.", "stellt ein biologisch inspiriertes neuronales Netzmodell vor, das auf den erregenden und hemmenden Ionenkanälen in den Membranen echter Zellen basiert."]} +{"source": "Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of the graph embedding learning algorithms, and propose to extend this by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph-based multi-label classification tasks.", "target": ["Verallgemeinerte Modelle zur Grapheneinbettung.", "Ein verallgemeinerter Ansatz zur Einbettung von Wissensgraphen, der die Einbettungen auf der Grundlage von drei verschiedenen gleichzeitigen Zielen erlernt und gleich gut oder sogar besser abschneidet als die bestehenden State-of-the-Art-Ansätze.", "Bewältigt die Aufgabe, Einbettungen von multirelationalen Graphen mit Hilfe eines neuronalen Netzes zu lernen.", "Schlägt eine neue Methode, GEN, zur Berechnung von Einbettungen von Multibeziehungsgraphen vor, insbesondere, dass so genannte E-Zellen und R-Zellen Anfragen der Form (h,r,?),(?r,t) und (h,?,t) beantworten können."]} +{"source": "We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. 
We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models while achieving the same performance. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee.", "target": ["Minimax Curriculum Learning ist eine maschinelle Lernmethode, bei der die wünschenswerte Härte erhöht und die Vielfalt planmäßig reduziert wird.", "Ein Curriculum Lernansatz, der eine submodulare Mengenfunktion verwendet, die die Vielfalt der während des Trainings ausgewählten Beispiele erfasst. ", "In dem Beitrag wird das MiniMax Curriculum Lernen als ein Ansatz für das adaptive Training von Modellen durch die Bereitstellung verschiedener Teilmengen von Daten vorgestellt."]} +{"source": "Progress in probabilistic generative models has accelerated, developing richer models with neural architectures, implicit densities, and with scalable algorithms for their Bayesian inference. However, there has been limited progress in models that capture causal relationships, for example, how individual genetic factors cause major human diseases. In this work, we focus on two challenges in particular: How do we build richer causal models, which can capture highly nonlinear relationships and interactions between multiple causes? How do we adjust for latent confounders, which are variables influencing both cause and effect and which prevent learning of causal relationships? To address these challenges, we synthesize ideas from causality and modern probabilistic modeling. For the first, we describe implicit causal models, a class of causal models that leverages neural architectures with an implicit density. For the second, we describe an implicit causal model that adjusts for confounders by sharing strength across examples. In experiments, we scale Bayesian inference on up to a billion genetic measurements. We achieve state of the art accuracy for identifying causal factors: we significantly outperform the second best result by an absolute difference of 15-45.3%.", "target": ["Implizite Modelle, angewandt auf Kausalität und Genetik.", "Die Autoren schlagen vor, das implizite Modell zu verwenden, um das Genom-weite Assoziationsproblem anzugehen.", "In diesem Beitrag werden Lösungen für die Probleme bei genomweiten Assoziationsstudien vorgeschlagen, die durch die Populationsstruktur und das potenzielle Vorhandensein nichtlinearer Wechselwirkungen zwischen verschiedenen Teilen des Genoms entstehen, und es werden Brücken zwischen statistischer Genetik und ML geschlagen.", "Vorstellung eines nichtlinearen generativen Modells für GWAS, das die Populationsstruktur modelliert, wobei Nichtlinearitäten mit Hilfe von neuronalen Netzen als nichtlineare Funktionsapproximatoren modelliert werden und die Inferenz mit Hilfe von likelihood-freier Variationsinferenz durchgeführt wird."]} +{"source": "\nFew-shot learning trains image classifiers over datasets with few examples per category. \n It poses challenges for the optimization algorithms, which typically require many examples to fine-tune the model parameters for new categories. 
\n Distance-learning-based approaches avoid the optimization issue by embedding the images into a metric space and applying the nearest neighbor classifier for new categories. In this paper, we propose to exploit the object-level relation to learn the image relation feature, which is converted into a distance directly.\n For a new category, even though its images are not seen by the model, some objects may appear in the training images. Hence, object-level relation is useful for inferring the relation of images from unseen categories. Consequently, our model generalizes well for new categories without fine-tuning.\n Experimental results on benchmark datasets show that our approach outperforms state-of-the-art methods.", "target": ["Few-Shot Lernen durch Ausnutzung der Beziehung auf Objektebene, um die Beziehung auf Bildebene zu lernen (Ähnlichkeit).", "Diese Arbeit befasst sich mit dem Problem des Few-Shot Lernens, indem es einen auf Einbettung basierenden Ansatz vorschlägt, der lernt, Merkmale auf Objektebene zwischen Support- und Query-Set-Beispielen zu vergleichen.", "Schlägt eine Few-Shot Lernmethode vor, die die Beziehung zwischen verschiedenen Bildern auf Objektebene auf der Grundlage der Suche nach nahen Nachbarn ausnutzt und Merkmalskarten von zwei Eingabebildern zu einer Merkmalszuordnung zusammenfügt."]} +{"source": "Word embeddings are widely used in machine learning based natural language processing systems. It is common to use pre-trained word embeddings which provide benefits such as reduced training time and improved overall performance. There has been a recent interest in applying natural language processing techniques to programming languages. However, none of this recent work uses pre-trained embeddings on code tokens. Using extreme summarization as the downstream task, we show that using pre-trained embeddings on code tokens provides the same benefits as it does to natural languages, achieving: over 1.9x speedup, 5\\% improvement in test loss, 4\\% improvement in F1 scores, and resistance to over-fitting. We also show that the choice of language used for the embeddings does not have to match that of the task to achieve these benefits and that even embeddings pre-trained on human languages provide these benefits to programming languages. ", "target": ["Forscher, die Techniken zur Verarbeitung natürlicher Sprache auf Quellcode anwenden, verwenden keine Form von vortrainierten Einbettungen, wir zeigen, dass sie dies tun sollten.", "In diesem Beitrag wird untersucht, ob das Pretraining von Worteinbettungen für Programmiersprachencode mit Hilfe von NLP-ähnlichen Sprachmodellen einen Einfluss auf die Aufgabe der Zusammenfassung von extremem Code hat.", "Diese Arbeit zeigt, wie das Pretraining von Wortvektoren anhand von Code-Korpussen zu Repräsentationen führt, die besser geeignet sind als zufällig initialisierte und trainierte Repräsentationen für die Vorhersage von Funktions-/Methodennamen."]} +{"source": "Recently, Approximate Policy Iteration (API) algorithms have achieved super-human proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search), and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik’s Cube has only a single solved state, and episodes are not guaranteed to terminate. 
This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik’s Cube and the 15-puzzle without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.", "target": ["Wir lösen den Rubik's Cube mit reinem Reinforcement Learning.", "Lösung zum Lösen des Rubik Cube durch Reinforcement Learning (RL) mit Monte Carlo Baumsuche (MCTS) durch autodidaktische Iteration. ", "In dieser Arbeit wird der Rubik's Cube mit Hilfe einer annähernden Iterationsmethode, der so genannten autodidaktischen Iteration, gelöst, wobei das Problem der spärlichen Belohnungen durch die Schaffung eines eigenen Belohnungssystems überwunden wird.", "Einführung eines tiefen RL-Algorithmus zur Lösung des Rubik-Würfels, der den riesigen Zustandsraum und die sehr spärliche Belohnung des Rubik-Würfels bewältigt."]} +{"source": "Answering compositional questions requiring multi-step reasoning is challenging for current models. We introduce an end-to-end differentiable model for interpreting questions, which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a knowledge graph, together with a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituents, culminating in a grounding for the complete sentence which is an answer to the question. For example, to interpret ‘not green’, the model will represent ‘green’ as a set of entities, ‘not’ as a trainable ungrounded vector, and then use this vector to parametrize a composition function to perform a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent. We show the model can learn to represent a variety of challenging semantic operators, such as quantifiers, negation, disjunctions and composed relations on a synthetic question answering task. The model also generalizes well to longer sentences than seen in its training data, in contrast to LSTM and RelNet baselines. We will release our code.", "target": ["Wir beschreiben ein differenzierbares Ende-zu-Ende Modell für QA, das lernt, Textabschnitte in der Frage als Denotationen im Wissensgraphen darzustellen, indem es sowohl neuronale Module für die Komposition als auch die syntaktische Struktur des Satzes lernt.", "In diesem Beitrag wird ein Modell für die Beantwortung visueller Fragen vorgestellt, das sowohl Parameter als auch Strukturvorherseher für ein modulares neuronales Netz erlernen kann, ohne überwachte Strukturen oder Unterstützung durch einen syntaktischen Parser.", "Schlägt vor, ein Modell zur Beantwortung von Fragen nur aus Antworten und einer KB zu trainieren, indem latente Bäume gelernt werden, die die Syntax erfassen und die Semantik von Wörtern lernen."]} +{"source": "Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. 
We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domain-specific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning.", "target": ["Wir stellen eine neuartige Compiler-Infrastruktur vor, die die Unzulänglichkeiten bestehender Deep-Learning-Frameworks behebt.", "Vorschlag zur Umstellung von Ad-hoc Code Generierung in Deep Learning Engines auf bewährte Verfahren für Compiler und Sprachen.", "In diesem Beitrag wird ein Compiler-Framework vorgestellt, das die Definition von domänenspezifischen Sprachen für Deep Learning Systeme ermöglicht und Kompilierungsphasen definiert, die Standardoptimierungen und spezielle Optimierungen für neuronale Netze nutzen können.", "In diesem Beitrag wird ein DLVM vorgestellt, das die Vorteile eines Tensor-Compilers nutzt."]} +{"source": "In this work, we focus on the problem of grounding language by training an agent\n to follow a set of natural language instructions and navigate to a target object\n in a 2D grid environment. The agent receives visual information through raw\n pixels and a natural language instruction telling what task needs to be achieved.\n Other than these two sources of information, our model does not have any prior\n information about either the visual or the textual modality and is end-to-end trainable.\n We develop an attention mechanism for multi-modal fusion of visual and textual\n modalities that allows the agent to learn to complete the navigation tasks and also\n achieve language grounding. Our experimental results show that our attention\n mechanism outperforms the existing multi-modal fusion mechanisms proposed in\n order to solve the above-mentioned navigation task. We demonstrate through the\n visualization of attention weights that our model learns to correlate attributes of\n the object referred to in the instruction with visual representations and also show\n that the learnt textual representations are semantically meaningful as they follow\n vector arithmetic and are also consistent enough to induce translation between instructions\n in different natural languages. 
We also show that our model generalizes\n effectively to unseen scenarios and exhibits zero-shot generalization capabilities.\n In order to simulate the above-described challenges, we introduce a new 2D environment\n for an agent to jointly learn visual and textual modalities.", "target": ["Aufmerksamkeitsbasierte Architektur für das Erlernen von Sprache durch Reinforcement Learning in einer neuen anpassbaren 2D-Gitterumgebung.", "Der Beitrag befasst sich mit dem Problem der Navigation anhand einer Anweisung und schlägt einen Ansatz vor, der textliche und visuelle Informationen über einen Aufmerksamkeitsmechanismus kombiniert.", "In diesem Beitrag wird das Problem der Befolgung von Anweisungen in natürlicher Sprache bei einer Ich-Perspektive einer a priori unbekannten Umgebung betrachtet und eine Methode der neuronalen Architektur vorgeschlagen.", "Untersucht das Problem der Navigation zu einem Zielobjekt in einer 2D-Gitterumgebung, indem man einer gegebenen natürlichsprachlichen Beschreibung folgt und visuelle Informationen als rohe Pixel erhält."]} +{"source": " Current end-to-end machine reading and question answering (Q\&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q\&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. \n On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.", "target": ["Eine einfache Architektur, die aus Convolutions und Aufmerksamkeit besteht, erzielt Ergebnisse, die mit den am besten dokumentierten rekurrenten Modellen vergleichbar sind.", "Ein schnelles, leistungsstarkes Paraphrasierungsverfahren auf der Grundlage von Datenerweiterung und ein nicht-rekurrentes Leseverständnismodell, das nur Convolutions und Aufmerksamkeit verwendet.", "In dieser Arbeit wird vorgeschlagen, CNNs und Selbstaufmerksamkeitsmodule anstelle von LSTMs einzusetzen und das RC-Modelltraining mit Passagenparaphrasen zu erweitern, die von einem neuronalen Paraphrasierungsmodell generiert werden, um die RC-Leistung zu verbessern.", "In diesem Beitrag wird ein Modell für das Leseverständnis vorgestellt, das Convolutions und Aufmerksamkeit verwendet, und es wird vorgeschlagen, zusätzliche Trainingsdaten durch Paraphrasierung auf der Grundlage von standardmäßiger neuronaler Maschinenübersetzung zu ergänzen."]} +{"source": "Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. 
A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective.\n\n In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression.", "target": ["Wir stellen Spherical CNNs vor, ein Convolutional Network für sphärische Signale, und wenden es auf die Erkennung von 3D-Modellen und die Regression molekularer Energie an.", "Die Arbeit schlägt einen Rahmen für die Konstruktion von sphärischen Convolutional Networks vor, der auf einer neuartigen Synthese mehrerer bestehender Konzepte beruht.", "In diesem Beitrag geht es darum, wie Convolutional Neural Networks so erweitert werden können, dass sie über eine eingebaute sphärische Invarianz verfügen, und es werden Werkzeuge aus der nicht-abelschen harmonischen Analyse verwendet, um dieses Ziel zu erreichen.", "Die Autoren entwickeln ein neuartiges Schema zur Darstellung sphärischer Daten von Grund auf."]} +{"source": "We propose a novel method that makes use of deep neural networks and gradient descent to perform automated design on complex real-world engineering tasks. Our approach works by training a neural network to mimic the fitness function of a design optimization task and then, using the differentiable nature of the neural network, performing gradient descent to maximize the fitness. We demonstrate this method's effectiveness by designing an optimized heat sink and both 2D and 3D airfoils that maximize the lift-drag ratio under steady-state flow conditions. We highlight that our method has two distinct benefits over other automated design approaches. First, evaluating the neural network's prediction of fitness can be orders of magnitude faster than simulating the system of interest. Second, using gradient descent allows the design space to be searched much more efficiently than other gradient-free methods. These two strengths work together to overcome some of the current shortcomings of automated design.", "target": ["Ein Verfahren für die automatisierte Planung von realen Objekten wie Kühlkörpern und Tragflächenprofilen, das neuronale Netze und Gradientenabstieg einsetzt.", "Neuronales Netz (Parametrisierung und Vorhersage) und Gradientenabstieg (Backpropagation) für die automatische Planung von technischen Aufgaben. ", "In diesem Beitrag wird die Verwendung eines tiefen Netzwerks vorgestellt, um das Verhalten eines komplexen physikalischen Systems zu approximieren und dann optimale Geräte zu entwerfen, indem dieses Netzwerk im Hinblick auf seine Eingaben optimiert wird."]} +{"source": "Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. 
We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.", "target": ["Wir schlagen eine duale Version der logistischen adversen Distanz für den Merkmalsabgleich vor und zeigen, dass sie stabilere Gradientenschritt-Iterationen liefert als das Min-Max-Ziel.", "Die Arbeit befasst sich mit der Fixierung von GANs auf der Berechnungsebene.", "Diese Arbeit untersucht eine duale Formulierung eines adversarial Verlusts, die auf einer Obergrenze des logistischen Verlusts basiert, und verwandelt das Standard-Min-Max-Problem des adversarial Trainings in ein einziges Minimierungsproblem.", "Schlägt vor, das GAN-Sattelpunktziel (für einen logistischen Regressionsdiskriminator) als Minimierungsproblem durch Dualisierung des Maximum-Likelihood-Ziels für die regularisierte logistische Regression neu zu formulieren."]} +{"source": " There are many applications scenarios for which the computational\n performance and memory footprint of the prediction phase of Deep\n Neural Networks (DNNs) need to be optimized. Binary Deep Neural\n Networks (BDNNs) have been shown to be an effective way of achieving\n this objective. In this paper, we show how Convolutional Neural\n Networks (CNNs) can be implemented using binary\n representations. Espresso is a compact, yet powerful\n library written in C/CUDA that features all the functionalities\n required for the forward propagation of CNNs, in a binary file less\n than 400KB, without any external dependencies. Although it is mainly\n designed to take advantage of massive GPU parallelism, Espresso also\n provides an equivalent CPU implementation for CNNs. Espresso\n provides special convolutional and dense layers for BCNNs,\n leveraging bit-packing and bit-wise computations\n for efficient execution. These techniques provide a speed-up of\n matrix-multiplication routines, and at the same time, reduce memory\n usage when storing parameters and activations. We experimentally\n show that Espresso is significantly faster than existing\n implementations of optimized binary neural networks (~ 2\n orders of magnitude). Espresso is released under the Apache 2.0\n license and is available at http://github.com/organization/project.", "target": ["Implementierung von binären neuronalen Netzen auf dem neuesten Stand der Rechenleistung.", "Die Arbeit stellt eine in C/CUDA geschriebene Bibliothek vor, die alle für die Forward Propagation von BCNNs erforderlichen Funktionen enthält.", "Dieser Beitrag baut auf Binary-NET auf und erweitert es auf CNN-Architekturen, bietet Optimierungen, die die Geschwindigkeit des Vorwärtspasses verbessern, und stellt optimierten Code für Binary CNN bereit."]} +{"source": "Optimal selection of a subset of items from a given set is a hard problem that requires combinatorial optimization. 
In this paper, we propose a subset selection algorithm that is trainable with gradient based methods yet achieves near optimal performance via submodular optimization. We focus on the task of identifying a relevant set of sentences for claim verification in the context of the FEVER task. Conventional methods for this task look at sentences on their individual merit and thus do not optimize the informativeness of sentences as a set. We show that our proposed method which builds on the idea of unfolding a greedy algorithm into a computational graph allows both interpretability and gradient based training. The proposed differentiable greedy network (DGN) outperforms discrete optimization algorithms as well as other baseline methods in terms of precision and recall.", "target": ["Wir schlagen einen Algorithmus zur Auswahl von Teilmengen vor, der mit gradientenbasierten Methoden trainiert werden kann und durch submodulare Optimierung eine nahezu optimale Leistung erzielt.", "Vorschlagen eines auf einem neuronalen Netz basierenden Modells, das eine submodulare Funktion integriert, indem es eine gradientenbasierte Optimierungstechnik mit einem submodularen Rahmen namens 'Differentiable Greedy Network' (DGN) kombiniert.", "Vorschlagen eines neuronalen Netz, das eine Teilmenge von Elementen auswählt (z. B. die Auswahl von k Sätzen, die sich am meisten auf eine Behauptung beziehen, aus einer Menge von abgerufenen Dokumenten)."]} +{"source": "The joint optimization of representation learning and clustering in the embedding space has experienced a breakthrough in recent years. In spite of the advance, clustering with representation learning has been limited to flat-level categories, which oftentimes involves cohesive clustering with a focus on instance relations. To overcome the limitations of flat clustering, we introduce hierarchically clustered representation learning (HCRL), which simultaneously optimizes representation learning and hierarchical clustering in the embedding space. Specifically, we place a nonparametric Bayesian prior on embeddings to handle dynamic mixture hierarchies under the variational autoencoder framework, and to adopt the generative process of a hierarchical-versioned Gaussian mixture model. Compared with a few prior works focusing on unifying representation learning and hierarchical clustering, HCRL is the first model to consider a generation of deep embeddings from every component of the hierarchy, not just leaf components. This generation process enables more meaningful separations and mergers of clusters via branches in a hierarchy. In addition to obtaining hierarchically clustered embeddings, we can reconstruct data by the various abstraction levels, infer the intrinsic hierarchical structure, and learn the level-proportion features. 
We conducted evaluations with image and text domains, and our quantitative analyses showed competitive likelihoods and the best accuracies compared with the baselines.", "target": ["Wir führen das hierarchisch geclusterte Repräsentationslernen (HCRL) ein, das gleichzeitig das Repräsentationslernen und das hierarchische Clustering im Einbettungsraum optimiert.", "In dem Beitrag wird vorgeschlagen, das verschachtelte CRP als Clustermodell und nicht als Themenmodell zu verwenden.", "stellt eine neuartige hierarchische Clustering-Methode über einem Einbettungsraum vor, bei der sowohl der Einbettungsraum als auch das hierarchische Clustering gleichzeitig erlernt werden."]} +{"source": "We introduce a novel geometric perspective and unsupervised model augmentation framework for transforming traditional deep (convolutional) neural networks into adversarially robust classifiers. Class-conditional probability densities based on Bayesian nonparametric mixtures of factor analyzers (BNP-MFA) over the input space are used to design soft decision labels for feature-to-label isometry. Class-conditional distributions over features are also learned using BNP-MFA to develop plug-in maximum a posteriori (MAP) classifiers to replace the traditional multinomial logistic softmax classification layers. This novel unsupervised augmented framework, which we call geometrically robust networks (GRN), is applied to CIFAR-10, CIFAR-100, and to Radio-ML (a time series dataset for radio modulation recognition). We demonstrate the robustness of GRN models to adversarial attacks from the fast gradient sign method, Carlini-Wagner, and projected gradient descent.", "target": ["Wir entwickeln ein statistisch-geometrisches, unüberwachtes Lernverfahren für tiefe neuronale Netze, um sie robust gegen adversarial Angriffe zu machen.", "Umwandlung traditioneller tiefer neuronaler Netze in robuste adversarial Klassifizierer unter Verwendung von GRNs.", "Vorschlagen einer Verteidigung, die auf klassenbedingten Merkmalsverteilungen basiert, um tiefe neuronale Netze in robuste Klassifizierer zu verwandeln."]} +{"source": "Reinforcement learning in environments with large state-action spaces is challenging, as exploration can be highly inefficient. Even if the dynamics are simple, the optimal policy can be combinatorially hard to discover. In this work, we propose a hierarchical approach to structured exploration to improve the sample efficiency of on-policy exploration in large state-action spaces. The key idea is to model a stochastic policy as a hierarchical latent variable model, which can learn low-dimensional structure in the state-action space, and to define exploration by sampling from the low-dimensional latent space. This approach enables lower sample complexity, while preserving policy expressivity. In order to make learning tractable, we derive a joint learning and exploration strategy by combining hierarchical variational inference with actor-critic learning. The benefits of our learning approach are that 1) it is principled, 2) simple to implement, 3) easily scalable to settings with many actions and 4) easily composable with existing deep learning approaches. We demonstrate the effectiveness of our approach on learning a deep centralized multi-agent policy, as multi-agent environments naturally have an exponentially large state-action space. In this setting, the latent hierarchy implements a form of multi-agent coordination during exploration and execution (MACE). 
We demonstrate empirically that MACE can more efficiently learn optimal policies in challenging multi-agent games with a large number (~20) of agents, compared to conventional baselines. Moreover, we show that our hierarchical structure leads to meaningful agent coordination.", "target": ["Effizienteres tiefes Reinforcement Learning in großen Zustands-Aktionsräumen durch strukturierte Exploration mit tiefen hierarchischen Richtlinien.", "Eine Methode zur Koordinierung des Agentenverhaltens unter Verwendung von Strategien, die eine gemeinsame latente Struktur haben, eine Methode zur variablen Optimierung der Strategien, um die koordinierten Strategien zu optimieren, und eine Ableitung der variablen, hierarchischen Aktualisierung der Autoren.", "Diese Arbeit schlägt eine algorithmische Innovation vor, die aus hierarchischen latenten Variablen für die koordinierte Exploration in Multi-Agenten Umgebungen besteht."]} +{"source": "Much attention has been devoted recently to the generalization puzzle in deep learning: large, deep networks can generalize well, but existing theories bounding generalization error are exceedingly loose, and thus cannot explain this striking performance. Furthermore, a major hope is that knowledge may transfer across tasks, so that multi-task learning can improve generalization on individual tasks. However, we lack analytic theories that can quantitatively predict how the degree of knowledge transfer depends on the relationship between the tasks. We develop an analytic theory of the nonlinear dynamics of generalization in deep linear networks, both within and across tasks. In particular, our theory provides analytic solutions to the training and testing error of deep networks as a function of training time, number of examples, network size and initialization, and the task structure and SNR. Our theory reveals that deep networks progressively learn the most important task structure first, so that generalization error at the early stopping time primarily depends on task structure and is independent of network size. This suggests any tight bound on generalization error must take into account task structure, and explains observations about real data being learned faster than random data. Intriguingly, our theory also reveals the existence of a learning algorithm that provably outperforms neural network training through gradient descent. Finally, for transfer learning, our theory reveals that knowledge transfer depends sensitively, but computably, on the SNRs and input feature alignments of pairs of tasks.", "target": ["Wir bieten viele Einblicke in die Generalisierung neuronaler Netze aus dem theoretisch nachvollziehbaren linearen Fall.", "Die Autoren untersuchen ein einfaches Modell linearer Netzwerke zum Verständnis von Generalisierung und Transferlernen."]} +{"source": "In this work, we conduct a mathematical analysis of the effect of batch normalization (BN) on gradient backpropagation in residual network training; BN is believed to play a critical role in addressing the gradient vanishing/explosion problem. Specifically, by analyzing the mean and variance behavior of the input and the gradient in the forward and backward passes through the BN and residual branches, respectively, we show that they work together to confine the gradient variance to a certain range across residual blocks in backpropagation. As a result, the gradient vanishing/explosion problem is avoided. 
Furthermore, we use the same analysis to discuss the tradeoff between depth and width of a residual network and demonstrate that shallower yet wider resnets have stronger learning performance than deeper yet thinner resnets.", "target": ["Durch die Batch-Normalisierung bleibt die Gradientenvarianz während des gesamten Trainings erhalten, wodurch die Optimierung stabilisiert wird.", "In dieser Arbeit wurde die Auswirkung der Batch-Normalisierung auf die Gradienten Backpropagation in residualen Netzen analysiert."]} +{"source": "To study how mental object representations are related to behavior, we estimated sparse, non-negative representations of objects using human behavioral judgments on images representative of 1,854 object categories. These representations predicted a latent similarity structure between objects, which captured most of the explainable variance in human behavioral judgments. Individual dimensions in the low-dimensional embedding were found to be highly reproducible and interpretable as conveying degrees of taxonomic membership, functionality, and perceptual attributes. We further demonstrated the predictive power of the embeddings for explaining other forms of human behavior, including categorization, typicality judgments, and feature ratings, suggesting that the dimensions reflect human conceptual representations of objects beyond the specific task.", "target": ["Menschliche Verhaltensbeurteilungen werden verwendet, um spärliche und interpretierbare Darstellungen von Objekten zu erhalten, die sich auf andere Aufgaben übertragen lassen.", "Dieser Beitrag beschreibt ein groß angelegtes Experiment zu menschlichen Objekt-/Semantikrepräsentationen und ein Modell solcher Repräsentationen.", "In diesem Beitrag wird ein neues Repräsentationssystem für Objektrepräsentationen entwickelt, das auf der Grundlage von Daten trainiert wird, die aus menschlichen Beurteilungen von Bildern gewonnen wurden.", "Ein neuer Ansatz zum Erlernen eines spärlichen, positiven, interpretierbaren semantischen Raums, der die menschlichen Ähnlichkeitsurteile maximiert, indem er so trainiert wird, dass die Vorhersage menschlicher Ähnlichkeitsurteile speziell maximiert wird."]} +{"source": "We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. \n\n We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. \n\n The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks.\n\nWe also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. 
The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.", "target": ["Wir schlagen einen Agenten vor, der zwischen dem Benutzer und einem Black-Box-System zur Beantwortung von Fragen sitzt und lernt, Fragen so umzuformulieren, dass die bestmöglichen Antworten herauskommen.", "In diesem Beitrag wird eine aktive Beantwortung von Fragen durch einen Ansatz des verstärkten Lernens vorgeschlagen, der lernt, Fragen so umzuformulieren, dass sie die bestmöglichen Antworten liefern.", "Beschreibt anschaulich, wie die Forscher zwei Modelle für die Umformulierung von Fragen und die Auswahl von Antworten während der Beantwortung von Fragen entwickelt und aktiv trainiert haben."]} +{"source": "Most deep latent factor models choose simple priors for simplicity, tractability\n or not knowing what prior to use. Recent studies show that the choice of\n the prior may have a profound effect on the expressiveness of the model,\n especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders\n (AAEs). We introduce the notion of code generators to transform manually selected\n simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate better image quality and learn better disentangled representations than\n AAEs in both supervised and unsupervised settings. Lastly, we present its\n ability to do cross-domain translation in a text-to-image synthesis task.", "target": ["Lernen von Prioritäten für adversarial Autoencoder.", "Schlägt eine einfache Erweiterung der adversarial Autoencoder für die bedingte Bilderzeugung vor.", "Konzentriert sich auf adversarial Autoencoder und führt ein Codegenerator-Netzwerk ein, um einen einfachen Prior in einen umzuwandeln, der zusammen mit dem Generator die Datenverteilung besser abbilden kann."]} +{"source": "In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs). GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data. Attempts have been made for utilizing GANs with word embeddings for text generation. This work presents an approach to text generation using Skip-Thought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.", "target": ["Generierung von Text unter Verwendung von Satzeinbettungen aus Skip-Thought Vektoren mit Hilfe von generativen adversarial Netzen.", "Beschreibt die Anwendung von generativen adversarial Netzen zur Modellierung von Textdaten mit Hilfe von Skigedankenvektoren und Experimenten mit verschiedenen GANs für zwei unterschiedliche Datensätze."]} +{"source": "The novel \\emph{Unbiased Online Recurrent Optimization} (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. 
UORO is computationally as costly as \\emph{Truncated Backpropagation Through Time} (truncated BPTT), a widespread algorithm for online learning of recurrent networks \\cite{jaeger2002tutorial}. UORO is a modification of \\emph{NoBackTrack} \\cite{DBLP:journals/corr/OllivierC15} that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. On the contrary, truncated BPTT does not provide this property, leading to possible divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is very significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.\n", "target": ["Stellt eine unvoreingenommene und leicht zu implementierende Online-Gradientenschätzung für rekurrente Modelle vor.", "Die Autoren stellen einen neuen Ansatz für das Online-Lernen der Parameter rekurrenter neuronaler Netze aus langen Sequenzen vor, der die Imitation der abgeschnittenen Backpropagation durch die Zeit überwindet.", "In diesem Beitrag wird das Online-Training von RNNs auf prinzipielle Weise angegangen, und es wird eine Modifikation von RTRL und die Verwendung eines Forward Ansatzes für die Gradientenberechnung vorgeschlagen."]} +{"source": "We present a deep learning-based method for super-resolving coarse (low-resolution) labels assigned to groups of image pixels into pixel-level (high-resolution) labels, given the joint distribution between those low- and high-resolution labels. This method involves a novel loss function that minimizes the distance between a distribution determined by a set of model outputs and the corresponding distribution given by low-resolution labels over the same set of outputs. This setup does not require that the high-resolution classes match the low-resolution classes and can be used in high-resolution semantic segmentation tasks where high-resolution labeled data is not available. Furthermore, our proposed method is able to utilize both data with low-resolution labels and any available high-resolution labels, which we show improves performance compared to a network trained only with the same amount of high-resolution data.\n We test our proposed algorithm in a challenging land cover mapping task to super-resolve labels at a 30m resolution to a separate set of labels at a 1m resolution. We compare our algorithm with models that are trained on high-resolution data and show that 1) we can achieve similar performance using only low-resolution data; and 2) we can achieve better performance when we incorporate a small amount of high-resolution data in our training. 
We also test our approach on a medical imaging problem, resolving low-resolution probability maps into high-resolution segmentation of lymphocytes with accuracy equal to that of fully supervised models.", "target": ["Superauflösung grober Labels in Beschriftungen auf Pixelebene, angewandt auf Luftbilder und medizinische Scans.", "Eine Methode zur Superauflösung von groben, niedrig aufgelösten Segmentierungslabels, wenn die gemeinsame Verteilung von niedrig und hoch aufgelösten Labels bekannt ist."]} +{"source": "We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features has correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data.", "target": ["Wir schlagen eine Methode vor, um die latenten Merkmale, die aus verschiedenen Datensätzen gelernt wurden, mithilfe harmonischer Korrelationen abzugleichen.", "Es wird vorgeschlagen, Merkmalskorrespondenzen zu verwenden, um einen vielfältigen Abgleich zwischen Datenstapeln aus denselben Proben vorzunehmen, um die Sammlung gestörter Messungen zu vermeiden."]} +{"source": "Reinforcement learning (RL) has proven to be a powerful paradigm for deriving complex behaviors from simple reward signals in a wide range of environments. When applying RL to continuous control agents in simulated physics environments, the body is usually considered to be part of the environment. However, during evolution the physical body of biological organisms and their controlling brains are co-evolved, thus exploring a much larger space of actuator/controller configurations. Put differently, the intelligence does not reside only in the agent's mind, but also in the design of its body. \n We propose a method for uncovering strong agents, consisting of a good combination of a body and policy, based on combining RL with an evolutionary procedure. Given the resulting agent, we also propose an approach for identifying the body changes that contributed the most to the agent's performance. We use the Shapley value from cooperative game theory to find the fair contribution of individual components, taking into account synergies between components. \n We evaluate our methods in an environment similar to the recently proposed Robo-Sumo task, where agents in a 3D environment with simulated physics compete in tipping over their opponent or pushing them out of the arena. 
Our results show that the proposed methods are indeed capable of generating strong agents, significantly outperforming baselines that focus on optimizing the agent policy alone. \n\n A video is available at: www.youtube.com/watch?v=eei6Rgom3YY", "target": ["Die Entwicklung der Körperform bei RL-gesteuerten Agenten verbessert deren Leistung (und hilft beim Lernen).", "PEOM-Algorithmus, der den Shapley-Wert einbezieht, um die Entwicklung zu beschleunigen, indem der Beitrag jedes Körperteils ermittelt wird."]} +{"source": "Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never. Even on toy problems, deep reinforcement learners periodically revisit these states, once they are forgotten under a new policy. In this paper, we introduce intrinsic fear, a learned reward shaping that accelerates deep reinforcement learning and guards oscillating policies against periodic catastrophes. Our approach incorporates a second model trained via supervised learning to predict the probability of imminent catastrophe. This score acts as a penalty on the Q-learning objective. Our theoretical analysis demonstrates that the perturbed objective yields the same average return under strong assumptions and an $\\epsilon$-close average return under weaker assumptions. Our analysis also shows robustness to classification errors. Equipped with intrinsic fear, our DQNs solve the toy environments and improve on the Atari games Seaquest, Asteroids, and Freeway.", "target": ["Gestalten Sie die Belohnung mit intrinsischer Motivation, um katastrophale Zustände zu vermeiden und das katastrophale Vergessen abzuschwächen.", "Ein RL-Algorithmus, der den DQN-Algorithmus mit einem parallel trainierten Angstmodell kombiniert, um katastrophale Zustände vorherzusagen.", "Die Arbeit untersucht katastrophales Vergessen in RL, indem es Aufgaben hervorhebt, bei denen ein DQN in der Lage ist, zu lernen, katastrophale Ereignisse zu vermeiden, solange es das Vergessen vermeidet."]} +{"source": "Convolution is an efficient technique to obtain abstract feature representations using hierarchical layers in deep networks. Although performing convolution in Euclidean geometries is fairly straightforward, its extension to other topological spaces---such as a sphere S^2 or a unit ball B^3---entails unique challenges. In this work, we propose a novel `\"volumetric convolution\" operation that can effectively convolve arbitrary functions in B^3. We develop a theoretical framework for \"volumetric convolution\" based on Zernike polynomials and efficiently implement it as a differentiable and an easily pluggable layer for deep networks. Furthermore, our formulation leads to derivation of a novel formula to measure the symmetry of a function in B^3 around an arbitrary axis, that is useful in 3D shape analysis tasks. 
We demonstrate the efficacy of the proposed volumetric convolution operation on a possible use case, i.e., the 3D object recognition task.", "target": ["Ein neuartiger Convolution Operator für automatisches Repräsentationslernen in der Einheitskugel.", "Diese Arbeit steht im Zusammenhang mit den jüngsten Arbeiten über sphärische CNN und äquivariante SE(n)-Netzwerke und erweitert frühere Ideen auf volumetrische Daten in der Einheitskugel.", "Er schlägt vor, volumetrische Convolutions auf Convolution Networks zu verwenden, um die Einheitskugel zu lernen, und erörtert die Methodik und die Ergebnisse des Prozesses."]} +{"source": "Learning in environments with large state and action spaces, and sparse rewards, can hinder a Reinforcement Learning (RL) agent’s learning through trial-and-error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where input vocabulary and number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. We approach the aforementioned problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction-following tasks and trains the agent more effectively. We train DQN, a deep reinforcement learning agent, with its Q-value function approximated by a novel QWeb neural network architecture, on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstrations, achieving a 100% success rate on several difficult environments.", "target": ["Wir trainieren Strategien des Reinforcement Learning mit Hilfe von Belohnungserweiterung, Curriculum-Lernen und Meta-Lernen, um erfolgreich durch Webseiten zu navigieren.", "Entwickelt eine Lehrplan-Lernmethode für das Training eines RL-Agenten zur Navigation in einem Web, basierend auf der Idee, eine Anweisung in mehrere Unteranweisungen zu zerlegen."]} +{"source": "Labeled text classification datasets are typically only available in a few select languages. In order to train a model for, e.g., news categorization in a language $L_t$ without a suitable text classification dataset there are two options. The first option is to create a new labeled dataset by hand, and the second option is to transfer label information from an existing labeled dataset in a source language $L_s$ to the target language $L_t$. In this paper we propose a method for sharing label information across languages by means of a language-independent text encoder. The encoder will give almost identical representations to multilingual versions of the same text. This means that labeled data in one language can be used to train a classifier that works for the rest of the languages. 
The encoder is trained independently of any concrete classification task and can therefore subsequently be used for any classification task. We show that it is possible to obtain good performance even in the case where only a comparable corpus of texts is available.", "target": ["Sprachübergreifende Textklassifizierung durch universelle Kodierung.", "In diesem Beitrag wird ein Ansatz zur sprachübergreifenden Textklassifikation durch die Verwendung vergleichbarer Korpora vorgeschlagen.", "Lernen von sprachenübergreifenden Einbettungen und Trainieren eines Klassifizierers unter Verwendung etikettierter Daten in der Ausgangssprache, um das Lernen eines sprachenübergreifenden Textkategorisierers ohne etikettierte Informationen in der Zielsprache anzugehen."]} +{"source": "Syntax is a powerful abstraction for language understanding. Many downstream tasks require segmenting input text into meaningful constituent chunks (e.g., noun phrases or entities); more generally, models for learning semantic representations of text benefit from integrating syntax in the form of parse trees (e.g., tree-LSTMs). Supervised parsers have traditionally been used to obtain these trees, but lately interest has increased in unsupervised methods that induce syntactic representations directly from unlabeled text. To this end, we propose the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree. Unlike many prior approaches, DIORA does not rely on supervision from auxiliary downstream tasks and is thus not constrained to particular domains. Furthermore, competing approaches do not learn explicit phrase representations along with tree structures, which limits their applicability to phrase-based tasks. Extensive experiments on unsupervised parsing, segmentation, and phrase clustering demonstrate the efficacy of our method. DIORA achieves the state of the art in unsupervised parsing (46.9 F1) on the benchmark WSJ dataset.", "target": ["In dieser Arbeit schlagen wir Deep Inside-Outside Recursive Auto-Encoder (DIORA) vor, eine vollständig unbeaufsichtigte Methode zur Entdeckung der Syntax bei gleichzeitigem Lernen von Repräsentationen für die entdeckten Konstituenten. ", "Ein neuronales latentes Baummodell, das mit einem Auto-Encoding Ziel trainiert wird, das den Stand der Technik beim unüberwachten Konstituenten Parsing erreicht und die syntaktische Struktur besser erfasst als andere latente Baummodelle.", "Die Arbeit schlägt ein Modell für unüberwachtes Dependency Parsing (latente Bauminduktion) vor, das auf einer Kombination des Inside-Outside-Algorithmus mit neuronaler Modellierung (rekursive Auto-Encoder) basiert. "]} +{"source": "Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. 
We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.", "target": ["Wir untersuchen die Verzerrung des kurzsichtigen Meta-Optimierungsziels.", "In diesem Beitrag werden ein vereinfachtes Modell und ein Problem vorgeschlagen, um die kurzfristige Verzerrung der Meta-Optimierung der Lernrate zu demonstrieren.", "In diesem Beitrag wird die Frage der verkürzten Backpropagation für die Meta-Optimierung anhand einer Reihe von Experimenten an einem Spielzeugproblem untersucht."]} +{"source": "Mainstream captioning models often follow a sequential structure to generate captions, leading to issues such as introduction of irrelevant semantics, lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning, which factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption based on a recursive compositional procedure in a bottom-up manner. Compared to conventional ones, our paradigm better preserves the semantic content through an explicit factorization of semantics and syntax. By using the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less data to train, generalizes better, and yields more diverse captions.", "target": ["Eine hierarchische und kompositorische Methode zur Erstellung von Beschriftungen.", "In dieser Arbeit wird eine besser interpretierbare Methode für Bildunterschriften vorgestellt."]} +{"source": "While many approaches to make neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data. Measures for characterizing and monitoring structural properties, however, have not been developed. In this work, we propose neural persistence, a complexity measure for neural network architectures based on topological data analysis on weighted stratified graphs. To demonstrate the usefulness of our approach, we show that neural persistence reflects best practices developed in the deep learning community such as dropout and batch normalization.
Moreover, we derive a neural persistence-based stopping criterion that shortens the training process while achieving comparable accuracies as early stopping based on validation loss.", "target": ["Wir entwickeln ein neues topologisches Komplexitätsmaß für tiefe neuronale Netze und zeigen, dass es die wichtigsten Eigenschaften dieser Netze erfasst.", "In diesem Beitrag wird der Begriff der neuronalen Persistenz vorgeschlagen, ein topologisches Maß für die Zuordnung von Punktwerten zu vollständig verbundenen Schichten in einem neuronalen Netz.", "In der Arbeit wird vorgeschlagen, die Komplexität eines neuronalen Netzes anhand seiner nullten persistenten Homologie zu analysieren."]} +{"source": "Deep neural networks (DNNs) are vulnerable to adversarial examples, which are carefully crafted instances aiming to cause prediction errors for DNNs. Recent research on adversarial examples has examined local neighborhoods in the input space of DNN models. However, previous work has limited what regions to consider, focusing either on low-dimensional subspaces or small balls. In this paper, we argue that information from larger neighborhoods, such as from more directions and from greater distances, will better characterize the relationship between adversarial examples and the DNN models. First, we introduce an attack, OPTMARGIN, which generates adversarial examples robust to small perturbations. These examples successfully evade a defense that only considers a small ball around an input instance. Second, we analyze a larger neighborhood around input instances by looking at properties of surrounding decision boundaries, namely the distances to the boundaries and the adjacent classes. We find that the boundaries around these adversarial examples do not resemble the boundaries around benign examples. Finally, we show that, under scrutiny of the surrounding decision boundaries, our OPTMARGIN examples do not convincingly mimic benign examples. Although our experiments are limited to a few specific attacks, we hope these findings will motivate new, more evasive attacks and ultimately, effective defenses.", "target": ["Die Betrachtung der Entscheidungsgrenzen um eine Eingabe herum gibt Ihnen mehr Informationen als eine feste kleine Nachbarschaft.", "Die Autoren stellen einen neuartigen Angriff zur Generierung von adversarial Beispielen vor, bei dem sie Klassifikatoren angreifen, die durch zufällige Klassifizierung von kleinen L2-Störungen erstellt wurden.", "Ein neuer Ansatz zur Erzeugung von adversarial Angriffen auf ein neuronales Netz und eine Methode zur Verteidigung eines neuronalen Netzes gegen diese Angriffe."]} +{"source": "Machine learning models are usually tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of both weights and hyperparameters. Our method trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our method converges to locally optimal weights and hyperparameters for sufficiently large hypernets. 
We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters.", "target": ["Wir trainieren ein neuronales Netz, um annähernd optimale Gewichte in Abhängigkeit von den Hyperparametern auszugeben.", "Hyper-Netzwerke für die Optimierung von Hyper-Parametern in neuronalen Netzwerken."]} +{"source": "Estimating covariances between financial assets plays an important role in risk management. In practice, when the sample size is small compared to the number of variables, the empirical estimate is known to be very unstable. Here, we propose a novel covariance estimator based on the Gaussian Process Latent Variable Model (GP-LVM). Our estimator can be considered as a non-linear extension of standard factor models with readily interpretable parameters reminiscent of market betas. Furthermore, our Bayesian treatment naturally shrinks the sample covariance matrix towards a more structured matrix given by the prior and thereby systematically reduces estimation errors. Finally, we discuss some financial applications of the GP-LVM model.", "target": ["Schätzung der Kovarianzmatrix von Finanzanlagen mit Gauß-Prozess-Modellen mit latente Variablen.", "Veranschaulicht, wie das Gaussian Process Latent Variable Model (GP-LVM) klassische lineare Faktormodelle für die Schätzung von Kovarianzmatrizen in Portfolio-Optimierungsproblemen ersetzen kann.", "In dieser Arbeit werden Standard-GPLVMs verwendet, um die Kovarianzstruktur und eine latente Raumdarstellung von S&P500-Finanzzeitreihen zu modellieren, um Portfolios zu optimieren und fehlende Werte vorherzusagen.", "In diesem Papier wird vorgeschlagen, ein GPLVM zur Modellierung von Finanzerträgen zu verwenden."]} +{"source": "We study how, in generative adversarial networks, variance in the discriminator's output affects the generator's ability to learn the data distribution. In particular, we contrast the results from various well-known techniques for training GANs when the discriminator is near-optimal and updated multiple times per update to the generator. As an alternative, we propose an additional method to train GANs by explicitly modeling the discriminator's output as a bi-modal Gaussian distribution over the real/fake indicator variables. In order to do this, we train the Gaussian classifier to match the target bi-modal distribution implicitly through meta-adversarial training. We observe that our new method, when trained together with a strong discriminator, provides meaningful, non-vanishing gradients.", "target": ["Wir führen meta-adversariales Lernen ein, eine neue Technik zur Regularisierung von GANs, und schlagen eine Trainingsmethode vor, die explizit die Ausgangsverteilung des Diskriminators kontrolliert.", "Die Arbeit schlägt Varianz regularisierendes adversarial Lernen für das Training von GANs vor, um sicherzustellen, dass der Gradient für den Generator nicht verschwindet."]} +{"source": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. 
We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and epsilon-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.", "target": ["Ein Deep Reinforcement Learning Agent, dessen Gewichte mit parametrischen Störungen versehen sind, kann zur Unterstützung einer effizienten Erkundung eingesetzt werden.", "In diesem Beitrag werden NoisyNets vorgestellt, neuronale Netze, deren Parameter durch eine parametrische Störfunktion gestört werden, und die eine erhebliche Leistungsverbesserung gegenüber grundlegenden Algorithmen für tiefes Reinforcement Learning erzielen.", "Neue Explorationsmethode für tiefe RL durch Einspeisung von Störungen in die Gewichte der tiefen Netze, wobei die Störungen verschiedene Formen annehmen können."]} +{"source": "Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose \"Active Neural Localizer\", a fully differentiable neural network that learns to localize efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to minimize the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model's capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine.", "target": ["Active Neural Localizer, ein vollständig differenzierbares neuronales Netz, das mit Hilfe von Deep Reinforcement Learning eine effiziente Lokalisierung lernt.", "In diesem Beitrag wird das Problem der Lokalisierung auf einer bekannten Karte unter Verwendung eines Glaubensnetzes als RL-Problem formuliert, bei dem das Ziel des Agenten darin besteht, die Anzahl der Schritte zu minimieren, um sich selbst zu lokalisieren.", "Dies ist eine klare und interessante Arbeit, die ein parametrisiertes Netzwerk zur Auswahl von Aktionen für einen Roboter in einer simulierten Umgebung aufbaut."]} +{"source": "Machine translation is an important real-world application, and neural network-based AutoRegressive Translation (ART) models have achieved very promising accuracy. Due to the unparallelizable nature of the autoregressive factorization, ART models have to generate tokens one by one during decoding and thus suffer from high inference latency. Recently, Non-AutoRegressive Translation (NART) models were proposed to reduce the inference time. However, they could only achieve inferior accuracy compared with ART models. 
To improve the accuracy of NART models, in this paper, we propose to leverage the hints from a well-trained ART model to train the NART model. We define two hints for the machine translation task: hints from hidden states and hints from word alignments, and use such hints to regularize the optimization of NART models. Experimental results show that the NART model trained with hints could achieve significantly better translation performance than previous NART models on several tasks. In particular, for the WMT14 En-De and De-En task, we obtain BLEU scores of 25.20 and 29.52 respectively, which largely outperforms the previous non-autoregressive baselines. It is even comparable to a strong LSTM-based ART model (24.60 on WMT14 En-De), but one order of magnitude faster in inference.", "target": ["Wir entwickeln einen Trainingsalgorithmus für nicht-autoregressive maschinelle Übersetzungsmodelle, der eine vergleichbare Genauigkeit wie starke autoregressive Grundmodelle erreicht, aber eine Größenordnung schneller in der Inferenz ist. ", "Entnimmt Wissen aus versteckten Zwischenzuständen und Aufmerksamkeitsgewichten, um die nicht-autoregressive neuronale maschinelle Übersetzung zu verbessern.", "Schlägt vor, ein gut trainiertes autoregressives Modell zu nutzen, um die versteckten Zustände und die Wortausrichtung von nicht-autoregressiven neuronalen maschinellen Übersetzungsmodellen zu informieren."]} +{"source": "Artificial neural networks are built on the basic operation of linear combination and non-linear activation function. Theoretically this structure can approximate any continuous function with three layer architecture. But in practice learning the parameters of such network can be hard. Also the choice of activation function can greatly impact the performance of the network. In this paper we are proposing to replace the basic linear combination operation with non-linear operations that do away with the need of additional non-linear activation function. To this end we are proposing the use of elementary morphological operations (dilation and erosion) as the basic operation in neurons. We show that these networks (Denoted as Morph-Net) with morphological operations can approximate any smooth function requiring less number of parameters than what is necessary for normal neural networks. The results show that our network perform favorably when compared with similar structured network. We have carried out our experiments on MNIST, Fashion-MNIST, CIFAR10 and CIFAR100.", "target": ["Mit Hilfe mophologischer Operationen (Dilatation und Erosion) haben wir eine Klasse von Netzen definiert, die eine beliebige kontinuierliche Funktion annähern können. ", "In dieser Arbeit wird vorgeschlagen, die Standard RELU/tanh Einheiten durch eine Kombination von Dilatations- und Erosionsoperationen zu ersetzen, wobei festgestellt wird, dass der neue Operator mehr Hyperebenen erzeugt und eine größere Ausdruckskraft hat.", "Die Autoren stellen Morph-Net vor, ein einschichtiges neuronales Netz, bei dem die Abbildung durch morphologische Dilatation und Erosion erfolgt."]} +{"source": "With the rapidly scaling up of deep neural networks (DNNs), extensive research studies on network model compression such as weight pruning have been performed for efficient deployment. This work aims to advance the compression beyond the weights to the activations of DNNs. We propose the Integral Pruning (IP) technique which integrates the activation pruning with the weight pruning. 
Through the learning on the different importance of neuron responses and connections, the generated network, namely IPnet, balances the sparsity between activations and weights and therefore further improves execution efficiency. The feasibility and effectiveness of IPnet are thoroughly evaluated through various network models with different activation functions and on different datasets. With <0.5% disturbance on the testing accuracy, IPnet saves 71.1% ~ 96.35% of computation cost, compared to the original dense models with up to 5.8x and 10x reductions in activation and weight numbers, respectively.", "target": ["Diese Arbeit erweitert die DNN Kompression über die Gewichte hinaus auf die Aktivierungen, durch Integration des Prunings von Aktivierungen in das Pruning der Gewichte. ", "Eine integrale Modellkompressionsmethode, die sowohl Gewichts- als auch Aktivierungs- Pruning handhabt, was zu einer effizienteren Netzberechnung und einer effektiven Reduzierung der Anzahl von Multiplikationen und Akkumulationen führt.", "In diesem Artikel wird ein neuartiger Ansatz zur Verringerung der Rechenkosten von tiefen neuronalen Netzen durch die Integration von Aktivierungs-Pruning zusammen mit Gewichts-Pruning vorgestellt. Es wird gezeigt, dass gängige Techniken des ausschließlichen Gewichts-Pruning die Anzahl der Aktivierungen ungleich Null nach ReLU erhöhen."]} +{"source": "The Variational Auto Encoder (VAE) is a popular generative \nlatent variable model that is often \napplied for representation learning.\n Standard VAEs assume continuous valued \nlatent variables and are trained by maximization\nof the evidence lower bound (ELBO). Conventional methods obtain a \ndifferentiable estimate of the ELBO with reparametrized sampling and\noptimize it with Stochastic Gradient Descend (SGD). However, this is not possible if \nwe want to train VAEs with discrete valued latent variables, \nsince reparametrized sampling is not possible. Till now, there\nexist no simple solutions to circumvent this problem.\n In this paper, we propose an easy method to train VAEs \nwith binary or categorically valued latent representations. Therefore, we use a differentiable\nestimator for the ELBO which is based on importance sampling. In experiments, we verify the approach and\ntrain two different VAEs architectures with Bernoulli and \nCategorically distributed latent representations on two different benchmark\ndatasets.\t", "target": ["Wir schlagen eine einfache Methode zum Trainieren von variationalen Auto Encoders (VAE) mit diskreten latenten Repräsentationen unter Verwendung von Wichtigkeitssampling vor.", "Einführung einer Wichtigkeits-Sampling Verteilung und Verwendung von Stichproben aus der Verteilung zur Berechnung der Wichtigkeitsgewichteten Schätzung des Gradienten", "In dieser Arbeit wird vorgeschlagen, wichtige Stichproben zur Optimierung der VAE mit diskreten latenten Variablen zu verwenden."]} +{"source": "Distributed computing can significantly reduce the training time of neural networks. Despite its potential, however, distributed training has not been widely adopted: scaling the training process is difficult, and existing SGD methods require substantial tuning of hyperparameters and learning schedules to achieve sufficient accuracy when increasing the number of workers. 
In practice, such tuning can be prohibitively expensive given the huge number of potential hyperparameter configurations and the effort required to test each one.\n \n We propose DANA, a novel approach that scales out-of-the-box to large clusters using the same hyperparameters and learning schedule optimized for training on a single worker, while maintaining similar final accuracy without additional overhead. DANA estimates the future value of model parameters by adapting Nesterov Accelerated Gradient to a distributed setting, and so mitigates the effect of gradient staleness, one of the main difficulties in scaling SGD to more workers.\n\n Evaluation on three state-of-the-art network architectures and three datasets shows that DANA scales as well as or better than existing work without having to tune any hyperparameters or tweak the learning schedule. For example, DANA achieves 75.73% accuracy on ImageNet when training ResNet-50 with 16 workers, similar to the non-distributed baseline.", "target": ["Ein neuer verteilter asynchroner SGD Algorithmus, der auf bestehenden Architekturen ohne zusätzliches Tuning oder Overhead die höchste Genauigkeit erreicht.", "Vorschlagen einer Verbesserung bestehender ASGD-Ansätze bei mittlerer Skalierung, die Momentum mit SGD für asynchrones Training über einen verteilten Worker-Pool verwendet.", "Diese Arbeit befasst sich mit der Staleness des Gradienten im Vergleich zur parallelen Leistung Problem beim verteilten Deep Learning-Training und schlägt einen Ansatz zur Schätzung zukünftiger Modellparameter an den Slaves vor, um die Auswirkungen der Kommunikationslatenz zu reduzieren."]} +{"source": "This paper proposes a novel approach to train deep neural networks by unlocking the layer-wise dependency of backpropagation training. The approach employs additional modules called local critic networks besides the main network model to be trained, which are used to obtain error gradients without complete feedforward and backward propagation processes. We propose a cascaded learning strategy for these local networks. In addition, the approach is also useful from multi-model perspectives, including structural optimization of neural networks, computationally efficient progressive inference, and ensemble classification for performance improvement. Experimental results show the effectiveness of the proposed approach and suggest guidelines for determining appropriate algorithm parameters.", "target": ["Wir schlagen einen neuen Lernalgorithmus für tiefe neuronale Netze vor, der die schichtweise Abhängigkeit der Backpropagation aufhebt.", "Ein alternatives Trainingsparadigma für DNIs, bei dem das Hilfsmodul so trainiert wird, dass es die endgültige Ausgabe des ursprünglichen Modells direkt annähert, bietet Nebeneffekte.", "Beschreibt eine Methode zum Training neuronaler Netze ohne Aktualisierungssperre."]} +{"source": "\\emph{Truncated Backpropagation Through Time} (truncated BPTT, \\cite{jaeger2002tutorial}) is a widespread method for learning recurrent computational graphs. Truncated BPTT keeps the computational benefits of \\emph{Backpropagation Through Time} (BPTT \\cite{werbos:bptt}) while relieving the need for a complete backtrack through the whole data sequence at every step. However, truncation favors short-term dependencies: the gradient estimate of truncated BPTT is biased, so that it does not benefit from the convergence guarantees from stochastic gradient theory. 
We introduce \\emph{Anticipated Reweighted Truncated Backpropagation} (ARTBP), an algorithm that keeps the computational benefits of truncated BPTT, while providing unbiasedness. ARTBP works by using variable truncation lengths together with carefully chosen compensation factors in the backpropagation equation. We check the viability of ARTBP on two tasks. First, a simple synthetic task where careful balancing of temporal dependencies at different scales is needed: truncated BPTT displays unreliable performance, and in worst case scenarios, divergence, while ARTBP converges reliably. Second, on Penn Treebank character-level language modelling \\cite{ptb_proc}, ARTBP slightly outperforms truncated BPTT.\n", "target": ["Bietet eine unverzerrte Version der verkürzten Backpropagation durch Stichproben der Verkürzungslängen und entsprechende Neugewichtung.", "Vorschlagen einer stochastische Methoden zur Bestimmung von Abbruchpunkten in der Backpropagation durch die Zeit.", "Eine neue Annäherung an die Backpropagation durch die Zeit, um die Rechen- und Speicherbelastung zu überwinden, die entsteht, wenn man aus langen Sequenzen lernen muss."]} +{"source": "Graph convolutional networks (GCNs) have been widely used for classifying graph nodes in the semi-supervised setting.\n Previous works have shown that GCNs are vulnerable to the perturbation on adjacency and feature matrices of existing nodes. However, it is unrealistic to change the connections of existing nodes in many applications, such as existing users in social networks. In this paper, we investigate methods attacking GCNs by adding fake nodes. A greedy algorithm is proposed to generate adjacency and feature matrices of fake nodes, aiming to minimize the classification accuracy on the existing ones. In additional, we introduce a discriminator to classify fake nodes from real nodes, and propose a Greedy-GAN algorithm to simultaneously update the discriminator and the attacker, to make fake nodes indistinguishable to the real ones. Our non-targeted attack decreases the accuracy of GCN down to 0.10, and our targeted attack reaches a success rate of 0.99 for attacking the whole datasets, and 0.94 on average for attacking a single node.", "target": ["nicht gezielte und gezielte Angriffe auf GCN durch Hinzufügen gefälschter Knotenpunkte", "Die Autoren schlagen eine neue Technik vor, mit der \"falsche\" Knoten hinzugefügt werden können, um einen GCN-basierten Klassifikator zu täuschen."]} +{"source": "Transfer learning aims to solve the data sparsity for a specific domain by applying information of another domain. Given a sequence (e.g. a natural language sentence), the transfer learning, usually enabled by recurrent neural network (RNN), represent the sequential information transfer. RNN uses a chain of repeating cells to model the sequence data. However, previous studies of neural network based transfer learning simply transfer the information across the whole layers, which are unfeasible for seq2seq and sequence labeling. Meanwhile, such layer-wise transfer learning mechanisms also lose the fine-grained cell-level information from the source domain.\n\n In this paper, we proposed the aligned recurrent transfer, ART, to achieve cell-level information transfer. ART is in a recurrent manner that different cells share the same parameters. Besides transferring the corresponding information at the same position, ART transfers information from all collocated words in the source domain. 
This strategy enables ART to capture the word collocation across domains in a more flexible way. We conducted extensive experiments on both sequence labeling tasks (POS tagging, NER) and sentence classification (sentiment analysis). ART outperforms the state-of-the-arts over all experiments.\n", "target": ["Transferlernen für Sequenzen durch Lernen, um Informationen auf Zellebene über Domänen hinweg abzugleichen.", "In dem Papier wird vorgeschlagen, RNN/LSTM mit Kollokationsabgleich als Repräsentationslernmethode für Transferlernen/Domänenanpassung in NLP zu verwenden."]} +{"source": "Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers.", "target": ["Wir formulieren die Modellunsicherheit beim Reinforcement Learning als einen kontinuierlichen Bayes Adaptiven Markov Entscheidungsprozess und präsentieren eine Methode zur praktischen und skalierbaren Bayes'schen Optimierung von Strategien.", "Mit einem Bayes'schen Ansatz lässt sich ein besseres Gleichgewicht zwischen Erkundung und Ausbeutung in RL erreichen."]} +{"source": "For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic.\n We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be ``won'' by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation, implement an example black-box metric based on these ideas, and validate experimentally that it can measure a notion of generalization.\n", "target": ["Wir argumentieren, dass GAN-Benchmarks eine große Stichprobe des Modells erfordern müssen, um das Auswendiglernen zu bestrafen, und untersuchen, ob die Divergenzen neuronaler Netze diese Eigenschaft aufweisen.", "Die Autoren schlagen ein Kriterium für die Bewertung der Qualität der von einem generativen adversarial Netzwerk erzeugten Beispielen vor."]} +{"source": "Conventional methods model open domain dialogue generation as a black box through end-to-end learning from large scale conversation data. 
In this work, we make the first step to open the black box by introducing dialogue acts into open domain dialogue generation. The dialogue acts are generally designed and reveal how people engage in social chat. Inspired by analysis on real data, we propose jointly modeling dialogue act selection and response generation, and perform learning with human-human conversations tagged with a dialogue act classifier and a reinforcement approach to further optimizing the model for long-term conversation. With the dialogue acts, we not only achieve significant improvement over state-of-the-art methods on response quality for given contexts and long-term conversation in both machine-machine simulation and human-machine conversation, but also are capable of explaining why such achievements can be made.", "target": ["Erzeugung von Dialogen im offenen Bereich mit Dialogakten.", "Die Autoren verwenden eine Fernüberwachungsmethode, um Tags für Dialoghandlungen als Konditionierungsfaktor für die Generierung von Antworten in Dialogen mit offener Domäne hinzuzufügen.", "Das Papier beschreibt eine Technik zur Einbindung von Dialoghandlungen in neuronale Konversationsagenten."]} +{"source": "We discuss the feasibility of the following learning problem: given unmatched samples from two domains and nothing else, learn a mapping between the two, which preserves semantics. Due to the lack of paired samples and without any definition of the semantic information, the problem might seem ill-posed. Specifically, in typical cases, it seems possible to build infinitely many alternative mappings from every target mapping. This apparent ambiguity stands in sharp contrast to the recent empirical success in solving this problem.\n\n We identify the abstract notion of aligning two domains in a semantic way with concrete terms of minimal relative complexity. A theoretical framework for measuring the complexity of compositions of functions is developed in order to show that it is reasonable to expect the minimal complexity mapping to be unique. The measured complexity used is directly related to the depth of the neural networks being learned and a semantically aligned mapping could then be captured simply by learning using architectures that are not much bigger than the minimal architecture.\n\n Various predictions are made based on the hypothesis that semantic alignment can be captured by the minimal mapping. These are verified extensively. In addition, a new mapping algorithm is proposed and shown to lead to better mapping results.", "target": ["Unsere Hypothese ist, dass bei zwei Bereichen die Abbildung mit der geringsten Komplexität und der geringsten Diskrepanz der Zielabbildung am nächsten kommt.", "Die Arbeit befasst sich mit dem Problem des Lernens von Zuordnungen zwischen verschiedenen Domänen ohne jegliche Überwachung und stellt drei Vermutungen auf.", "Es wird gezeigt, dass es beim unüberwachten Lernen auf unausgerichteten Daten möglich ist, die Abbildung zwischen den Domänen nur mit GAN ohne Rekonstruktionsverlust zu lernen."]} +{"source": "We present a novel approach for the certification of neural networks against adversarial perturbations which combines scalable overapproximation methods with precise (mixed integer) linear programming. 
This results in significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise linear activation functions.", "target": ["Wir verfeinern die Ergebnisse der Über-Approximation von unvollständigen Verifikatoren mit MILP-Lösern, um mehr Robustheitseigenschaften als der Stand der Technik zu beweisen. ", "Vorstellen eines Verifizierer, der die Genauigkeit von unvollständigen Verifizierern und die Skalierbarkeit von vollständigen Verifizierern durch Überparametrisierung, gemischt ganzzahlige lineare Programmierung und Entspannung der linearen Programmierung verbessert.", "Eine gemischte Strategie zur Erzielung einer besseren Präzision bei Robustheitsüberprüfungen von Feed-Forward neuronalen Netzen mit stückweise linearen Aktivierungsfunktionen, wobei eine bessere Präzision als bei unvollständigen Überprüfern und eine bessere Skalierbarkeit als bei vollständigen Überprüfern erreicht wird."]} +{"source": "A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data. In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back-propagation algorithm used for neural networks (Eisner (2016)). Do these observations suggest that, despite their many apparent differences, HMMs are a special case of RNNs? In this paper, we show that that is indeed the case, and investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization. In particular, we investigate three key design factors—independence assumptions between the hidden states and the observation, the placement of softmaxes, and the use of non-linearities—in order to pin down their empirical effects. We present a comprehensive empirical study to provide insights into the interplay between expressivity and interpretability in this model family with respect to language modeling and parts-of-speech induction.", "target": ["Sind HMMs ein Spezialfall von RNNs? 
Wir untersuchen eine Reihe von architektonischen Transformationen zwischen HMMs und RNNs, sowohl durch theoretische Ableitungen als auch durch empirische Hybridisierung und liefern neue Erkenntnisse.", "In diesem Beitrag wird untersucht, ob HMMs ein Spezialfall von RNNs sind, die Sprachmodellierung und POS-Tagging verwenden."]} +{"source": "Deep neural networks have been tremendously successful in a number of tasks.\n One of the main reasons for this is their capability to automatically\n learn representations of data in levels of abstraction,\n increasingly disentangling the data as the internal transformations are applied.\n In this paper we propose a novel regularization method that penalize covariance between dimensions of the hidden layers in a network, something that benefits the disentanglement.\n This makes the network learn nonlinear representations that are linearly uncorrelated, yet allows the model to obtain good results on a number of tasks, as demonstrated by our experimental evaluation.\n The proposed technique can be used to find the dimensionality of the underlying data, because it effectively disables dimensions that aren't needed.\n Our approach is simple and computationally cheap, as it can be applied as a regularizer to any gradient-based learning model.", "target": ["Wir schlagen eine neuartige Regularisierungsmethode vor, die die Kovarianz zwischen den Dimensionen der versteckten Schichten in einem Netzwerk bestraft.", "Diese Arbeit stellt einen Regularisierungsmechanismus vor, der die Kovarianz zwischen allen Dimensionen in der latenten Repräsentation eines neuronalen Netzes bestraft, um die latente Repräsentation zu entflechten."]} +{"source": "This report introduces a training and recognition scheme, in which classification is realized via class-wise discerning. Trained with datasets whose labels are randomly shuffled except for one class of interest, a neural network learns class-wise parameter values, and remolds itself from a feature sorter into feature filters, each of which discerns objects belonging to one of the classes only. Classification of an input can be inferred from the maximum response of the filters. A multiple check with multiple versions of filters can diminish fluctuation and yields better performance. This scheme of discerning, maximum response and multiple check is a method of general viability to improve performance of feedforward networks, and the filter training itself is a promising feature abstraction procedure. In contrast to the direct sorting, the scheme mimics the classification process mediated by a series of one component picking.", "target": ["Das vorgeschlagene System ahmt den Klassifizierungsprozess nach, der durch eine Reihe von Ein-Komponenten Pickings vermittelt wird.", "Eine Methode zur Erhöhung der Genauigkeit von Deep-Nets bei Mehrklassen-Klassifizierungsaufgaben, die scheinbar durch eine Reduktion von Mehrklassen- auf Binärklassifizierung erfolgt.", "Ein neuartiges Klassifizierungsverfahren mit Unterscheidung, Maximalantwort und Mehrfachprüfung zur Verbesserung der Genauigkeit mittelmäßiger Netzwerke und zur Verbesserung von Feedforward Netzwerken."]} +{"source": "A long-held conventional wisdom states that larger models train more slowly when using gradient descent. This work challenges this widely-held belief, showing that larger models can potentially train faster despite the increasing computational requirements of each training step. 
In particular, we study the effect of network structure (depth and width) on halting time and show that larger models---wider models in particular---take fewer training steps to converge.\n\n We design simple experiments to quantitatively characterize the effect of overparametrization on weight space traversal. Results show that halting time improves when growing model's width for three different applications, and the improvement comes from each factor: The distance from initialized weights to converged weights shrinks with a power-law-like relationship, the average step size grows with a power-law-like relationship, and gradient vectors become more aligned with each other during traversal.\n", "target": ["Die Erfahrung zeigt, dass größere Modelle in weniger Trainingsschritten trainiert werden können, da sich alle Faktoren bei der Durchquerung des Gewichtsraums verbessern.", "Diese Arbeit zeigt, dass breitere RNNs die Konvergenzgeschwindigkeit verbessern, wenn sie auf NLP-Probleme angewandt werden, und dass die Auswirkung der Erhöhung der Breiten in tiefen neuronalen Netzen auf die Konvergenz der Optimierung.", "In dieser Arbeit werden die Auswirkungen einer Überparametrisierung auf die Anzahl der Wiederholungen, die ein Algorithmus benötigt, um zu konvergieren, beschrieben und weitere empirische Beobachtungen zu den Auswirkungen einer Überparametrisierung beim Training neuronaler Netze vorgestellt."]} +{"source": "Due to its potential to improve programmer productivity and software quality, automated program repair has been an active topic of research. Newer techniques harness neural networks to learn directly from examples of buggy programs and their fixes. In this work, we consider a recently identified class of bugs called variable-misuse bugs. The state-of-the-art solution for variable misuse enumerates potential fixes for all possible bug locations in a program, before selecting the best prediction. We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs. We present multi-headed pointer networks for this purpose, with one head each for localization and repair. The experimental results show that the joint model significantly outperforms an enumerative solution that uses a pointer based model for repair alone.", "target": ["Mehrköpfige Zeigernetzwerke für gemeinsames Lernen zur Lokalisierung und Behebung von Fehlern durch Variablenmissbrauch.", "Schlägt ein LSTM basiertes Modell mit Zeigern vor, um das Problem des VarMisuse in mehrere Schritte aufzuteilen.", "In diesem Beitrag wird ein LSTM basiertes Modell zur Fehlererkennung und -behebung des VarMisuse Fehlers vorgestellt, das im Vergleich zu früheren Ansätzen in mehreren Datensätzen deutliche Verbesserungen aufweist."]} +{"source": "Classification and clustering have been studied separately in machine learning and computer vision. Inspired by the recent success of deep learning models in solving various vision problems (e.g., object recognition, semantic segmentation) and the fact that humans serve as the gold standard in assessing clustering algorithms, here, we advocate for a unified treatment of the two problems and suggest that hierarchical frameworks that progressively build complex patterns on top of the simpler ones (e.g., convolutional neural networks) offer a promising solution. We do not dwell much on the learning mechanisms in these frameworks as they are still a matter of debate, with respect to biological constraints. 
Instead, we emphasize on the compositionality of the real world structures and objects. In particular, we show that CNNs, trained end to end using back propagation with noisy labels, are able to cluster data points belonging to several overlapping shapes, and do so much better than the state of the art algorithms. The main takeaway lesson from our study is that mechanisms of human vision, particularly the hierarchal organization of the visual ventral stream should be taken into account in clustering algorithms (e.g., for learning representations in an unsupervised manner or with minimum supervision) to reach human level clustering performance. This, by no means, suggests that other methods do not hold merits. For example, methods relying on pairwise affinities (e.g., spectral clustering) have been very successful in many cases but still fail in some cases (e.g., overlapping clusters).", "target": ["Menschenähnliches Clustering mit CNNs.", "Das Papier bestätigt die Idee, dass tiefe Convolutional Neural Networks lernen könnten, Eingabedaten besser zu clustern als andere Clustering-Methoden, indem sie den Kontext jedes Eingabepunkts aufgrund eines großen Sichtfelds interpretieren können.", "Diese Arbeit kombiniert Deep Learning für die Merkmalsdarstellung mit der Aufgabe der menschenähnlichen, unbeaufsichtigten Gruppierung."]} +{"source": "Instancewise feature scoring is a method for model interpretation, which yields, for each test instance, a vector of importance scores associated with features. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions, but incur an exponential complexity in the number of features. This combinatorial explosion arises from the definition of Shapley value and prevents these methods from being scalable to large data sets and complex models. We focus on settings in which the data have a graph structure, and the contribution of features to the target variable is well-approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring on black-box models. We establish the relationship of our methods to the Shapley value and a closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods using both quantitative metrics and human evaluation.", "target": ["Wir entwickeln zwei Algorithmen mit linearer Komplexität für die modellagnostische Modellinterpretation auf der Grundlage des Shapley-Wertes, wenn der Beitrag der Merkmale zum Ziel durch eine graphisch-strukturierte Faktorisierung gut angenähert ist.", "In der Arbeit werden zwei Annäherungen an den Shapley-Wert vorgeschlagen, der zur Erstellung von Merkmalsbewertungen für die Interpretierbarkeit verwendet wird.", "In dieser Arbeit werden zwei Methoden für die instanzielle Bewertung der Wichtigkeit von Merkmalen unter Verwendung von Shapely-Werten vorgeschlagen und zwei effiziente Methoden zur Berechnung von ungefähren Shapely-Werten bereitgestellt, wenn eine bekannte Struktur zwischen den Merkmalen vorhanden ist."]} +{"source": "According to parallel distributed processing (PDP) theory in psychology, neural networks (NN) learn distributed rather than interpretable localist representations. This view has been held so strongly that few researchers have analysed single units to determine if this assumption is correct. 
However, recent results from psychology, neuroscience and computer science have shown the occasional existence of local codes emerging in artificial and biological neural networks. In this paper, we undertake the first systematic survey of when local codes emerge in a feed-forward neural network, using generated input and output data with known qualities. We find that the number of local codes that emerge from a NN follows a well-defined distribution across the number of hidden layer neurons, with a peak determined by the size of input data, number of examples presented and the sparsity of input data. Using a 1-hot output code drastically decreases the number of local codes on the hidden layer. The number of emergent local codes increases with the percentage of dropout applied to the hidden layer, suggesting that the localist encoding may offer a resilience to noisy networks. This data suggests that localist coding can emerge from feed-forward PDP networks and suggests some of the conditions that may lead to interpretable localist representations in the cortex. The findings highlight how local codes should not be dismissed out of hand.", "target": ["Lokale Codes wurden in neuronalen Feed-Forward Netzen gefunden.", "Eine Methode zur Bestimmung, inwieweit einzelne Neuronen in einer versteckten Schicht eines MLP einen lokalistischen Code kodieren, der für verschiedene Eingabedarstellungen untersucht wird.", "Untersucht die Entwicklung von lokalistischen Darstellungen in den verborgenen Schichten von neuronalen Feed-Forward-Netzen."]} +{"source": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches however primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in relational databases, such as text, images, and numerical values. In our approach, we propose a multimodal embedding using different neural encoders for this variety of data, and combine with existing models to learn embeddings of the entities. We extend existing datasets to create two novel benchmarks, YAGO-10-plus and MovieLens-100k-plus, that contain additional relations such as textual descriptions and images of the original entities. We demonstrate that our model utilizes the additional information effectively to provide further gains in accuracy. Moreover, we test our learned multimodal embeddings by using them to predict missing multimodal attributes.", "target": ["Erweiterung der relationalen Modellierung zur Unterstützung multimodaler Daten unter Verwendung neuronaler Kodierer.", "In diesem Beitrag wird vorgeschlagen, Link-Vorhersagen in Wissensdatenbanken durchzuführen, indem die ursprünglichen Entitäten durch multimodale Informationen ergänzt werden, und es wird ein Modell vorgestellt, das in der Lage ist, bei der Bewertung von Triples alle Arten von Informationen zu kodieren.", "In dem Beitrag geht es um die Einbeziehung von Informationen aus verschiedenen Modalitäten in Ansätzen zur Vorhersage von Verbindungen."]} +{"source": "An ensemble of neural networks is known to be more robust and accurate than an individual network, however usually with linearly-increased cost in both training and testing. \n In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks.\n In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior distribution of network parameters. 
In the second stage, we apply weight-pruning to each sampled network and then perform retraining over the remained connections.\n In this way of learning SSEs with SG-MCMC and pruning, we not only achieve high prediction accuracy since SG-MCMC enhances exploration of the model-parameter space, but also reduce memory and computation cost significantly in both training and testing of NN ensembles.\n This is thoroughly evaluated in the experiments of learning SSE ensembles of both FNNs and LSTMs.\n For example, in LSTM based language modeling (LM), we obtain 21\\% relative reduction in LM perplexity by learning a SSE of 4 large LSTM models, which has only 30\\% of model parameters and 70\\% of computations in total, as compared to the baseline large LSTM LM.\n To the best of our knowledge, this work represents the first methodology and empirical study of integrating SG-MCMC, group sparse prior and network pruning together for learning NN ensembles.", "target": ["Wir schlagen eine neuartige Methode vor, die SG-MCMC-Sampling, Gruppensparsamkeit und Netzwerk-Pruning integriert, um ein Sparse Structured Ensemble (SSE) mit verbesserter Leistung und deutlich geringeren Kosten als traditionelle Methoden zu lernen. ", "Die Autoren schlagen ein Verfahren zur Erzeugung eines Ensembles von spärlich strukturierten Modellen vor.", "Ein neuer Rahmen für das Training von neuronalen Netzwerken, der SG-MCMC-Methoden innerhalb des Deep Learning verwendet und dann die Recheneffizienz durch Gruppensparsamkeit und Pruning erhöht.", "In diesem Beitrag wird die Verwendung von FNN und LSTMs untersucht, um die Durchschnittsbildung von bayesianischen Modellen rechnerisch machbar zu machen und die durchschnittliche Modellleistung zu verbessern."]} +{"source": "This paper introduces a new framework for data efficient and versatile learning. Specifically:\n 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. \n 2) We introduce \\Versa{}, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. \\Versa{} substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training.\n 3) We evaluate \\Versa{} on benchmark datasets where the method sets new state-of-the-art results, and can handle arbitrary number of shots, and for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task.", "target": ["Neuartiger Rahmen für das Meta-Lernen, der eine breite Klasse bestehender Few-Shot Lernmethoden vereinheitlicht und erweitert. Erzielt eine starke Leistung bei Few-Shot Learning Benchmarks, ohne iterative Testzeitinferenz zu benötigen. ", "Diese Arbeit befasst sich mit dem Few-Shot Learning aus der Sicht der probabilistischen Inferenz und erreicht den neuesten Stand der Technik trotz eines einfacheren Aufbaus als viele Wettbewerber."]} +{"source": "In recent years, softmax together with its fast approximations has become the de-facto loss function for deep neural networks with multiclass predictions. 
However, softmax is used in many problems that do not fully fit the multiclass framework and where the softmax assumption of mutually exclusive outcomes can lead to biased results. This is often the case for applications such as language modeling, next event prediction and matrix factorization, where many of the potential outcomes are not mutually exclusive, but are more likely to be independent conditionally on the state. To this end, for the set of problems with positive and unlabeled data, we propose a relaxation of the original softmax formulation, where, given the observed state, each of the outcomes is conditionally independent but shares a common set of negatives. Since we operate in a regime where explicit negatives are missing, we create an adversarially-trained model of negatives and derive a new negative sampling and weighting scheme which we denote as Cooperative Importance Sampling (CIS). We show empirically the advantages of our newly introduced negative sampling scheme by plugging it into the Word2Vec algorithm and benchmarking it extensively against other negative sampling schemes on both language modeling and matrix factorization tasks, and show large lifts in performance.", "target": ["Definition eines sich teilweise gegenseitig ausschließenden Softmax-Verlustes für positive Daten und Implementierung eines kooperativen Stichprobenverfahrens.", "In diesem Beitrag wird Cooperative Importance Sampling vorgestellt, um das Problem zu lösen, dass die sich gegenseitig ausschließende Annahme der traditionellen Softmax verzerrt ist, wenn negative Stichproben nicht explizit definiert sind.", "In dieser Arbeit werden PMES-Methoden vorgeschlagen, um die Annahme des ausschließlichen Ergebnisses bei Softmax-Verlusten zu lockern, und es wird der empirische Nutzen zur Verbesserung von Einbettungsmodellen des Typs word2vec aufgezeigt."]} +{"source": "Over the past few years, various tasks involving videos such as classification, description, summarization and question answering have received a lot of attention. Current models for these tasks compute an encoding of the video by treating it as a sequence of images and going over every image in the sequence, which becomes computationally expensive for longer videos. In this paper, we focus on the task of video classification and aim to reduce the computational cost by using the idea of distillation. Specifically, we propose a Teacher-Student network wherein the teacher looks at all the frames in the video but the student looks at only a small fraction of the frames in the video. The idea is to then train the student to minimize (i) the difference between the final representation computed by the student and the teacher and/or (ii) the difference between the distributions predicted by the teacher and the student. This smaller student network which involves fewer computations but still learns to mimic the teacher can then be employed at inference time for video classification.
We experiment with the YouTube-8M dataset and show that the proposed student network can reduce the inference time by up to 30% with a negligible drop in performance.", "target": ["Lehrer-Schüler Framework für eine effiziente Videoklassifizierung mit weniger Bildern.", "In der Arbeit wird eine Idee vorgeschlagen, aus einem vollständigen Videoklassifizierungsmodell ein kleines Modell zu destillieren, das nur eine geringere Anzahl von Einzelbildern erhält.", "Die Autoren stellen ein Lehrer-Schüler-Netzwerk zur Lösung von Videoklassifizierungsproblemen vor und schlagen serielle und parallele Trainingsalgorithmen vor, um die Rechenkosten zu reduzieren."]} +{"source": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables transferring techniques across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.", "target": ["Eine einheitliche statistische Sicht auf die breite Klasse der tiefen generativen Modelle.", "Die Arbeit entwickelt einen Rahmen, in dem GAN-Algorithmen als eine Form der Variationsinferenz auf einem generativen Modell interpretiert werden, das eine Indikatorvariable rekonstruiert, die angibt, ob eine Stichprobe zu den wahren generativen Datenverteilungen gehört."]} +{"source": "Deep neural networks have demonstrated promising prediction and classification performance on many healthcare applications. However, the interpretability of those models is often lacking. On the other hand, classical interpretable models such as rule lists or decision trees do not lead to the same level of accuracy as deep neural networks and can often be too complex to interpret (due to the potentially large depth of rule lists). In this work, we present PEARL, Prototype lEArning via Rule Lists, which iteratively uses rule lists to guide a neural network to learn representative data prototypes. The resulting prototype neural network provides accurate prediction, and the prediction can be easily explained by the prototype and its guiding rule lists. Thanks to the prediction power of neural networks, the rule lists from prototypes are more concise and hence provide better interpretability. On two real-world electronic healthcare records (EHR) datasets, PEARL consistently outperforms all baselines across both datasets, especially achieving performance improvement over conventional rule learning by up to 28% and over prototype learning by up to 3%.
Experimental results also show that the resulting interpretation of PEARL is simpler than that of standard rule learning.", "target": ["Ein Verfahren, das das Lernen von Regeln und das Lernen von Prototypen kombiniert.", "Es wird ein neues Framework für interpretierbare Vorhersagen vorgestellt, das regelbasiertes Lernen, Prototyp-Lernen und NNs kombiniert und besonders auf longitudinale Daten anwendbar ist.", "Diese Arbeit zielt darauf ab, den Mangel an Interpretierbarkeit von Deep Learning Modellen zu beheben, und schlägt Prototype lEArning via Rule Lists (PEARL) vor, das Regellernen und Prototyplernen kombiniert, um eine genauere Klassifizierung zu erreichen und die Aufgabe der Interpretierbarkeit zu vereinfachen."]} +{"source": "Generative Adversarial Networks (GANs) are powerful tools for realistic image generation. However, a major drawback of GANs is that they are especially hard to train, often requiring large amounts of data and long training time. In this paper we propose the Deli-Fisher GAN, a GAN that generates photo-realistic images by enforcing structure on the latent generative space using an approach similar to \\cite{deligan}. The structure of the latent space we consider in this paper is modeled as a mixture of Gaussians, whose parameters are learned in the training process. Furthermore, to improve stability and efficiency, we use the Fisher Integral Probability Metric as the divergence measure in our GAN model, instead of the Jensen-Shannon divergence. We show by experiments that the Deli-Fisher GAN performs better than DCGAN, WGAN, and the Fisher GAN as measured by inception score.", "target": ["In diesem Beitrag wird ein neues Generatives Adversariales Netzwerk vorgeschlagen, das stabiler und effizienter ist und bessere Bilder erzeugt als die bisherigen.", "Diese Arbeit kombiniert Fisher-GAN und Deli-GAN.", "Diese Arbeit kombiniert Deli-GAN, das eine gemischte A-priori-Verteilung im latenten Raum hat, und Fisher GAN, das Fisher IPM anstelle von JSD als Ziel verwendet."]} +{"source": "Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attentional mechanisms into neural networks increases the performance of the system substantially. We report on a new modular network architecture that applies an attentional mechanism not on temporal and spatial regions of the input, but on sensor selection for multi-sensor setups. This network, called the sensor transformation attention network (STAN), is evaluated in scenarios which include the presence of natural noise or synthetic dynamic noise. We demonstrate how the attentional signal responds dynamically to changing noise levels and sensor-specific noise, leading to reduced word error rates (WERs) on both audio and visual tasks using TIDIGITS and GRID; and also on CHiME-3, a multi-microphone real-world noisy dataset. The improvement grows as more channels are corrupted as demonstrated on the CHiME-3 dataset. 
Moreover, the proposed STAN architecture naturally introduces a number of advantages including ease of removing sensors from existing architectures, attentional interpretability, and increased robustness to a variety of noise environments.", "target": ["Wir stellen eine modulare Multisensornetzwerk-Architektur mit einem Aufmerksamkeitsmechanismus vor, der eine dynamische Sensorauswahl auf realen, verrauschten Daten von CHiME-3 ermöglicht.", "Eine generische neuronale Architektur, die in der Lage ist, die Aufmerksamkeit zu erlernen, die den verschiedenen Eingangskanälen in Abhängigkeit von der relativen Qualität der einzelnen Sensoren im Vergleich zu den anderen zukommen muss.", "Betrachtet die Verwendung von Aufmerksamkeit für die Sensor- oder Kanalauswahl mit Ergebnissen zu TIDIGITS und GRID, die einen Vorteil von Aufmerksamkeit gegenüber der Zusammenstellung von Merkmalen zeigen."]} +{"source": "Massive data exist among user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while protecting the data privacy simultaneously, we propose to leverage the intermediate representations of the data, which is achieved by splitting the DNNs and deploying them separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained based on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependency of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on the characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under the constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated with CIFAR-10 dataset.", "target": ["Um ein Cloud-basiertes DNN-Training zu ermöglichen und gleichzeitig den Datenschutz zu gewährleisten, schlagen wir vor, die Zwischendaten-Darstellungen zu nutzen, indem die DNNs aufgeteilt und separat auf lokalen Plattformen und in der Cloud bereitgestellt werden.", "In diesem Beitrag wird eine Technik zur Privatisierung von Daten durch das Erlernen einer Merkmalsrepräsentation vorgeschlagen, die für die Bildrekonstruktion schwierig, aber für die Bildklassifizierung hilfreich ist."]} +{"source": "Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. 
We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN.", "target": ["Bedingte rekurrente GANs für die Generierung reellwertiger medizinischer Sequenzen, mit neuartigen Bewertungsansätzen und einer empirischen Datenschutzanalyse.", "Schlägt vor, synthetische Daten, die von GANs generiert werden, als Ersatz für persönlich identifizierbare Daten beim Training von ML-Modellen für datenschutzsensible Anwendungen zu verwenden.", "Die Autoren schlagen eine neuartige rekurrente GAN-Architektur vor, die kontinuierliche Domänensequenzen erzeugt, und evaluieren sie anhand mehrerer synthetischer Aufgaben und einer Aufgabe mit ICU-Zeitreihen.", "Schlägt vor, RGANs und RCGANs zu verwenden, um synthetische Sequenzen aus aktuellen Daten zu erzeugen."]} +{"source": "Emphasis effects – visual changes that make certain elements more\n prominent – are commonly used in information visualization to draw\n the user’s attention or to indicate importance. Although theoretical\n frameworks of emphasis exist (that link visually diverse emphasis\n effects through the idea of visual prominence compared to background\n elements), most metrics for predicting how emphasis effects\n will be perceived by users come from abstract models of human\n vision which may not apply to visualization design. In particular,\n it is difficult for designers to know, when designing a visualization,\n how different emphasis effects will compare and what level of one\n effect is equivalent to what level of another. To address this gap,\n we carried out two studies that provide empirical evidence about\n how users perceive different emphasis effects, using three visual\n variables (colour, size, and blur/focus) and eight strength levels.\n Results from gaze tracking, mouse clicks, and subjective responses\n show that there are significant differences between visual variables\n and between levels, and allow us to develop an initial understanding\n of perceptual equivalence. We developed a model from the data in\n our first study, and used it to predict the results in the second; the\n model was accurate, with high correlations between predictions and\n real values. 
Our studies and empirical models provide valuable new\n information for designers who want to understand and control how\n emphasis effects will be perceived by users.", "target": ["Unsere Studien und empirischen Modelle liefern wertvolle neue Informationen für Designer, die verstehen und kontrollieren wollen, wie Betonungseffekte von Nutzern wahrgenommen werden.", "In diesem Beitrag wird untersucht, welche visuellen Hervorhebungen bei der Datenvisualisierung schneller wahrgenommen werden und wie verschiedene Hervorhebungsmethoden im Vergleich zueinander abschneiden.", "Zwei Studien über die Wirksamkeit von Betonungseffekten, von denen eine die Niveaus nützlicher Unterschiede bewertet, und eine weitere, die für eine ökologisch validere Untersuchung tatsächlich unterschiedliche Visualisierungen verwendet."]} +{"source": "Memory Network based models have shown a remarkable progress on the task of relational reasoning.\n Recently, a simpler yet powerful neural network module called Relation Network (RN) has been introduced. \n Despite its architectural simplicity, the time complexity of relation network grows quadratically with data, hence limiting its application to tasks with a large-scaled memory.\n We introduce Related Memory Network, an end-to-end neural network architecture exploiting both memory network and relation network structures. \n We follow memory network's four components while each component operates similar to the relation network without taking a pair of objects. \n As a result, our model is as simple as RN but the computational complexity is reduced to linear time.\n It achieves the state-of-the-art results in jointly trained bAbI-10k story-based question answering and bAbI dialog dataset.", "target": ["Eine einfache, auf dem Gedächtnisnetzwerk (MemNN) und dem Beziehungsnetzwerk (RN) basierende Argumentationsarchitektur, die die Zeitkomplexität im Vergleich zum RN reduziert und ein State-of-the-Art-Ergebnis für die auf bAbI-Geschichten basierende QA und den bAbI-Dialog erzielt.", "Einführung des Related Memory Network (RMN), einer Verbesserung gegenüber dem Relationship Network (RN)."]} +{"source": "We investigate in this paper the architecture of deep convolutional networks. Building on existing state of the art models, we propose a reconfiguration of the model parameters into several parallel branches at the global network level, with each branch being a standalone CNN. We show that this arrangement is an efficient way to significantly reduce the number of parameters while at the same time improving the performance. The use of branches brings an additional form of regularization. In addition to splitting the parameters into parallel branches, we propose a tighter coupling of these branches by averaging their log-probabilities. The tighter coupling favours the learning of better representations, even at the level of the individual branches, as compared to when each branch is trained independently. We refer to this branched architecture as \"coupled ensembles\". The approach is very generic and can be applied with almost any neural network architecture. With coupled ensembles of DenseNet-BC and parameter budget of 25M, we obtain error rates of 2.92%, 15.68% and 1.50% respectively on CIFAR-10, CIFAR-100 and SVHN tasks. For the same parameter budget, DenseNet-BC has an error rate of 3.46%, 17.18%, and 1.8% respectively. 
With ensembles of coupled ensembles of DenseNet-BC networks, with 50M total parameters, we obtain error rates of 2.72%, 15.13% and 1.42% respectively on these tasks.", "target": ["Wir zeigen, dass die Aufteilung eines neuronalen Netzes in parallele Zweige die Leistung verbessert und dass die richtige Kopplung der Zweige die Leistung noch weiter erhöht.", "In der Arbeit wird eine Rekonfiguration des bestehenden CNN-Modells nach dem Stand der Technik vorgeschlagen, die eine neue Verzweigungsarchitektur mit besserer Leistung verwendet.", "Diese Arbeit zeigt die Vorteile der Parametereinsparung durch gekoppeltes Ensembling.", "Stellt eine Deep-Network-Architektur vor, die Daten mit Hilfe mehrerer paralleler Zweige verarbeitet und die Ergebnisse dieser Zweige kombiniert, um die endgültigen Ergebnisse zu berechnen."]} +{"source": "Convolutional Neural Networks (CNN) are very popular in many fields including computer vision, speech recognition, natural language processing, to name a few. Though deep learning leads to groundbreaking performance in these domains, the networks used are very demanding computationally and are far from real-time even on a GPU, which is not power efficient and therefore does not suit low power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerate the runtime significantly. Yet, this acceleration comes at the cost of a larger error. The NICE method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve the accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and 3-bit activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low power real-time applications.", "target": ["Kombination von Störungsinjektion, schrittweiser Quantisierung und Aktivierungsklemmung zur Erzielung modernster 3-, 4- und 5-Bit-Quantisierung.", "Schlägt vor, während des Trainings Störungen zu injizieren und die Parameterwerte in einer Schicht sowie die Aktivierungsausgabe in der Quantisierung des neuronalen Netzes zu klammern.", "Eine Methode zur Quantisierung von tiefen neuronalen Netzen für Klassifizierung und Regression, die Störungsinjektion, Clamping mit gelernten maximalen Aktivierungen und graduelle Blockquantisierung verwendet, um gleichwertige oder bessere Leistungen als die modernsten Methoden zu erzielen."]} +{"source": "In complex transfer learning scenarios new tasks might not be tightly linked to previous tasks. Approaches that transfer information contained only in the final parameters of a source model will therefore struggle. Instead, transfer learning at a higher level of abstraction is needed. We propose Leap, a framework that achieves this by transferring knowledge across learning processes. We associate each task with a manifold on which the training process travels from initialization to final parameters and construct a meta-learning objective that minimizes the expected length of this path. Our framework leverages only information obtained during training and can be computed on the fly at negligible cost. We demonstrate that our framework outperforms competing methods, both in meta-learning and transfer learning, on a set of computer vision tasks. 
Finally, we demonstrate that Leap can transfer knowledge across learning processes in demanding reinforcement learning environments (Atari) that involve millions of gradient steps.", "target": ["Wir schlagen Leap vor, ein System, das Wissen über Lernprozesse hinweg überträgt, indem es die erwartete Distanz minimiert, die der Trainingsprozess auf der Verlustfläche einer Aufgabe zurücklegt.", "Der Artikel schlägt ein neuartiges Meta-Lernziel vor, das darauf abzielt, bei der Bewältigung von Aufgabensammlungen, die eine beträchtliche Vielfalt zwischen den einzelnen Aufgaben aufweisen, die modernsten Ansätze zu übertreffen."]} +{"source": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures and converge in fewer cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.", "target": ["Eine Alternative zum Transfer-Lernen, die schneller lernt, viel weniger Parameter benötigt (3-13 %), in der Regel bessere Ergebnisse erzielt und die Leistung bei alten Aufgaben genau beibehält.", "Controller Module für inkrementelles Lernen auf Bildklassifizierungsdatensätzen."]} +{"source": "High throughput and low latency inference of deep neural networks are critical for the deployment of deep learning applications. This paper presents a general technique toward 8-bit low precision inference of convolutional neural networks, including 1) channel-wise scale factors of weights, especially for depthwise convolution, 2) Winograd convolution, and 3) topology-wise 8-bit support. We experiment with the techniques on top of a widely-used deep learning framework. The 8-bit optimized model is automatically generated with a calibration process from the FP32 model without the need for fine-tuning or retraining. We perform a systematic and comprehensive study on 18 widely-used convolutional neural networks and demonstrate the effectiveness of 8-bit low precision inference across a wide range of applications and use cases, including image classification, object detection, image segmentation, and super resolution. We show that the inference throughput\n and latency are improved by 1.6X and 1.5X respectively with minimal (within 0.6%) to no loss in accuracy from the FP32 baseline. We believe the methodology can provide the guidance and reference design of 8-bit low precision inference for other frameworks. 
All the code and models will be publicly available soon.", "target": ["Wir stellen ein allgemeines Verfahren zur 8-Bit Inferenz mit niedriger Präzision für Convolutional Neural Networks vor.", "In diesem Beitrag wird ein System zur automatischen Quantisierung der vortrainierten CNN-Modelle entwickelt."]} +{"source": "Recent approaches have successfully demonstrated the benefits of learning the parameters of shallow networks in hyperbolic space. We extend this line of work by imposing hyperbolic geometry on the embeddings used to compute the ubiquitous attention mechanisms for different neural network architectures. By only changing the geometry of the embedding of object representations, we can use the embedding space more efficiently without increasing the number of parameters of the model. Mainly as the number of objects grows exponentially for any semantic distance from the query, hyperbolic geometry --as opposed to Euclidean geometry-- can encode those objects without having any interference. Our method shows improvements in generalization on neural machine translation on WMT'14 (English to German), learning on graphs (both on synthetic and real-world graph tasks) and visual question answering (CLEVR) tasks while keeping the neural representations compact.", "target": ["Wir schlagen vor, induktive Verzerrungen und Operationen aus der hyperbolischen Geometrie einzubeziehen, um den Aufmerksamkeitsmechanismus der neuronalen Netze zu verbessern.", "In dieser Arbeit wird die in den Aufmerksamkeitsmechanismen verwendete Punktproduktähnlichkeit durch die negative hyperbolische Distanz ersetzt und auf das bestehende Transformer-Modell, Graph-Attention-Networks und Relation-Networks angewendet.", "Die Autoren schlagen einen neuartigen Ansatz zur Verbesserung der relationalen Aufmerksamkeit vor, indem sie die Anpassungs- und Aggregationsfunktionen so ändern, dass sie hyperbolische Geometrie verwenden."]} +{"source": "We present a method for evaluating the sensitivity of deep reinforcement learning (RL) policies. We also formulate a zero-sum dynamic game for designing robust deep reinforcement learning policies. Our approach mitigates the brittleness of policies when agents are trained in a simulated environment and are later exposed to the real world where it is hazardous to employ RL policies. This framework for training deep RL policies involves a zero-sum dynamic game against an adversarial agent, where the goal is to drive the system dynamics to a saddle region. Using a variant of the guided policy search algorithm, our agent learns to adopt robust policies that require fewer samples for learning the dynamics, and it performs better than the GPS algorithm. 
Without loss of generality, we demonstrate that deep RL policies trained in this fashion will be maximally robust to the ``worst'' possible adversarial disturbances.", "target": ["In diesem Beitrag wird gezeigt, wie die H-Infinity-Control-Theorie dazu beitragen kann, robuste, tiefe Strategien für Robotermotoren zu entwickeln.", "Es wird vorgeschlagen, Elemente der robusten Kontrolle in die Forschung zu gelenkten Strategien einzubeziehen, um eine Methode zu entwickeln, die gegenüber Störungen und Modellfehlanpassungen widerstandsfähig ist.", "Die Arbeit stellt eine Methode zur Bewertung der Sensitivität und Robustheit von Deep RL Strategien vor und schlägt einen dynamischen Spielansatz zum Erlernen robuster Strategien vor."]} +{"source": "Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.", "target": ["Analyse der Anfälligkeit von Klassifikatoren für universelle Störungen und Beziehung zur Krümmung der Entscheidungsgrenze.", "Die Arbeit bietet eine interessante Analyse, die die Geometrie der Entscheidungsgrenzen von Klassifizierern mit kleinen universellen Störungen in Verbindung bringt.", "In diesem Beitrag werden universelle Störungen erörtert - Störungen, die einen trainierten Klassifikator in die Irre führen können, wenn sie den meisten Eingabedatenpunkten hinzugefügt werden.", "Es werden Modelle entwickelt, die versuchen, die Existenz universeller Störungen zu erklären, die neuronale Netze täuschen."]} +{"source": "Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. 
Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.", "target": ["Wir schlagen eine Meta-Lernmethode für die interaktive Korrektur von Richtlinien mit natürlicher Sprache vor.", "Diese Arbeit bietet ein Meta-Learning-Framework, das zeigt, wie neue Aufgaben in einer interaktiven Umgebung gelernt werden können. Jede Aufgabe wird durch eine Reinforcement Learning Umgebung gelernt, und dann wird die Aufgabe durch die Beobachtung neuer Anweisungen aktualisiert.", "In diesem Beitrag wird Agenten beigebracht, Aufgaben mittels natürlichsprachlicher Anweisungen in einem iterativen Prozess zu erledigen."]} +{"source": "Deep generative models such as Generative Adversarial Networks (GANs) and\n Variational Auto-Encoders (VAEs) are important tools to capture and investigate\n the properties of complex empirical data. However, the complexity of their inner\n elements makes their functioning challenging to assess and modify. In this\n respect, these architectures behave as black box models. In order to better\n understand the function of such networks, we analyze their modularity based on\n the counterfactual manipulation of their internal variables. Our experiments on the\n generation of human faces with VAEs and GANs support that modularity between\n activation maps distributed over channels of generator architectures is achieved\n to some degree and can be used to better understand how these systems operate and to allow meaningful transformations of the generated images without further training.", "target": ["Wir untersuchen die Modularität von tiefen generativen Modellen.", "Die Arbeit bietet eine Möglichkeit, die modulare Struktur des tiefen generativen Modells zu untersuchen, mit dem Schlüsselkonzept, über Kanäle von Generatorarchitekturen zu verteilen."]} +{"source": "Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. 
By applying policy-based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.", "target": ["Wir stellen Seq2SQL vor, das Fragen in SQL-Abfragen übersetzt, indem es Belohnungen aus der Online-Abfrageausführung nutzt, und WikiSQL, einen SQL-Tabellen-/Fragen-/Abfragedatensatz, der um Größenordnungen größer ist als bestehende Datensätze.", "Ein neuer semantischer Parsing-Datensatz, der sich auf die Generierung von SQL aus natürlicher Sprache unter Verwendung eines auf Reinforcement Learning basierenden Modells konzentriert."]} +{"source": "We introduce Explainable Adversarial Learning, ExL, an approach for training neural networks that are intrinsically robust to adversarial attacks. We find that the implicit generative modeling of random noise with the same loss function used during posterior maximization improves a model's understanding of the data manifold, furthering adversarial robustness. We prove our approach's efficacy and provide a simplistic visualization tool for understanding adversarial data, using Principal Component Analysis. Our analysis reveals that adversarial robustness, in general, manifests in models with higher variance along the high-ranked principal components. We show that models learnt with our approach perform remarkably well against a wide range of attacks. Furthermore, combining ExL with state-of-the-art adversarial training extends the robustness of a model, even beyond what it is adversarially trained for, in both white-box and black-box attack scenarios.", "target": ["Die Modellierung von Störungen am Eingang während des diskriminierenden Trainings verbessert die Robustheit gegenüber nachteiligen Einflüssen. Vorschlag einer PCA-basierten Bewertungsmetrik für die adversarial Robustheit.", "Diese Arbeit schlägt ExL vor, eine adversarische Trainingsmethode, die multiplikative Störungen verwendet und sich als hilfreich bei der Abwehr von Blackbox-Angriffen auf drei Datensätze erweist.", "In diesem Beitrag werden multiplikative Störungen N in die Trainingsdaten aufgenommen, um adversarial Robustheit zu erreichen, wenn sowohl auf den Modellparametern theta als auch auf den Störungen selbst trainiert wird."]} +{"source": "We propose a method which can visually explain the classification decision of deep neural networks (DNNs). There are many proposed methods in machine learning and computer vision seeking to clarify the decision of machine learning black boxes, specifically DNNs. All of these methods try to gain insight into why the network \"chose class A\" as an answer. Humans, when searching for explanations, ask two types of questions. The first question is, \"Why did you choose this answer?\" The second question asks, \"Why did you not choose answer B over A?\" The previously proposed methods are either not able to provide the latter directly or efficiently.\n\n We introduce a method capable of answering the second question both directly and efficiently. In this work, we limit the inputs to be images. In general, the proposed method generates explanations in the input space of any model capable of efficient evaluation and gradient evaluation. 
We provide results showing the superiority of this approach for gaining insight into the inner representation of machine learning models.", "target": ["Eine Methode zur Beantwortung der Frage \"Warum nicht Klasse B?\" zur Erklärung von tiefen Netzen.", "Die Arbeit schlägt einen Ansatz vor, der kontrastive visuelle Erklärungen für tiefe neuronale Netze liefert."]} +{"source": "We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping. We provide theoretical and numerical results on the inverse of ReLU-layers. First, we derive a necessary and sufficient condition on the existence of invariance that provides a geometric interpretation. Next, we move to robustness via analyzing local effects on the inverse. To conclude, we show how this reverse point of view not only provides insights into key effects, but also enables viewing adversarial examples from different perspectives.", "target": ["Wir analysieren die Invertierbarkeit von tiefen neuronalen Netzen, indem wir die Präimages von ReLU-Schichten und die Stabilität der Inverse untersuchen.", "Diese Arbeit untersucht das Volumen der Vorabbildung der Aktivierung eines ReLU-Netzwerks in einer bestimmten Schicht und baut auf der stückweisen Linearität der Forward-Funktion eines ReLU-Netzwerks auf.", "Diese Arbeit stellt eine Analyse der inversen Invarianz von ReLU-Netzen vor und liefert obere Schranken für singuläre Werte eines trainierten Netzes."]} +{"source": "While deep learning has led to remarkable results on a number of challenging problems, researchers have discovered a vulnerability of neural networks in adversarial settings, where small but carefully chosen perturbations to the input can make the models produce extremely inaccurate outputs. This makes these models particularly unsuitable for safety-critical application domains (e.g. self-driving cars) where robustness is extremely important. Recent work has shown that augmenting training with adversarially generated data provides some degree of robustness against test-time attacks. In this paper we investigate how this approach scales as we increase the computational budget given to the defender. We show that increasing the number of parameters in adversarially-trained models increases their robustness, and in particular that ensembling smaller models while adversarially training the entire ensemble as a single model is a more efficient way of spending said budget than simply using a larger single model. Crucially, we show that it is the adversarial training of the ensemble, rather than the ensembling of adversarially trained models, which provides robustness.", "target": ["Das adversarial Training von Ensembles bietet eine Robustheit gegenüber adversarial Beispielen, die über die von gegnerisch trainierten Modellen und unabhängig trainierten Ensembles beobachtete Robustheit hinausgeht.", "Es wird vorgeschlagen, ein Ensemble von Modellen gemeinsam zu trainieren, wobei in jedem Zeitschritt eine Reihe von Beispielen, die für das Ensemble selbst adversarial sind, in das Lernen einbezogen wird."]} +{"source": "Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. 
A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.\n", "target": ["Routing-Netzwerke: eine neue Art von neuronalem Netzwerk, das lernt, seine Eingaben adaptiv zu routen, um Multi-Task-Lernen zu ermöglichen.", "In dem Beitrag wird vorgeschlagen, ein modulares Netz mit einem Controller zu verwenden, der bei jedem Zeitschritt Entscheidungen über den nächsten zu verwendenden Knoten trifft.", "Die Arbeit stellt eine neuartige Formulierung für das Lernen der optimalen Architektur eines neuronalen Netzes in einem Multi-Task-Learning-Rahmen vor, indem es Multi-Agenten-Reinforcement Learning verwendet, um eine Strategie zu finden, und zeigt eine Verbesserung gegenüber fest kodierten Architekturen mit geteilten Schichten."]} +{"source": "We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of $L_0$ regularization. However, since the $L_0$ norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected $L_0$ regularized objective is differentiable with respect to the distribution parameters. We further propose the \\emph{hard concrete} distribution for the gates, which is obtained by ``stretching'' a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. 
We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.", "target": ["Wir zeigen, wie man die erwartete L_0-Norm parametrischer Modelle mit Gradientenabstieg optimieren kann, und führen eine neue Verteilung ein, die das Hard Gating erleichtert.", "Die Autoren stellen einen gradientenbasierten Ansatz zur Minimierung einer Zielfunktion mit einem L0 Sparse Penalty vor, um das Erlernen spärlicher neuronaler Netze zu unterstützen."]} +{"source": "Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model that removes all the intermediate fully-connected layers is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where the number of labeled examples is small. This in turn allows room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation network datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other.", "target": ["Wir schlagen eine neuartige, auf Aufmerksamkeit basierende, interpretierbare Graph Neural Network-Architektur vor, die den aktuellen Stand der Technik in Standard-Benchmark-Datensätzen übertrifft.", "Die Autoren schlagen zwei Erweiterungen von GCNs vor, indem sie zwischengeschaltete Nichtlinearitäten aus der GCN-Berechnung entfernen und einen Aufmerksamkeitsmechanismus in der Aggregationsschicht hinzufügen.", "Die Arbeit schlägt einen halbüberwachten Lernalgorithmus für die Klassifizierung von Graphknoten vor, der von Graph Neural Networks inspiriert ist."]} +{"source": "Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, perceptual generative autoencoder (PGA), is then incorporated with a maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGA generalizes the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. 
When combined with VAE, PGA can generate sharper samples than vanilla VAE.", "target": ["Ein Framework für das Training von generativen Modellen auf der Basis von Autoencodern, mit nicht-adversen Verlusten und uneingeschränkten neuronalen Netzwerkarchitekturen.", "In diesem Beitrag werden Autoencoder verwendet, um eine Verteilungsanpassung im hochdimensionalen Raum vorzunehmen."]} +{"source": "The quality of the representations achieved by embeddings is determined by how well the geometry of the embedding space matches the structure of the data.\n Euclidean space has been the workhorse for embeddings; recently hyperbolic and spherical spaces have gained popularity due to their ability to better embed new types of structured data---such as hierarchical data---but most data is not structured so uniformly.\n We address this problem by proposing learning embeddings in a product manifold combining multiple copies of these model spaces (spherical, hyperbolic, Euclidean), providing a space of heterogeneous curvature suitable for a wide variety of structures.\n We introduce a heuristic to estimate the sectional curvature of graph data and directly determine an appropriate signature---the number of component spaces and their dimensions---of the product manifold.\n Empirically, we jointly learn the curvature and the embedding in the product space via Riemannian optimization.\n We discuss how to define and compute intrinsic quantities such as means---a challenging notion for product manifolds---and provably learnable optimization functions.\n On a range of datasets and reconstruction tasks, our product space embeddings outperform single Euclidean or hyperbolic spaces used in previous works, reducing distortion by 32.55% on a Facebook social network dataset. We learn word embeddings and find that a product of hyperbolic spaces in 50 dimensions consistently improves on baseline Euclidean and hyperbolic embeddings, by 2.6\n points in Spearman rank correlation on similarity tasks\n and 3.4 points on analogy accuracy.\n", "target": ["Einbettungsräume aus Produktmannigfaltigkeiten mit heterogener Krümmung liefern im Vergleich zu traditionellen Einbettungsräumen verbesserte Darstellungen für eine Vielzahl von Strukturen.", "Schlägt eine Methode zur Dimensionalitätsreduktion vor, die Daten in eine Produktmannigfaltigkeit von sphärischen, euklidischen und hyperbolischen Mannigfaltigkeiten einbettet. Der Algorithmus basiert auf dem Abgleich der geodätischen Abstände auf der Produktmannigfaltigkeit mit Graphenabständen."]} +{"source": "Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet\n manipulation, data wrangling and code refactoring. Existing synthesis systems\n either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in\n general fail to provide real-time synthesis on challenging benchmarks. In this work,\n we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique\n that combines the best of both symbolic logic techniques and statistical models.\n Thus, it produces programs that satisfy the provided specifications by construction\n and generalize well on unseen examples, similar to data-driven systems. Our\n technique effectively utilizes the deductive search framework to reduce the learning\n problem of the neural component to a simple supervised learning setup. 
Further,\n this allows us to both train on sparingly available real-world data and still leverage\n powerful recurrent neural network encoders. We demonstrate the effectiveness\n of our method by evaluating it on real-world customer scenarios, synthesizing\n accurate programs with up to 12× speed-up compared to state-of-the-art systems.", "target": ["Wir integrieren symbolische (deduktive) und statistische (neural-basierte) Methoden, um eine Programmsynthese in Echtzeit mit nahezu perfekter Generalisierung von einem Eingabe-Ausgabe-Beispiel zu ermöglichen.", "Die Arbeit präsentiert einen Branch-and-Bound-Ansatz zum Erlernen guter Programme, bei dem ein LSTM verwendet wird, um vorherzusagen, welche Zweige im Suchbaum zu guten Programmen führen sollten.", "Schlägt ein System vor, das aus einem einzigen Beispiel Programme synthetisiert, die besser verallgemeinert werden können als der bisherige Stand der Technik."]} +{"source": "Variational auto-encoders\n (VAEs) offer a tractable approach when performing approximate inference in otherwise intractable generative models. However, standard VAEs often produce latent codes that are dispersed and lack interpretability, thus making the resulting representations unsuitable for auxiliary tasks (e.g. classification) and human interpretation. We address these issues by merging ideas from variational auto-encoders and sparse coding, and propose to explicitly model sparsity in the latent space of a VAE with a Spike and Slab prior distribution. We derive the evidence lower bound using a discrete mixture recognition function, thereby making approximate posterior inference as computationally efficient as in the standard VAE case. With the new approach, we are able to infer truly sparse representations with generally intractable non-linear probabilistic models. We show that these sparse representations are advantageous over standard VAE representations on two benchmark classification tasks (MNIST and Fashion-MNIST) by demonstrating improved classification accuracy and significantly increased robustness to the number of latent dimensions. Furthermore, we demonstrate qualitatively that the sparse elements capture subjectively understandable sources of variation.", "target": ["Wir untersuchen die Überschneidung von VAEs und Sparse Coding.", "Diese Arbeit schlägt eine Erweiterung von VAEs mit spärlichen Prioren und Posterioren vor, um spärliche interpretierbare Repräsentationen zu lernen."]} +{"source": "A widely observed phenomenon in deep learning is the degradation problem: increasing\n the depth of a network leads to a decrease in performance on both test and training data. Novel architectures such as ResNets and Highway networks have addressed this issue by introducing various flavors of skip-connections or gating mechanisms. However, the degradation problem persists in the context of plain feed-forward networks. In this work we propose a simple method to address this issue. The proposed method poses the learning of weights in deep networks as a constrained optimization problem where the presence of skip-connections is penalized by Lagrange multipliers. This allows for skip-connections to be introduced during the early stages of training and subsequently phased out in a principled manner. 
We demonstrate the benefits of such an approach with experiments on MNIST, fashion-MNIST, CIFAR-10 and CIFAR-100, where the proposed method is shown to greatly decrease the degradation effect (compared to plain networks) and is often competitive with ResNets.", "target": ["Die prinzipielle Einstellung von Skip-Verbindungen verhindert eine Verschlechterung in tiefen Feed-Forward-Netzen.", "Die Autoren stellen eine neue Trainingsstrategie, VAN, für das Training sehr tiefer Feed-Forward-Netze ohne Skip-Verbindungen vor.", "Die Arbeit stellt eine Architektur vor, die linear zwischen ResNets und Vanilla Deep Nets ohne Skip-Verbindungen interpoliert."]} +{"source": "Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large amounts of memory and energy, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing the {\\em Weight Reduction Quantisation} method for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Besides, by applying sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradient (SVRG) optimization on non-convex problems. Our method, mini-batch SVRG with $\\ell_1$ regularization on non-convex problems, has faster and smoother convergence than SGD by using adaptive learning rates. Experimental evaluation of our approach uses the MNIST and CIFAR-10 datasets on LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60$\\times$ without affecting their test accuracy.", "target": ["Komprimierung von tiefen neuronalen Netzen, die auf eingebetteten Geräten eingesetzt werden.", "Die Autoren stellen einen l-1 regularisierten SVRG-basierten Trainingsalgorithmus vor, der in der Lage ist, viele Gewichte des Netzwerks auf 0 zu setzen.", "Diese Arbeit reduziert den Speicherbedarf."]} +{"source": "It has been argued that the brain is a prediction machine that continuously learns how to make better predictions about the stimuli received from the external environment. For this purpose, it builds a model of the world around us and uses this model to infer the external stimulus. Predictive coding has been proposed as a mechanism through which the brain might be able to build such a model of the external environment. However, it is not clear how predictive coding can be used to build deep neural network models of the brain while complying with the architectural constraints imposed by the brain. In this paper, we describe an algorithm to build a deep generative model using predictive coding that can be used to infer latent representations about the stimuli received from the external environment. Specifically, we used predictive coding to train a deep neural network on real-world images in an unsupervised learning paradigm. To understand the capacity of the network with regards to modeling the external environment, we studied the latent representations generated by the model on images of objects that are never presented to the model during training. Despite the novel features of these objects, the model is able to infer the latent representations for them. 
Furthermore, the reconstructions of the original images obtained from these latent representations preserve the important details of these objects.", "target": ["Ein auf prädiktiver Kodierung basierender Lernalgorithmus für den Aufbau tiefer neuronaler Netzmodelle des Gehirns.", "Der Beitrag befasst sich mit dem Lernen eines generativen neuronalen Netzes unter Verwendung eines prädiktiven Kodierungssystems."]} +{"source": "In this paper, we propose deep convolutional generative adversarial networks (DCGAN) that learn to produce a 'mental image' of the input image as an internal representation of a certain category of input data distribution. This mental image is what the DCGAN 'imagines' that the input image might look like under ideal conditions. The mental image contains a version of the input that is iconic, without any peculiarities that do not contribute to the ideal representation of the input data distribution within a category. A DCGAN learns this association by training an encoder to capture salient features from the original image and a decoder to convert salient features into its associated mental image representation. Our new approach, which we refer to as a Mental Image DCGAN (MIDCGAN), learns features that are useful for recognizing entire classes of objects, and this in turn has the benefit of helping single and zero shot recognition. We demonstrate our approach on object instance recognition and handwritten digit recognition tasks.", "target": ["Die Erkennung von Objektinstanzen mit adversarial Autoencodern wurde mit einem neuartigen \"mentalen Bild\" durchgeführt, das eine kanonische Repräsentation des Eingangsbildes darstellt.", "In dem Beitrag wird eine Methode zum Erlernen von Merkmalen für die Objekterkennung vorgeschlagen, die gegenüber verschiedenen Transformationen des Objekts, insbesondere der Objektposition, invariant ist.", "Diese Arbeit untersuchte die Aufgabe der Few-Shot Erkennung mittels eines generierten mentalen Bildes als Zwischendarstellung des Eingangsbildes."]} +{"source": "An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large number of interactions with the environment in order to master a skill. The learned skill usually generalizes poorly across domains and re-training is often necessary when presented with a new task. We present a framework that combines techniques in \\textit{formal methods} with \\textit{hierarchical reinforcement learning} (HRL). The set of techniques we provide allows for the convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards using any RL methods, and is able to construct new skills from existing ones without additional learning. We evaluate the proposed methods in a simple grid world simulation as well as in simulation on a Baxter robot.", "target": ["Kombination von zeitlicher Logik mit hierarchischem Reinforcement Learning für die Komposition von Fähigkeiten.", "Die Arbeit bietet eine Strategie zur Konstruktion eines Produkt-MDPs aus einem ursprünglichen MDP und dem mit einer LTL-Formel verbundenen Automaten.", "Es wird vorgeschlagen, zeitliche Logik mit hierarchischem Reinforcement Learning zu verbinden, um die Zusammensetzung von Fähigkeiten zu vereinfachen."]} +{"source": "The tremendous memory and computational complexity of Convolutional Neural Networks (CNNs) prevents the inference deployment on resource-constrained systems. 
As a result, recent research focused on CNN optimization techniques, in particular quantization, which allows weights and activations of layers to be represented with just a few bits while achieving impressive prediction performance. However, aggressive quantization techniques still fail to achieve full-precision prediction performance on state-of-the-art CNN architectures on large-scale classification tasks. In this work we propose a method for weight and activation quantization that is scalable in terms of quantization levels (n-ary representations) and easy to compute while maintaining the performance close to full-precision CNNs. Our weight quantization scheme is based on trainable scaling factors and a nested-means clustering strategy which is robust to weight updates and therefore exhibits good convergence properties. The flexibility of nested-means clustering enables exploration of various n-ary weight representations with the potential of high parameter compression. For activations, we propose a linear quantization strategy that takes the statistical properties of batch normalization into account. We demonstrate the effectiveness of our approach using state-of-the-art models on ImageNet.", "target": ["Wir schlagen ein Quantisierungsschema für Gewichte und Aktivierungen von tiefen neuronalen Netzen vor. Dadurch wird der Speicherbedarf erheblich reduziert und die Inferenz beschleunigt.", "CNN-Modellkomprimierung und Beschleunigung der Inferenz durch Quantisierung."]} +{"source": "Reinforcement learning (RL) agents optimize only the features specified in a reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what to do, but also the much larger space of what not to do. It is easy to forget these preferences, since these preferences are already satisfied in our environment. This motivates our key insight: when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized. Our code can be found at https://github.com/HumanCompatibleAI/rlsp.", "target": ["Wenn ein Roboter in einer Umgebung eingesetzt wird, in der Menschen gehandelt haben, ist der Zustand der Umgebung bereits für die Wünsche der Menschen optimiert, und wir können dies nutzen, um auf menschliche Präferenzen zu schließen.", "Die Autoren schlagen vor, die explizit angegebene Belohnungsfunktion eines RL-Agenten durch zusätzliche Belohnungen/Kosten zu ergänzen, die aus dem Anfangszustand und einem Modell der Zustandsdynamik abgeleitet werden.", "In dieser Arbeit wird ein Weg vorgeschlagen, die implizite Information im Ausgangszustand mit Hilfe von IRL abzuleiten und die abgeleitete Belohnung mit einer vorgegebenen Belohnung zu kombinieren."]} +{"source": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a novel, systematic, unifying taxonomy to categorize existing methods. 
We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We identify the atomic building blocks of existing methods, and decouple the assumptions they enforce from the mathematical tools they rely on. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.", "target": ["Systematische Kategorisierung von Regularisierungsmethoden für Deep Learning, die ihre Gemeinsamkeiten aufzeigt.", "Versuch, eine Taxonomie für Regularisierungstechniken zu erstellen, die beim Deep Learning eingesetzt werden."]} +{"source": "Deep neural networks are surprisingly efficient at solving practical tasks,\n but the theory behind this phenomenon is only starting to catch up with\n the practice. Numerous works show that depth is the key to this efficiency.\n A certain class of deep convolutional networks – namely those that correspond\n to the Hierarchical Tucker (HT) tensor decomposition – has been\n proven to have exponentially higher expressive power than shallow networks.\n I.e. a shallow network of exponential width is required to realize\n the same score function as computed by the deep architecture. In this paper,\n we prove the expressive power theorem (an exponential lower bound on\n the width of the equivalent shallow network) for a class of recurrent neural\n networks – ones that correspond to the Tensor Train (TT) decomposition.\n This means that even processing an image patch by patch with an RNN\n can be exponentially more efficient than a (shallow) convolutional network\n with one hidden layer. Using theoretical results on the relation between\n the tensor decompositions we compare expressive powers of the HT- and\n TT-Networks. We also implement the recurrent TT-Networks and provide\n numerical evidence of their expressivity.", "target": ["Wir beweisen die exponentielle Effizienz von rekurrenten neuronalen Netzen gegenüber flachen Netzen.", "Die Autoren vergleichen die Komplexität von Tensor-Train Netzen mit Netzen, die durch CP-Zerlegung strukturiert sind."]} +{"source": "Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN). In this paper, we propose a novel probabilistic framework for GANs, ProbGAN, which iteratively learns a distribution over generators with a carefully crafted prior. Learning is efficiently triggered by a tailored stochastic gradient Hamiltonian Monte Carlo with a novel gradient approximation to perform Bayesian inference. Our theoretical analysis further reveals that our treatment is the first probabilistic framework that yields an equilibrium where generator distributions are faithful to the data distribution. 
Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments for GANs.", "target": ["Eine neue probabilistische Behandlung für GAN mit theoretischer Garantie.", "In diesem Beitrag wird ein Bayes'sches GAN vorgeschlagen, das theoretische Garantien für die Konvergenz zur realen Verteilung hat und Likelihoods über den Generator und den Diskriminator mit Logarithmen proportional zu den traditionellen GAN-Zielfunktionen setzt."]} +{"source": "In the adversarial-perturbation problem of neural networks, an adversary starts with a neural network model $F$ and a point $\bfx$ that $F$ classifies correctly, and applies a \emph{small perturbation} to $\bfx$ to produce another point $\bfx'$ that $F$ classifies \emph{incorrectly}. In this paper, we propose taking into account \emph{the inherent confidence information} produced by models when studying adversarial perturbations, where a natural measure of ``confidence'' is $\|F(\bfx)\|_\infty$ (i.e., how confident $F$ is about its prediction). Motivated by a thought experiment based on the manifold assumption, we propose a ``goodness property'' of models which states that \emph{confident regions of a good model should be well separated}. We give formalizations of this property and examine existing robust training objectives in view of them. Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies well our formal version of the goodness property, but has a weak control of points that are wrong but with low confidence. However, if Madry et al.'s model is indeed a good solution to their objective, then good and bad points are now distinguishable and we can try to embed uncertain points back to the closest confident region to get (hopefully) correct predictions. We thus propose embedding objectives and algorithms, and perform an empirical study using this method. Our experimental results are encouraging: Madry et al.'s model wrapped with our embedding procedure achieves almost perfect success rate in defending against attacks that the base model fails on, while retaining good generalization behavior.\n", "target": ["Verteidigung gegen adversarial Störungen neuronaler Netze auf der Grundlage der Mannigfaltigkeitsannahme.", "In dem Manuskript werden zwei Zielfunktionen vorgeschlagen, die auf der Mannigfaltigkeitsannahme basieren und als Abwehrmechanismen gegen adversarial Beispiele dienen.", "Verteidigung gegen adversarial Angriffe auf der Grundlage der Annahme, dass natürliche Daten auf einer Mannigfaltigkeit liegen."]} +{"source": "Recently, Neural Architecture Search (NAS) has aroused great interest in both academia and industry; however, it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithms or reinforcement learning as in previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model pruning view of the NAS problem. Specifically, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularizations to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. 
Our method enjoys the advantages of both differentiability and efficiency, and can therefore be directly applied to large datasets like ImageNet. In particular, on the CIFAR-10 dataset, DSO-NAS achieves an average test error of 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.", "target": ["Suche nach einer neuronalen Architektur in einem einzigen Schritt durch direkte spärliche Optimierung.", "Es wird eine Methode zur Architektursuche vorgestellt, bei der Verbindungen mit spärlicher Regularisierung entfernt werden.", "In diesem Beitrag wird die direkte Sparse Optimierung vorgeschlagen, eine Methode, die es ermöglicht, neuronale Architekturen für spezifische Probleme zu einem vernünftigen Rechenaufwand zu erhalten.", "In dieser Arbeit wird eine Suchmethode für neuronale Architekturen vorgeschlagen, die auf einer direkten Sparse Optimierung basiert."]} +{"source": "Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.\n", "target": ["Erzielt modernste Genauigkeit für quantisierte, flache Netze durch Nutzung der Destillation. ", "Vorschläge für kleine und kostengünstige Modelle durch die Kombination von Destillation und Quantisierung für Experimente zur Bildverarbeitung und neuronaler maschineller Übersetzung.", "In diesem Beitrag wird ein Rahmen für die Verwendung des Lehrermodells zur Unterstützung der Kompression für das Deep Learning Modell im Rahmen der Modellkompression vorgestellt."]} +{"source": "Previous work has demonstrated the benefits of incorporating additional linguistic annotations such as syntactic trees into neural machine translation. However, the cost of obtaining those syntactic annotations is high for many languages, and the quality of linguistic structures learned in an unsupervised fashion is too poor to be helpful. In this work, we aim to improve neural machine translation via source side dependency syntax but without explicit annotation. We propose a set of models that learn to induce dependency trees on the source side and learn to use that information on the target side. 
Importantly, we also show that our dependency trees capture important syntactic features of language and improve translation quality on two language pairs En-De and En-Ru.", "target": ["NMT mit latenten Bäumen verbessern.", "Diese Arbeit beschreibt eine Methode zur Induktion von quellenseitigen Abhängigkeitsstrukturen im Dienste der neuronalen maschinellen Übersetzung."]} +{"source": "Model-free reinforcement learning (RL) requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to improve the sample efficiency when we have access to demonstrations. Our approach, Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment's fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency. This includes reward shaping, behavioral cloning, and reverse curriculum generation.", "target": ["Lernen indem von einer einzigen Demonstration aus rückwärts gearbeitet wird, selbst wenn diese ineffizient ist, und dem Agenten progressiv erlauben nach und nach mehr Aufgaben selbst zu lösen.", "In diesem Beitrag wird eine Methode zur Steigerung der Effizienz von spärlichen Belohnungs-RL-Methoden durch ein Rückwärts Curriculum auf Expertendemonstrationen vorgestellt. ", "Die Arbeit stellt eine Strategie zur Lösung von spärlichen Belohnungsaufgaben mit RL vor, indem Anfangszustände aus Demonstrationen gesampelt werden."]} +{"source": "Episodic memory is a psychology term which refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and the increasing popularity of external memory mechanisms to handle long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus unlike most prior mechanisms for external memory it is feasible to use in an online reinforcement learning setting.\n", "target": ["Externer Speicher für Online-Reinforcement Learning basierend auf der Schätzung von Gradienten über eine neuartige Reservoir-Sampling-Technik.", "Die Arbeit schlägt einen modifizierten Ansatz für RL vor, bei dem ein zusätzliches \"episodisches Gedächtnis\" vom Agenten gehalten wird und ein \"Abfragenetzwerk\" verwendet wird, das auf dem aktuellen Zustand basiert."]} +{"source": "We achieve bias-variance decomposition for Boltzmann machines using an information geometric formulation. 
Our decomposition leads to an interesting phenomenon: the variance does not necessarily increase when more parameters are included in Boltzmann machines, while the bias always decreases. Our result gives theoretical evidence of the generalization ability of deep learning architectures because it provides the possibility of increasing the representation power while avoiding variance inflation.", "target": ["Wir erreichen eine Bias-Varianz-Zerlegung für Boltzmann-Maschinen mit Hilfe einer informationsgeometrischen Formulierung.", "Das Ziel dieses Artikels ist es, die Effektivität und Verallgemeinerbarkeit von Deep Learning zu analysieren, indem eine theoretische Analyse der Bias-Varianz-Zerlegung für hierarchische Modelle, insbesondere Boltzmann-Maschinen, vorgestellt wird. ", "Die Arbeit kommt zu dem Schluss, dass es möglich ist, sowohl die Verzerrung als auch die Varianz in einem hierarchischen Modell zu reduzieren."]} +{"source": "Recurrent Neural Networks (RNNs) are powerful tools for solving sequence-based problems, but their efficacy and execution time are dependent on the size of the network. Following recent work in simplifying these networks with model pruning and a novel mapping of work onto GPUs, we design an efficient implementation for sparse RNNs. We investigate several optimizations and tradeoffs: Lamport timestamps, wide memory loads, and a bank-aware weight layout. With these optimizations, we achieve speedups of over 6x over the next best algorithm for a hidden layer of size 2304, batch size of 4, and a density of 30%. Further, our technique allows for models of over 5x the size to fit on a GPU for a speedup of 2x, enabling larger networks to help advance the state-of-the-art. We perform case studies on NMT and speech recognition tasks in the appendix, accelerating their recurrent layers by up to 3x.", "target": ["Kombination von Netzwerk-Pruning und persistenten Kernels zu einer praktischen, schnellen und genauen Netzimplementierung.", "In diesem Beitrag werden spärliche persistente RNNs vorgestellt, ein Mechanismus, der die bestehende Arbeit des Speicherns von RNN-Gewichten auf einem Chip durch Pruning ergänzt."]} +{"source": "Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing the Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve a low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving a similar pruning rate. 
Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet.", "target": ["Wir stellen eine neue Pruning-Methode und ein spärliches Matrixformat vor, um eine hohe Indexkomprimierung und eine parallele Indexdekodierung zu ermöglichen.", "Die Autoren verwenden die Viterbi-Codierung, um den Index der Sparse-Matrix eines Pruned Netzwerks drastisch zu komprimieren, wodurch einer der wichtigsten Speicher-Overheads reduziert und die Inferenz in der parallelen Umgebung beschleunigt wird."]} +{"source": "Learning policies for complex tasks that require multiple different skills is a major challenge in reinforcement learning (RL). It is also a requirement for its deployment in real-world scenarios. This paper proposes a novel framework for efficient multi-task reinforcement learning. Our framework trains agents to employ hierarchical policies that decide when to use a previously learned policy and when to learn a new skill. This enables agents to continually acquire new skills during different stages of training. Each learned task corresponds to a human language description. Because agents can only access previously learned skills through these descriptions, the agent can always provide a human-interpretable description of its choices. In order to help the agent learn the complex temporal dependencies necessary for the hierarchical policy, we provide it with a stochastic temporal grammar that modulates when to rely on previously learned skills and when to execute new skills. We validate our approach on Minecraft games designed to explicitly test the ability to reuse previously learned skills while simultaneously learning new skills.", "target": ["Ein neuartiges hierarchisches Regelnetzwerk, das bereits erlernte Fähigkeiten neben und als Teilkomponenten von neuen Fähigkeiten wiederverwenden kann, indem es die zugrunde liegenden Beziehungen zwischen den Fähigkeiten entdeckt.", "Diese Arbeit zielt darauf ab, hierarchische Regeln zu lernen, indem eine rekursive Regelstruktur verwendet wird, die durch eine stochastische zeitliche Grammatik geregelt wird.", "In diesem Beitrag wird ein Ansatz zum Erlernen hierarchischer Strategien in einem Kontext des lebenslangen Lernens vorgeschlagen, indem Strategien gestapelt und dann eine explizite \"Switch\"-Strategie verwendet wird."]} +{"source": "Embeddings are a fundamental component of many modern machine learning and natural language processing models.\n Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models.\n State of the art in analyzing embeddings consists in projecting them in two-dimensional planes without any interpretable semantics associated to the axes of the projection, which makes detailed analyses and comparison among multiple sets of embeddings challenging.\n In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology.\n This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces.\n We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the 
underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases.", "target": ["Wir schlagen vor, explizite vektoralgebraische Formelprojektionen als alternative Methode zur Visualisierung von Einbettungsräumen zu verwenden, die speziell auf zielgerichtete Analyseaufgaben zugeschnitten ist und in unserer Benutzerstudie t-SNE übertrifft.", "Analyse der Einbettungsräume auf nicht-parametrische Weise (anhand von Beispielen)."]} +{"source": "Learning deep networks which can resist large variations between training and testing data is essential to build accurate and robust image classifiers. Towards this end, a typical strategy is to apply data augmentation to enlarge the training set. However, standard data augmentation is essentially a brute-force strategy which is inefficient, as it performs all the pre-defined transformations to every training sample. In this paper, we propose a principled approach to train networks with significantly improved resistance to large variations between training and testing data. This is achieved by embedding a learnable transformation module into the introspective networks (Jin et al., 2017; Lazarow et al., 2017; Lee et al., 2018), which is a convolutional neural network (CNN) classifier empowered with generative capabilities. Our approach alternatively synthesizes pseudo-negative samples with learned transformations and enhances the classifier by retraining it with synthesized samples. Experimental results verify that our approach significantly improves the ability of deep networks to resist large variations between training and testing data and achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN and CIFAR-10.", "target": ["Wir schlagen einen prinzipiellen Ansatz vor, der Klassifizierer mit der Fähigkeit ausstattet, größeren Abweichungen zwischen Trainings- und Testdaten auf intelligente und effiziente Weise zu widerstehen.", "Introspektives Lernen zum Umgang mit Datenvariationen zur Testzeit.", "Diese Arbeit schlägt die Verwendung von gelernten Transformationsnetzwerken vor, die in introspektive Netzwerke eingebettet sind, um die Klassifizierungsleistung mit synthetisierten Beispielen zu verbessern."]} +{"source": "It is well known that it is possible to construct \"adversarial examples\"\n for neural networks: inputs which are misclassified by the network\n yet indistinguishable from true data. We propose a simple\n modification to standard neural network architectures, thermometer\n encoding, which significantly increases the robustness of the network to\n adversarial examples. 
We demonstrate this robustness with experiments\n on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that\n models with thermometer-encoded inputs consistently have higher accuracy\n on adversarial examples, without decreasing generalization.\n State-of-the-art accuracy under the strongest known white-box attack was \n increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10.\n We explore the properties of these networks, providing evidence\n that thermometer encodings help neural networks to\n find more-non-linear decision boundaries.", "target": ["Diskretisierung der Eingaben führt zu Robustheit gegenüber adversarial Beispielen.", "Die Autoren präsentieren eine eingehende Studie über die Diskretisierung / Quantisierung der Eingabe als Schutz gegen adversarial Beispiele."]} +{"source": "Low-precision training is a promising way of decreasing the time and energy cost of training machine learning models.\n Previous work has analyzed low-precision training algorithms, such as low-precision stochastic gradient descent, and derived theoretical bounds on their convergence rates.\n These bounds tend to depend on the dimension of the model $d$ in that the number of bits needed to achieve a particular error bound increases as $d$ increases.\n This is undesirable because a motivating application for low-precision training is large-scale models, such as deep learning, where $d$ can be huge.\n In this paper, we prove dimension-independent bounds for low-precision training algorithms that use fixed-point arithmetic, which lets us better understand what affects the convergence of these algorithms as parameters scale.\n Our methods also generalize naturally to let us prove new convergence bounds on low-precision training with other quantization schemes, such as low-precision floating-point computation and logarithmic quantization.", "target": ["Wir haben dimensionsunabhängige Schranken für Trainingsalgorithmen mit geringer Genauigkeit bewiesen.", "In diesem Beitrag werden Bedingungen diskutiert, unter denen die Konvergenz von Trainingsmodellen mit niedrigpräzisen Gewichten nicht von der Modelldimension abhängt."]} +{"source": "We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and ERL2. Results are presented on a novel environment we call 'Krazy World' and a set of maze environments. We show E-MAML and ERL2 deliver better performance on tasks where exploration is important.", "target": ["Änderungen an MAML und RL2, die eine bessere Erkundung ermöglichen sollen. ", "Die Arbeit schlägt einen Trick vor, um die Zielfunktionen zu erweitern, um die Exploration in Meta-RL auf der Grundlage von zwei neuen Meta RL Algorithmen voranzutreiben."]} +{"source": "We propose a new class of probabilistic neural-symbolic models for visual question answering (VQA) that provide interpretable explanations of their decision making in the form of programs, given a small annotated set of human programs. The key idea of our approach is to learn a rich latent space which effectively propagates program annotations from known questions to novel questions. We do this by formalizing prior work on VQA, called module networks (Andreas, 2016) as discrete, structured, latent variable models on the joint distribution over questions and answers given images, and devise a procedure to train the model effectively. 
Our results on a dataset of compositional questions about SHAPES (Andreas, 2016) show that our model generates more interpretable programs and obtains better accuracy on VQA in the low-data regime than prior work.", "target": ["Ein probabilistisches neuronales symbolisches Modell mit einem latenten Programmraum für besser interpretierbare Fragebeantwortung.", "In diesem Beitrag wird ein diskretes, strukturiertes latentes Variablenmodell für die Beantwortung visueller Fragen vorgeschlagen, das eine kompositionelle Generalisierung und Schlussfolgerung mit erheblichem Leistungs- und Fähigkeitsgewinn beinhaltet."]} +{"source": "The ability to deploy neural networks in real-world, safety-critical systems is severely limited by the presence of adversarial examples: slightly perturbed inputs that are misclassified by the network. In recent years, several techniques have been proposed for training networks that are robust to such examples; and each time stronger attacks have been devised, demonstrating the shortcomings of existing defenses. This highlights a key difficulty in designing an effective defense: the inability to assess a network's robustness against future attacks. We propose to address this difficulty through formal verification techniques. We construct ground truths: adversarial examples with a provably-minimal distance from a given input point. We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths; and also of defense techniques, by computing the distance to the ground truths before and after the defense is applied, and measuring the improvement. We use this technique to assess recently suggested attack and defense techniques.\n", "target": ["Wir nutzen die formale Verifikation, um die Effektivität von Techniken zum Auffinden von Gegenbeispielen oder zur Abwehr von Gegenbeispielen zu bewerten.", "In diesem Beitrag wird eine Methode zur Berechnung von adversarial Beispielen mit minimalem Abstand zu den ursprünglichen Eingaben vorgeschlagen.", "Die Autoren schlagen vor, Beispiele mit nachweislich minimalem Abstand als Instrument zur Bewertung der Robustheit eines trainierten Netzes zu verwenden.", "Die Arbeit beschreibt eine Methode zur Generierung von adversarial Beispielen, die einen minimalen Abstand zu dem Trainingsbeispiel haben, das zu ihrer Generierung verwendet wurde."]} +{"source": "This paper introduces a new framework for open-domain question answering in which the retriever and the reader \\emph{iteratively interact} with each other. The framework is agnostic to the architecture of the machine reading model provided it has \\emph{access} to the token-level hidden representations of the reader. The retriever uses fast nearest neighbor search that allows it to scale to corpora containing millions of paragraphs. A gated recurrent unit updates the query at each step conditioned on the \\emph{state} of the reader and the \\emph{reformulated} query is used to re-rank the paragraphs by the retriever. We conduct analysis and show that iterative interaction helps in retrieving informative paragraphs from the corpus. 
Finally, we show that our multi-step-reasoning framework brings consistent improvement when applied to two widely used reader architectures (DrQA and BiDAF) on various large open-domain datasets: TriviaQA-unfiltered, Quasar-T, SearchQA, and SQuAD-open (code and pretrained models are available at https://github.com/rajarshd/Multi-Step-Reasoning).", "target": ["Die Interaktion zwischen Paragraphen-Retriever und maschinellem Leser erfolgt über Reinforcement Learning, um große Verbesserungen bei offenen Datensätzen zu erzielen.", "Die Arbeit stellt einen neuen Rahmen für die bidirektionale Interaktion zwischen Dokumentensuchmaschine und Leser für die Beantwortung von Fragen in offenen Bereichen mit der Idee des 'Leserzustands' von Leser zu Suchmaschine vor.", "In dem Beitrag wird ein Modell für das maschinelle Lesen von mehreren Dokumenten vorgeschlagen, das aus drei verschiedenen Teilen und einem Algorithmus besteht."]} +{"source": "Many imaging tasks require global information about all pixels in an image. Conventional bottom-up classification networks globalize information by decreasing resolution; features are pooled and down-sampled into a single output. But for semantic segmentation and object detection tasks, a network must provide higher-resolution pixel-level outputs. To globalize information while preserving resolution, many researchers propose the inclusion of sophisticated auxiliary blocks, but these come at the cost of a considerable increase in network size and computational cost. This paper proposes stacked u-nets (SUNets), which iteratively combine features from different resolution scales while maintaining resolution. SUNets leverage the information globalization power of u-nets in a deeper network architecture that is capable of handling the complexity of natural images. SUNets perform extremely well on semantic segmentation tasks using a small number of parameters.", "target": ["Es wird eine neue Architektur vorgestellt, die die Informationsglobalisierungskraft von U-Netzen in einem tieferen Netz nutzt und ohne Schnickschnack aufgabenübergreifend gute Leistungen erbringt.", "Eine Netzarchitektur für die semantische Bildsegmentierung, die auf der Zusammenstellung eines Stapels grundlegender U-Netz-Architekturen basiert, die die Anzahl der Parameter reduziert und die Ergebnisse verbessert.", "Hier wird eine gestapelte U-Netz-Architektur für die Bildsegmentierung vorgeschlagen."]} +{"source": "Asking questions is an important ability for a chatbot. This paper focuses on question generation. Although there are existing works on question generation based on a piece of descriptive text, it remains a very challenging problem. In the paper, we propose a new question generation problem, which also requires the input of a target topic in addition to a piece of descriptive text. The key reason for proposing the new problem is that in practical applications, we found that useful questions need to be targeted toward some relevant topics. One almost never asks a random question in a conversation. Due to the fact that given a descriptive text, it is often possible to ask many types of questions, generating a question without knowing what it is about is of limited use. To solve the problem, we propose a novel neural network that is able to generate topic-specific questions. 
One major advantage of this model is that it can be trained directly using a question-answering corpus without requiring any additional annotations like annotating topics in the questions or answers. Experimental results show that our model outperforms the state-of-the-art baseline.", "target": ["Wir schlagen ein neuronales Netz vor, das in der Lage ist, themenspezifische Fragen zu generieren.", "Präsentiert einen auf neuronalen Netzen basierenden Ansatz zur Generierung themenspezifischer Fragen mit der Begründung, dass themenspezifische Fragen in praktischen Anwendungen sinnvoller sind.", "Schlägt eine themenbasierte Generierungsmethode vor, die ein LSTM zur Extraktion von Themen mittels einer zweistufigen Kodierungstechnik verwendet."]} +{"source": "Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option\n to restore voluntary movements after paralysis. These devices are based on the\n ability to extract information about movement intent from neural signals recorded\n using multi-electrode arrays chronically implanted in the motor cortices of the\n brain. However, the inherent loss and turnover of recorded neurons requires repeated\n recalibrations of the interface, which can potentially alter the day-to-day\n user experience. The resulting need for continued user adaptation interferes with\n the natural, subconscious use of the BMI. Here, we introduce a new computational\n approach that decodes movement intent from a low-dimensional latent representation\n of the neural data. We implement various domain adaptation methods\n to stabilize the interface over significantly long times. This includes Canonical\n Correlation Analysis used to align the latent variables across days; this method\n requires prior point-to-point correspondence of the time series across domains.\n Alternatively, we match the empirical probability distributions of the latent variables\n across days through the minimization of their Kullback-Leibler divergence.\n These two methods provide a significant and comparable improvement in the performance\n of the interface. However, implementation of an Adversarial Domain\n Adaptation Network trained to match the empirical probability distribution of the\n residuals of the reconstructed neural signals outperforms the two methods based\n on latent variables, while requiring remarkably few data points to solve the domain\n adaptation problem.", "target": ["Wir implementieren ein adversariales Domänenanpassungsnetzwerk, um eine feste Gehirn-Maschine-Schnittstelle gegen allmähliche Veränderungen der aufgezeichneten neuronalen Signale zu stabilisieren.", "Beschreibt einen neuen Ansatz für implantierte Gehirn-Maschine Schnittstellen, um Kalibrierungsprobleme und Kovariatenverschiebungen zu lösen. ", "Die Autoren definieren ein BMI, das einen Autoencoder verwendet, und gehen dann auf das Problem der Datendrift im BMI ein."]} +{"source": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. 
Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.", "target": ["Analyse und Verständnis der Art und Weise, wie Agenten in neuronalen Netzen lernen, einfache, grundierte Sprache zu verstehen.", "Die Autoren verbinden psychologische Versuchsmethoden mit dem Verständnis, wie die Blackbox der Deep Learning Methoden Probleme löst.", "In diesem Beitrag wird eine Analyse der Agenten vorgestellt, die durch Reinforcement Learning in einer einfachen Umgebung, die verbale Anweisungen mit visuellen Informationen kombiniert, eine grundierte Sprache lernen."]} +{"source": "Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions.", "target": ["Erlernen von Objektteilen, hierarchischer Struktur und Dynamik durch Beobachten ihrer Bewegung.", "Schlägt ein unbeaufsichtigtes Lernmodell vor, das lernt, Objekte in Teile zu zerlegen, eine hierarchische Struktur für die Teile vorherzusagen und auf der Grundlage der zerlegten Teile und der Hierarchie Bewegungen vorherzusagen."]} +{"source": "A successful application of convolutional architectures is to increase the resolution of single low-resolution images -- an image restoration task called super-resolution (SR). Naturally, SR is of value to resource-constrained devices like mobile phones, electronic photograph frames and televisions to enhance image quality. However, SR demands perhaps the most extreme amounts of memory and compute operations of any mainstream vision task known today, preventing SR from being deployed to devices that require them. In this paper, we perform an early systematic study of system resource efficiency for SR, within the context of a variety of architectural and low-precision approaches originally developed for discriminative neural networks. We present a rich set of insights, representative SR architectures, and efficiency trade-offs; for example, the prioritization of ways to compress models to reach a specific memory and computation target and techniques to compact SR models so that they are suitable for DSPs and FPGAs. 
As a result of doing so, we manage to achieve performance that is better than or comparable to previous models in the existing literature, highlighting the practicality of using existing efficiency techniques in SR tasks. Collectively, we believe these results provide the foundation for further research into the little explored area of resource efficiency for SR.", "target": ["Wir entwickeln ein Verständnis für ressourceneffiziente Techniken zur Super-Resolution.", "Die Arbeit schlägt eine detaillierte empirische Bewertung der Kompromisse vor, die von verschiedenen Convolutional Neural Networks bei dem Problem der Superauflösung erzielt werden.", "In diesem Beitrag wird vorgeschlagen, die Effizienz der Systemressourcen für Super-Resolution-Netze zu verbessern."]} +{"source": "Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions.", "target": ["Wir untersuchen ReLU-Netzwerke in der Fourier-Domäne und zeigen ein merkwürdiges Verhalten.", "Fourier-Analyse von ReLU-Netzwerken, bei der festgestellt wurde, dass sie auf das Lernen niedriger Frequenzen ausgerichtet sind. ", "Diese Arbeit enthält theoretische und empirische Beiträge zum Thema Fourier-Koeffizienten von neuronalen Netzen."]} +{"source": "Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space. Often the distance between points is used as a proxy for match confidence. However, this can fail to represent uncertainty which can arise when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by “hedging” the location of each input in the embedding space. We introduce the hedged instance embedding (HIB) in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto, 2018). Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of “hedging its bets” across the embedding space upon encountering ambiguous inputs. 
This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure which is correlated with downstream performance.", "target": ["In dem Beitrag wird vorgeschlagen, Wahrscheinlichkeitsverteilungen anstelle von Punkten für Einbettungsaufgaben wie Erkennung und Überprüfung zu verwenden.", "In der Arbeit wird eine Alternative zur derzeitigen Punkteinbettung und eine Technik zu deren Training vorgeschlagen.", "Die Arbeit schlägt ein Modell vor, das unsichere Einbettungen verwendet, um Deep Learning auf Bayes'sche Anwendungen zu erweitern."]} +{"source": "Convolutional neural networks typically consist of many convolutional layers followed by several fully-connected layers. While convolutional layers map between high-order activation tensors, the fully-connected layers operate on flattened activation vectors. Despite its success, this approach has notable drawbacks. Flattening discards the multi-dimensional structure of the activations, and the fully-connected layers require a large number of parameters. \n We present two new techniques to address these problems. First, we introduce tensor contraction layers which can replace the ordinary fully-connected layers in a neural network. Second, we introduce tensor regression layers, which express the output of a neural network as a low-rank multi-linear mapping from a high-order activation tensor to the softmax layer. Both the contraction and regression weights are learned end-to-end by backpropagation. By imposing low rank on both, we use significantly fewer parameters. Experiments on the ImageNet dataset show that applied to the popular VGG and ResNet architectures, our methods significantly reduce the number of parameters in the fully connected layers (about 65% space savings) while negligibly impacting accuracy.", "target": ["Wir schlagen Tensorkontraktion und Tensorregressionsschichten mit niedrigem Rang vor, um die multilineare Struktur im gesamten Netzwerk zu erhalten und zu nutzen, was zu einer enormen Platzersparnis mit geringen bis keinen Auswirkungen auf die Leistung führt.", "In diesem Beitrag werden neue Schichtenarchitekturen für neuronale Netze vorgeschlagen, die eine Low-Rank-Darstellung von Tensoren verwenden.", "In dieser Arbeit werden Tensorzerlegung und Tensorregression in CNN integriert, indem eine neue Tensorregressionsschicht verwendet wird."]} +{"source": "We explore ways of incorporating bilingual dictionaries to enable semi-supervised\n neural machine translation. Conventional back-translation methods have shown\n success in leveraging target side monolingual data. However, since the quality of\n back-translation models is tied to the size of the available parallel corpora, this\n could adversely impact the synthetically generated sentences in a low resource\n setting. We propose a simple data augmentation technique to address this\n shortcoming. We incorporate widely available bilingual dictionaries that yield\n word-by-word translations to generate synthetic sentences. This automatically\n expands the vocabulary of the model while maintaining high quality content. 
Our\n method shows an appreciable improvement in performance over strong baselines.", "target": ["Wir verwenden zweisprachige Wörterbücher zur Datenerweiterung für die neuronale maschinelle Übersetzung.", "In diesem Beitrag wird die Verwendung zweisprachiger Wörterbücher zur Erstellung synthetischer Quellen für einsprachige Zieldaten untersucht, um NMT-Modelle zu verbessern, die mit kleinen Mengen paralleler Daten trainiert wurden."]} +{"source": "Rewards are sparse in the real world and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such bonus is summed up with the real task reward - making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is done based on how many environment steps it takes to reach the current observation from those in memory - which incorporates rich information about environment dynamics. This allows us to overcome the known \"couch-potato\" issues of prior work - when the agent finds a way to instantly gratify itself by exploiting actions which lead to hardly predictable consequences. We test our approach in visually rich 3D environments in ViZDoom, DMLab and MuJoCo. In navigational tasks from ViZDoom and DMLab, our agent outperforms the state-of-the-art curiosity method ICM. In MuJoCo, an ant equipped with our curiosity module learns locomotion out of the first-person-view curiosity only. The code is available at https://github.com/google-research/episodic-curiosity/.", "target": ["Wir schlagen ein neuartiges Modell der Neugier vor, das auf dem episodischen Gedächtnis und der Idee der Erreichbarkeit basiert und uns erlaubt, die bekannten \"Couch-Potato\" Probleme früherer Arbeiten zu überwinden.", "Schlägt vor, Explorationsboni in RL-Algorithmen zu vergeben, indem größere Boni für Beobachtungen vergeben werden, die in Umgebungsschritten weiter entfernt sind.", "Die Autoren schlagen einen Explorationsbonus vor, der bei spärlichen Belohnungs-RL-Problemen helfen soll, und betrachten viele Experimente in komplexen 3D-Umgebungen."]} +{"source": "We introduce a new dataset of logical entailments for the purpose of measuring models' ability to capture and exploit the structure of logical expressions against an entailment prediction task. We use this task to compare a series of architectures which are ubiquitous in the sequence-processing literature, in addition to a new model class---PossibleWorldNets---which computes entailment as a ``convolution over possible worlds''. 
Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, tree-structured neural networks outperform LSTM RNNs due to their enhanced ability to exploit the syntax of logic, and PossibleWorldNets outperform all benchmarks.", "target": ["Wir führen einen neuen Datensatz mit logischen Folgerungen ein, um die Fähigkeit von Modellen zu messen, die Struktur von logischen Ausdrücken zu erfassen und zu nutzen, und zwar anhand einer Aufgabe zur Vorhersage von Folgerungen.", "Die Arbeit schlägt ein neues Modell vor, um tiefe Modelle für die Erkennung von logischen Folgerungen als Produkt von kontinuierlichen Funktionen über mögliche Welten zu verwenden.", "Schlägt ein neues Modell für maschinelles Lernen mit Vorhersage der logischen Folgerung vor."]} +{"source": "Deep convolutional neural network (DCNN) based supervised learning is a widely practiced approach for large-scale image classification. However, retraining these large networks to accommodate new, previously unseen data imposes high computational time and energy requirements. Also, previously seen training samples may not be available at the time of retraining. We propose an efficient training methodology for incrementally growing a DCNN to allow new classes to be learned while sharing part of the base network. Our proposed methodology is inspired by transfer learning techniques, although it does not forget previously learned classes. An updated network for learning a new set of classes is formed using previously learned convolutional layers (shared from the initial part of the base network) with the addition of a few new convolutional kernels included in the later layers of the network. We evaluated the proposed scheme on several recognition applications. The classification accuracy achieved by our approach is comparable to the regular incremental learning approach (where networks are updated with new training samples only, without any network sharing).", "target": ["Die Arbeit handelt von einer neuen energieeffizienten Methode für inkrementelles Lernen.", "Die Arbeit schlägt ein Verfahren für inkrementelles Lernen als Transferlernen vor.", "In diesem Beitrag wird eine Methode vorgestellt, mit der tiefe Convolutional Neural Networks inkrementell trainiert werden können, wobei die Daten in kleinen Batches über einen bestimmten Zeitraum hinweg verfügbar sind.", "Stellt einen Ansatz für klasseninkrementelles Lernen mit tiefen Netzen vor, indem er drei verschiedene Lernstrategien für den endgültigen/besten Ansatz vorschlägt."]} +{"source": "Recurrent neural networks (RNNs) are widely used to model sequential data but\n their non-linear dependencies between sequence elements prevent parallelizing\n training over sequence length. We show that the training of RNNs with only linear\n sequential dependencies can be parallelized over the sequence length using the\n parallel scan algorithm, leading to rapid training on long sequences even with\n small minibatch size. We develop a parallel linear recurrence CUDA kernel and\n show that it can be applied to immediately speed up training and inference of\n several state of the art RNN architectures by up to 9x. We abstract recent work\n on linear RNNs into a new framework of linear surrogate RNNs and develop a\n linear surrogate model for the long short-term memory unit, the GILR-LSTM, that\n utilizes parallel linear recurrence. 
We extend sequence learning to new\n extremely long sequence regimes that were previously out of reach by\n successfully training a GILR-LSTM on a synthetic sequence classification task\n with a one million timestep dependency.\n", "target": ["Verwendung von parallelem Scannen zur Parallelisierung linearer rekurrenter neuronaler Netze. Training eines Modells mit Abhängigkeiten über eine Million Zeitschritte.", "Schlägt vor, RNN durch Anwendung der Methode von Blelloch zu beschleunigen.", "Die Autoren schlagen einen parallelen Algorithmus für lineare Surrogate-RNNs vor, der Geschwindigkeitssteigerungen gegenüber den bestehenden Implementierungen von Quasi-RNN, SRU und LSTM ermöglicht."]} +{"source": "Neural text generation models are often autoregressive language models or seq2seq models. Neural autoregressive and seq2seq models that generate text by sampling words sequentially, with each word conditioned on the previous words, are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of sample quality. Language models are typically trained via maximum likelihood and most often with teacher forcing. Teacher forcing is well-suited to optimizing perplexity but can result in poor sample quality because generating text requires conditioning on sequences of words that were never observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitative and quantitative evidence that this produces more realistic text samples compared to a maximum likelihood trained model.", "target": ["GAN in natürlicher Sprache zum Ausfüllen von Lücken.", "In diesem Beitrag wird vorgeschlagen, Text mithilfe von GANs zu generieren.", "Generierung von Textproben mit Hilfe von GAN und einem Mechanismus zum Auffüllen fehlender Wörter in Abhängigkeit vom umgebenden Text."]} +{"source": "Parametric texture models have been applied successfully to synthesize artificial images. Psychophysical studies show that under defined conditions observers are unable to differentiate between model-generated and original natural textures. In industrial applications the reverse case is of interest: a texture analysis system should decide if human observers are able to discriminate between a reference and a novel texture. For example, in case of inspecting decorative surfaces the detection of visible texture anomalies without any prior knowledge is required. Here, we implemented a human-vision-inspired novelty detection approach. Assuming that the features used for texture synthesis are important for human texture perception, we compare psychophysical as well as learnt texture representations based on activations of a pretrained CNN in a novelty detection scenario. Additionally, we introduce a novel objective function to train one-class neural networks for novelty detection and compare the results to standard one-class SVM approaches. Our experiments clearly show the differences between human-vision-inspired texture representations and learnt features in detecting visual anomalies. 
Based on a digital print inspection scenario we show that psychophysical texture representations are able to outperform CNN-encoded features.", "target": ["Vergleich von psychophysischen und CNN-kodierten Texturrepräsentationen in einer Anwendung zur Erkennung von Neuheiten in einem neuronalen Einklassen-Netzwerk.", "Dieser Artikel konzentriert sich auf die Erkennung von Neuheiten und zeigt, dass psychophysikalische Darstellungen die VGG-Encoder-Merkmale in einigen Bereichen dieser Aufgabe übertreffen können.", "In diesem Beitrag wird die Erkennung von Anomalien in Texturen untersucht und eine originelle Verlustfunktion vorgeschlagen.", "Schlägt vor, zwei Anomalie-Detektoren aus drei verschiedenen Modellen zu trainieren, um Wahrnehmungsanomalien in visuellen Texturen zu erkennen."]} +{"source": "In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.", "target": ["Wir schlagen einen neuartigen Regularisierungsansatz vor, der die Nicht-Überlappung beim Repräsentationslernen fördert, um die Interpretierbarkeit zu verbessern und die Überanpassung zu reduzieren.", "Die Arbeit führt einen Matrix-Regularisierer ein, um gleichzeitig sowohl Spärlichkeit als auch annähernde Orthogonalität zu induzieren.", "Der Artikel untersucht eine Regularisierungsmethode, um die Spärlichkeit zu fördern und die Überlappung zwischen den Trägern der Gewichtsvektoren in den gelernten Darstellungen zu reduzieren, um die Interpretierbarkeit zu verbessern und eine Überanpassung zu vermeiden.", "In dem Beitrag wird ein neuer Regularisierungsansatz vorgeschlagen, der gleichzeitig dazu führt, dass die Gewichtsvektoren (W) spärlich und orthogonal zueinander sind."]} +{"source": "Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use \emph{ad hoc} gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues.\n\t\n We prove that learnable gates in a recurrent model formally provide \emph{quasi-invariance to general time transformations} in the input data. We recover part of the LSTM architecture from a simple axiomatic approach.\n\t\n This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new \emph{chrono initialization} is shown to greatly improve learning of long term dependencies, with minimal implementation effort.\n\n", "target": ["Beweist, dass Gating-Mechanismen gegenüber Zeittransformationen invariant sind. 
Einführung und Test einer neuen Initialisierung für LSTMs auf der Grundlage dieser Erkenntnis.", "Die Arbeit verbindet die Entwicklung von rekurrenten Netzwerken und deren Auswirkungen auf die Reaktion des Netzwerks auf Zeittransformationen und nutzt dies, um ein einfaches Initialisierungsschema für die Verzerrung zu entwickeln."]} +{"source": "Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine \\emph{learning}. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach ``learning to teach''. In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding).", "target": ["Wir schlagen ein neues Framework für die automatische Steuerung des maschinellen Lernprozesses vor und überprüfen die Wirksamkeit des \"Learning to teach\".", "Diese Arbeit konzentriert sich auf das \"maschinelle Lernen\" und schlägt vor, das Reinforcement Learning zu nutzen, indem die Belohnung als die Lerngeschwindigkeit des Lernenden definiert wird und die Parameter des Lehrers mit Hilfe des Policy-Gradienten aktualisiert werden.", "Die Autoren definieren ein Deep Learning Modell, das aus vier Komponenten besteht: einem Studentenmodell, einem Lehrermodell, einer Verlustfunktion und einem Datensatz. ", "Vorschlagen eines Frameworks für das \"Lernen zu lehren\", das der Auswahl der Daten entspricht, die dem Lernenden präsentiert werden."]} +{"source": "We present DL2, a system for training and querying neural networks with logical constraints. The key idea is to translate these constraints into a differentiable loss with desirable mathematical properties and to then either train with this loss in an iterative manner or to use the loss for querying the network for inputs subject to the constraints. 
We empirically demonstrate that DL2 is effective in both training and querying scenarios, across a range of constraints and data sets.", "target": ["Ein differenzierbarer Verlust für logische Beschränkungen beim Training und bei der Abfrage neuronaler Netze.", "Ein Rahmenwerk für die Umwandlung von Abfragen über Parameter und Eingabe-/Ausgabepaare für neuronale Netze in differenzierbare Verlustfunktionen und eine zugehörige deklarative Sprache für die Spezifikation dieser Abfragen.", "Diese Arbeit befasst sich mit dem Problem der Kombination logischer Ansätze mit neuronalen Netzen, indem eine logische Formel in eine nicht-negative Verlustfunktion für ein neuronales Netz übersetzt wird."]} +{"source": "Genetic algorithms have been widely used in many practical optimization problems.\n Inspired by natural selection, operators, including mutation, crossover\n and selection, provide effective heuristics for search and black-box optimization.\n However, they have not been shown useful for deep reinforcement learning, possibly\n due to the catastrophic consequence of parameter crossovers of neural networks.\n Here, we present Genetic Policy Optimization (GPO), a new genetic algorithm\n for sample-efficient deep policy optimization. GPO uses imitation learning\n for policy crossover in the state space and applies policy gradient methods for mutation.\n Our experiments on MuJoCo tasks show that GPO as a genetic algorithm\n is able to provide superior performance over the state-of-the-art policy gradient\n methods and achieves comparable or higher sample efficiency.", "target": ["Auf genetischen Algorithmen basierender Ansatz zur Optimierung von Strategien für tiefe neuronale Netze.", "Die Autoren stellen einen Algorithmus für das Training von Ensembles von Strategienetzwerken vor, der regelmäßig verschiedene Strategien im Ensemble miteinander mischt.", "In diesem Beitrag wird eine vom genetischen Algorithmus inspirierte Methode zur Optimierung von Richtlinien vorgeschlagen, die die Mutations- und Crossover-Operatoren in Richtliniennetzwerken nachahmt."]} +{"source": "To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works.\n Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov’s dual-averaging algorithm on a quantization constrained optimization problem, we propose a more principled alternative approach, called ProxQuant , that formulates quantized network training as a regularized learning problem instead and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with state-of-the-art on multi-bit quantization. 
We further perform theoretical analyses showing that ProxQuant converges to stationary points under mild smoothness assumptions, whereas variants such as lazy prox-gradient method can fail to converge in the same setting.", "target": ["Ein prinzipielles Framework für die Modellquantisierung unter Verwendung der proximalen Gradientenmethode, mit empirischer Bewertung und theoretischen Konvergenzanalysen.", "Schlägt die ProxQuant-Methode zum Trainieren neuronaler Netze mit quantisierten Gewichten vor.", "Schlägt vor, binäre Netze und ihre Varianten mit Hilfe des proximalen Gradientenabstiegs zu lösen."]} +{"source": "Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with \\Omega\\(N) units. \n\n Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N\\rightarrow\\infty datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d_0=\\tilde{\\Omega}(\\sqrt{N}), and a more realistic number of d_1=\\tilde{\\Omega}(N/d_0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d_0 = 16 hidden neurons.", "target": ["Schlechte lokale Minima verschwinden in einem mehrschichtigen neuronalen Netz: Ein Beweis mit vernünftigeren Annahmen als bisher.", "In Netzen mit einer einzigen verborgenen Schicht nimmt das Volumen der suboptimalen lokalen Minima im Vergleich zu den globalen Minima exponentiell ab.", "In diesem Beitrag wird untersucht, warum Standard-SGD-Algorithmen auf der Grundlage neuronaler Netze zu \"guten\" Lösungen konvergieren."]} +{"source": "Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness, and several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we identify the attention shift phenomenon, which may hinder the transferability of adversarial examples to the defense models. It indicates that the defenses rely on different discriminative regions to make predictions compared with normally trained models. Therefore, we propose an attention-invariant attack method to generate more transferable adversarial examples. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. 
Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the defense techniques.", "target": ["Wir schlagen eine aufmerksamkeitsinvariante Angriffsmethode vor, um mehr übertragbare adversarial Beispiele für Blackbox-Angriffe zu generieren, die modernste Verteidigungsmaßnahmen mit einer hohen Erfolgsquote überlisten können.", "Die Arbeit schlägt einen neuen Weg vor, um den Stand der Technik bei der Abwehr von gegnerischen Angriffen auf CNN zu überwinden.", "In diesem Beitrag wird die Vermutung geäußert, dass die \"Aufmerksamkeitsverschiebung\" eine Schlüsseleigenschaft ist, die dazu führt, dass adversarial Angriffe nicht übertragen werden können, und es wird eine aufmerksamkeitsinvariante Angriffsmethode vorgeschlagen."]} +{"source": "We present Merged-Averaged Classifiers via Hashing (MACH) for $K$-classification with large $K$. Compared to traditional one-vs-all classifiers that require $O(Kd)$ memory and inference cost, MACH only need $O(d\\log{K})$ memory while only requiring $O(K\\log{K} + d\\log{K})$ operation for inference. MACH is the first generic $K$-classification algorithm, with provably theoretical guarantees, which requires $O(\\log{K})$ memory without any assumption on the relationship between classes. MACH uses universal hashing to reduce classification with a large number of classes to few independent classification task with very small (constant) number of classes. We provide theoretical quantification of accuracy-memory tradeoff by showing the first connection between extreme classification and heavy hitters. With MACH we can train ODP dataset with 100,000 classes and 400,000 features on a single Titan X GPU (12GB), with the classification accuracy of 19.28\\%, which is the best-reported accuracy on this dataset. Before this work, the best performing baseline is a one-vs-all classifier that requires 40 billion parameters (320 GB model size) and achieves 9\\% accuracy. In contrast, MACH can achieve 9\\% accuracy with 480x reduction in the model size (of mere 0.6GB). With MACH, we also demonstrate complete training of fine-grained imagenet dataset (compressed size 104GB), with 21,000 classes, on a single GPU.", "target": ["Wie man 100.000 Klassen auf einem einzigen Grafikprozessor trainiert.", "Vorschlagen einer effizienten Hashing-Methode MACH für die Softmax-Approximation im Kontext eines großen Ausgaberaums, die sowohl Speicher als auch Rechenzeit spart.", "Eine Methode zur Klassifizierung von Problemen mit einer großen Anzahl von Klassen in einem Mehrklassenumfeld, demonstriert an ODP- und Imagenet-21K Datensätzen.", "Die Arbeit stellt ein auf Hashing basierendes Verfahren zur Verringerung der Speicher- und Berechnungszeit für die K-Wege-Klassifizierung vor, wenn K groß ist."]} +{"source": "Gradient-based optimization is the foundation of deep learning and reinforcement learning.\n Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables, based on gradients of a learned function.\n These estimators can be jointly trained with model parameters or policies, and are applicable in both discrete and continuous settings. We give unbiased, adaptive analogs of state-of-the-art reinforcement learning methods such as advantage actor-critic. 
We also demonstrate this framework for training discrete latent-variable models.", "target": ["Wir stellen eine allgemeine Methode zur unverzerrten Schätzung von Gradienten von Black-Box-Funktionen von Zufallsvariablen vor. Wir wenden diese Methode auf diskrete Variationsinferenz und Reinforcement Learning an. ", "Schlägt einen neuen Ansatz zur Durchführung des Gradientenabstiegs für die Blackbox-Optimierung oder das Training diskreter latenter Variablenmodelle vor."]} +{"source": "Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al. (2017) raised doubts whether the same holds when discriminator has bounded size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support. In other words, the training objective is unable to prevent mode collapse. The current paper makes two contributions. (1) It proposes a novel test for estimating support size using the birthday paradox of discrete probability. Using this evidence is presented that well-known GANs approaches do learn distributions of fairly low support. (2) It theoretically studies encoder-decoder GANs architectures (e.g., BiGAN/ALI), which were proposed to learn more meaningful features via GANs, and consequently to also solve the mode-collapse issue. Our result shows that such encoder-decoder training objectives also cannot guarantee learning of the full distribution because they cannot prevent serious mode collapse. More seriously, they cannot prevent learning meaningless codes for data, contrary to usual intuition.", "target": ["Wir schlagen einen Schätzer für die Unterstützungsgröße der gelernten Verteilung von GANs vor, um zu zeigen, dass sie in der Tat unter einem Mode-Kollaps leiden, und wir beweisen, dass Encoder-Decoder GANs dieses Problem ebenfalls nicht vermeiden.", "In dem Beitrag wird versucht, die Größe der Unterstützung für Lösungen, die von typischen GANs erzeugt werden, experimentell zu schätzen. ", "Diese Arbeit schlägt einen cleveren neuen Test vor, der auf dem Geburtstags-Paradoxon basiert, um die Vielfalt in generativen Beispielen zu messen. Die Ergebnisse des Experiments werden dahingehend interpretiert, dass der Modus-Kollaps in einer Reihe von modernen generativen Modellen stark ist.", "Die Arbeit verwendet das Geburtstagsparadoxon, um zu zeigen, dass einige GAN-Architekturen Verteilungen mit relativ geringer Unterstützung erzeugen."]} +{"source": "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. 
We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.", "target": ["Wir schlagen ein Framework vor, um natürliche Gegner gegen Black-Box-Klassifikatoren sowohl für visuelle als auch für textuelle Dom��nen zu generieren, indem wir die Suche nach Gegnern im latenten semantischen Raum vornehmen.", "Schlägt eine Methode zur Erstellung von semantischen Gegenbeispielen vor.", "Schlägt einen Rahmen zur Erzeugung natürlicher Gegenbeispiele durch die Suche nach Gegenspielern in einem latenten Raum mit dichter und kontinuierlicher Datendarstellung vor."]} +{"source": "Kronecker-factor Approximate Curvature (Martens & Grosse, 2015) (K-FAC) is a 2nd-order optimization method which has been shown to give state-of-the-art performance on large-scale neural network optimization tasks (Ba et al., 2017). It is based on an approximation to the Fisher information matrix (FIM) that makes assumptions about the particular structure of the network and the way it is parameterized. The original K-FAC method was applicable only to fully-connected networks, although it has been recently extended by Grosse & Martens (2016) to handle convolutional networks as well. In this work we extend the method to handle RNNs by introducing a novel approximation to the FIM for RNNs. This approximation works by modelling the covariance structure between the gradient contributions at different time-steps using a chain-structured linear Gaussian graphical model, summing the various cross-covariances, and computing the inverse in closed form. We demonstrate in experiments that our method significantly outperforms general purpose state-of-the-art optimizers like SGD with momentum and Adam on several challenging RNN training tasks.", "target": ["Wir erweitern die K-FAC Methode auf RNNs, indem wir eine neue Familie von Fisher Approximationen entwickeln.", "Die Autoren erweitern die K-FAC-Methode auf RNNs und stellen drei Möglichkeiten der Annäherung von F vor. Sie zeigen Optimierungsergebnisse für drei Datensätze, die ADAM sowohl in der Anzahl der Aktualisierungen als auch in der Rechenzeit übertreffen.", "Schlägt vor, die Optimierungsmethode mit dem Kronecker-Faktor Appropriate Curvature auf rekurrente neuronale Netze zu erweitern.", "Die Autoren stellen eine Methode zweiter Ordnung vor, die speziell für RNNs konzipiert ist."]} +{"source": "Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution. In this work, we propose a new type of prior distributions for convolutional neural networks, deep weight prior (DWP), that exploit generative models to encourage a specific structure of trained convolutional filters e.g., spatial correlations of weights. We define DWP in the form of an implicit distribution and propose a method for variational inference with such type of implicit priors. 
In experiments, we show that DWP improves the performance of Bayesian neural networks when training data are limited, and initialization of weights with samples from DWP accelerates training of conventional convolutional neural networks.\n", "target": ["Das generative Modell für Kernel von Convolutional Neural Networks, das beim Training auf neuen Datensätzen als Prior-Verteilung fungiert.", "Eine Methode zur Modellierung von Convolutional Neural Networks unter Verwendung einer Bayes-Methode.", "Die Idee ist, einen Prior für einen Hilfsdatensatz zu ermitteln und diesen Prior dann über die CNN-Filter zu verwenden, um die Inferenz für einen Datensatz von Interesse zu starten.", "Diese Arbeit untersucht das Erlernen informativer Prioritäten für Convolutional Neural Network Modelle mit ähnlichen Problemdomänen, indem es Autoencoder verwendet, um eine aussagekräftige Priorität für die gefilterten Gewichte der trainierten Netze zu erhalten."]} +{"source": "The high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen rich or deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design neural networks requiring 90% fewer input features with little accuracy degradation.", "target": ["Wir haben Deep Learning Techniken auf die Segmentierung hyperspektraler Bilder und die iterative Auswahl von Merkmalen angewandt.", "Schlägt ein gieriges Verfahren zur Auswahl einer Teilmenge hochkorrelierter Spektralmerkmale bei einer Klassifizierungsaufgabe vor.", "Die Arbeit untersucht die Verwendung neuronaler Netze für die Klassifizierung und Segmentierung der hyperspektralen Bildgebung (HSI) von Zellen.", "Klassifizierung von Zellen und Implementierung von Zellsegmentierung auf der Grundlage von Deep-Learning-Techniken mit Reduzierung der Eingangsmerkmale."]} +{"source": "Data Interpretation is an important part of Quantitative Aptitude exams and requires an individual to answer questions grounded in plots such as bar charts, line graphs, scatter plots, \\textit{etc}. Recently, there has been an increasing interest in building models which can perform this task by learning from datasets containing triplets of the form \\{plot, question, answer\\}. Two such datasets have been proposed in the recent past which contain plots generated from synthetic data with limited (i) $x-y$ axes variables (ii) question templates and (iii) answer vocabulary and hence do not adequately capture the challenges posed by this task. To overcome these limitations of existing datasets, we introduce a new dataset containing $9.7$ million question-answer pairs grounded over $270,000$ plots with three main differentiators. First, the plots in our dataset contain a wide variety of realistic $x$-$y$ variables such as CO2 emission, fertility rate, \\textit{etc. } extracted from real word data sources such as World Bank, government sites, \\textit{etc}. Second, the questions in our dataset are more complex as they are based on templates extracted from interesting questions asked by a crowd of workers using a fraction of these plots. 
Lastly, the answers in our dataset are not restricted to a small vocabulary and a large fraction of the answers seen at test time are not present in the training vocabulary. As a result, existing models for Visual Question Answering which largely use end-to-end models in a multi-class classification framework cannot be used for this task. We establish initial results on this dataset and emphasize the complexity of the task using a multi-staged modular pipeline with various sub-components to (i) extract relevant data from the plot and convert it to a semi-structured table (ii) combine the question with this table and use compositional semantic parsing to arrive at a logical form from which the answer can be derived. We believe that such a modular framework is the best way to go forward as it would enable the research community to independently make progress on all the sub-tasks involved in plot question answering.", "target": ["Wir haben einen neuen Datensatz für die Dateninterpretation über Parzellen erstellt und schlagen auch eine Basislinie für dieselben vor.", "Die Autoren schlagen eine Pipeline zur Lösung des DIP-Problems vor, die das Lernen aus Datensätzen mit Tripletts der Form {Plot, Frage, Antwort} beinhaltet.", "Schlägt einen Algorithmus vor, der die in wissenschaftlichen Diagrammen dargestellten Daten interpretieren kann."]} +{"source": "Learning to predict complex time-series data is a fundamental challenge in a range of disciplines including Machine Learning, Robotics, and Natural Language Processing. Predictive State Recurrent Neural Networks (PSRNNs) (Downey et al.) are a state-of-the-art approach for modeling time-series data which combine the benefits of probabilistic filters and Recurrent Neural Networks into a single model. PSRNNs leverage the concept of Hilbert Space Embeddings of distributions (Smola et al.) to embed predictive states into a Reproducing Kernel Hilbert Space, then estimate, predict, and update these embedded states using Kernel Bayes Rule. Practical implementations of PSRNNs are made possible by the machinery of Random Features, where input features are mapped into a new space where dot products approximate the kernel well. Unfortunately PSRNNs often require a large number of RFs to obtain good results, resulting in large models which are slow to execute and slow to train. Orthogonal Random Features (ORFs) (Choromanski et al.) is an improvement on RFs which has been shown to decrease the number of RFs required for pointwise kernel approximation. Unfortunately, it is not clear that ORFs can be applied to PSRNNs, as PSRNNs rely on Kernel Ridge Regression as a core component of their learning algorithm, and the theoretical guarantees of ORF do not apply in this setting. In this paper, we extend the theory of ORFs to Kernel Ridge Regression and show that ORFs can be used to obtain Orthogonal PSRNNs (OPSRNNs), which are smaller and faster than PSRNNs. 
In particular, we show that OPSRNN models clearly outperform LSTMs and furthermore, can achieve accuracy similar to PSRNNs with an order of magnitude smaller number of features needed.", "target": ["Verbesserung prädiktiver rekurrenter neuronaler Netze durch orthogonale Zufallsmerkmale.", "Schlägt vor, die Leistung von rekurrenten neuronalen Netzen mit Vorhersagecharakter durch Berücksichtigung orthogonaler Zufallsmerkmale zu verbessern.", "Die Arbeit befasst sich mit dem Problem der Ausbildung prädiktiver rekurrenter neuronaler Netze und liefert zwei Beiträge."]} +{"source": "Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality- versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose “fidelity-weighted learning” (FWL), a semi-supervised student- teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations.", "target": ["Wir schlagen Fidelity-weighted Learning vor, einen halb-überwachten Lehrer-Schüler-Ansatz für das Training neuronaler Netze mit schwach beschrifteten Daten.", "Diese Arbeit schlägt einen Ansatz für das Lernen mit schwacher Überwachung vor, indem es einen sauberen und einen gestörten Datensatz verwendet und von einem Lehrer- und einem Schülernetzwerk ausgeht.", "In dem Beitrag wird versucht, tiefe neuronale Netzmodelle mit wenigen gelabelten Trainingsbeispielen zu trainieren.", "Die Autoren schlagen einen Ansatz für das Training von Deep-Learning-Modellen für Situationen vor, in denen es nicht genügend zuverlässige kommentierte Daten gibt."]} +{"source": " Online learning has attracted great attention due to the increasing demand for systems that have the ability of learning and evolving. When the data to be processed is also high dimensional and dimension reduction is necessary for visualization or prediction enhancement, online dimension reduction will play an essential role. The purpose of this paper is to propose new online learning approaches for supervised dimension reduction. Our first algorithm is motivated by adapting the sliced inverse regression (SIR), a pioneer and effective algorithm for supervised dimension reduction, and making it implementable in an incremental manner. The new algorithm, called incremental sliced inverse regression (ISIR), is able to update the subspace of significant factors with intrinsic lower dimensionality fast and efficiently when new observations come in. 
We also refine the algorithm by using an overlapping technique and develop an incremental overlapping sliced inverse regression (IOSIR) algorithm. We verify the effectiveness and efficiency of both algorithms by simulations and real data applications.", "target": ["Wir haben zwei neue Ansätze vorgeschlagen, die inkrementelle geschnittene inverse Regression und die inkrementelle überlappende geschnittene inverse Regression, um eine überwachte Dimensionsreduktion in einer Online-Lernmethode zu implementieren.", "Untersucht das Problem der ausreichenden Dimensionsreduzierung und schlägt einen inkrementellen Algorithmus zur invertierten Regression vor.", "In diesem Beitrag wird ein Online-Lernalgorithmus für die überwachte Dimensionsreduzierung vorgeschlagen, die so genannte inkrementelle geschnittene inverse Regression."]} +{"source": "This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for embedded and mobile devices having a limited amount of on-chip data storage such as hundreds of kilo-Bytes. The main idea of the proposed model is to recursively recycle data storage of weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on a chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving energy-efficiency and speed of training. We verified the proposed training model with deep and convolutional neural network classifiers on the MNIST and voice activity detection benchmarks. For the deep neural network, our model achieves data storage requirement of as low as 2 bits/weight, whereas the conventional binary neural network learning models require data storage of 8 to 32 bits/weight. With the same amount of data storage, our model can train a bigger network having more weights, achieving 1% less test error than the conventional binary neural network learning model. To achieve the similar classification error, the conventional binary neural network model requires 4× more data storage for weights than our proposed model. For the convolution neural network classifier, the proposed model achieves 2.4% less test error for the same on-chip storage or 6× storage savings to achieve the similar accuracy.\n", "target": ["Wir schlagen ein Lernmodell vor, das es DNN ermöglicht, mit nur 2 Bit/Gewicht zu lernen, was besonders nützlich für das Lernen auf dem Gerät ist.", "Schlägt eine Methode zur schrittweisen Diskretisierung eines NN vor, um Speicher und Leistung zu verbessern."]} +{"source": "Within-class variation in a high-dimensional dataset can be modeled as being on a low-dimensional manifold due to the constraints of the physical processes producing that variation (e.g., translation, illumination, etc.). We desire a method for learning a representation of the manifolds induced by identity-preserving transformations that can be used to increase robustness, reduce the training burden, and encourage interpretability in machine learning tasks. In particular, what is needed is a representation of the transformation manifold that can robustly capture the shape of the manifold from the input data, generate new points on the manifold, and extend transformations outside of the training domain without significantly increasing the error. 
Previous work has proposed algorithms to efficiently learn analytic operators (called transport operators) that define the process of transporting one data point on a manifold to another. The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data. The first method uses this representation in a novel randomized approach to transfer learning that employs the learned generative model to map out unseen regions of the data space. These results are shown through demonstrations of transfer learning in a data augmentation task for few-shot image classification. The second method use of transport operators for injecting specific transformations into new data examples which allows for realistic image animation and informed data augmentation. These results are shown on stylized constructions using the classic swiss roll data structure and in demonstrations of transfer learning in a data augmentation task for few-shot image classification. We also propose the use of transport operators for injecting transformations into new data examples which allows for realistic image animation.", "target": ["Das Lernen von Transportoperatoren auf Mannigfaltigkeiten bildet eine wertvolle Darstellung für Aufgaben wie Transferlernen.", "Verwendet ein Wörterbuch-Lernsystem, um vielfältige Transportoperatoren auf erweiterten USPS-Ziffern zu lernen.", "Die Arbeit berücksichtigt den Rahmen des vielfältigen Transportoperator-Lernens von Culpepper und Olshausen (2009), und interpretiert es als Erhalt einer MAP-Schätzung unter einem probabilistischen generativen Modell."]} +{"source": "The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. 
Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.", "target": ["Wir stellen ein neuronales Variationsmodell zum Erlernen sprachgeleiteter kompositorischer visueller Konzepte vor.", "Schlägt eine neuartige neuronale Netzarchitektur vor, die Objektkonzepte durch die Kombination von Beta-VAE und SCAN erlernt.", "In diesem Beitrag wird ein VAE-basiertes Modell für die Übersetzung zwischen Bildern und Text vorgestellt, dessen latente Repräsentation sich gut für die Anwendung symbolischer Operationen eignet, was ihnen eine aussagekräftigere Sprache für die Auswahl von Bildern aus Texten verleiht. ", "Diese Arbeit schlägt ein neues Modell namens SCAN (Symbol-Konzept-Assoziationsnetzwerk) für hierarchisches Konzeptlernen vor und ermöglicht die Verallgemeinerung auf neue Konzepte, die mit Hilfe logischer Operatoren aus bestehenden Konzepten zusammengesetzt werden."]} +{"source": "Despite much success in many large-scale language tasks, sequence-to-sequence (seq2seq) models have not been an ideal choice for conversational modeling as they tend to generate generic and repetitive responses. In this paper, we propose a Latent Topic Conversational Model (LTCM) that augments the seq2seq model with a neural topic component to better model human-human conversations. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model to improve generation at each time step. The experimental results show that the proposed LTCM can generate more diverse and interesting responses by sampling from its learnt latent representations. In a subjective human evaluation, the judges also confirm that LTCM is the preferred option comparing to competitive baseline models.\n", "target": ["Latent Topic Conversational Model, eine Mischung aus seq2seq und neuronalem Themenmodell, um vielfältigere und interessantere Antworten zu generieren.", "In diesem Beitrag wird eine Kombination aus Themenmodell und seq2seq-Konversationsmodell vorgeschlagen.", "Schlägt ein Gesprächsmodell mit thematischen Informationen durch die Kombination seq2seq Modell mit neuronalen Thema Modelle und zeigt das vorgeschlagene Modell übertrifft einige der Grundlinien seq2seq Modelle und andere latente Variablen seq2seq Modell Varianten.", "Der Beitrag befasst sich mit dem Problem der anhaltenden Aktualität in Gesprächsmodellen und schlägt ein Modell vor, das eine Kombination aus einem neuronalen Themenmodell und einem seq2seq-basierten Dialogsystem ist. "]} +{"source": "Most of the existing Graph Neural Networks (GNNs) are the mere extension of the Convolutional Neural Networks (CNNs) to graphs. Generally, they consist of several steps of message passing between the nodes followed by a global indiscriminate feature pooling function. In many data-sets, however, the nodes are unlabeled or their labels provide no information about the similarity between the nodes and the locations of the nodes in the graph. Accordingly, message passing may not propagate helpful information throughout the graph. We show that this conventional approach can fail to learn to perform even simple graph classification tasks. We alleviate this serious shortcoming of the GNNs by making them a two step method. 
In the first of the proposed approach, a graph embedding algorithm is utilized to obtain a continuous feature vector for each node of the graph. The embedding algorithm represents the graph as a point-cloud in the embedding space. In the second step, the GNN is applied to the point-cloud representation of the graph provided by the embedding method. The GNN learns to perform the given task by inferring the topological structure of the graph encoded in the spatial distribution of the embedded vectors. In addition, we extend the proposed approach to the graph clustering problem and a new architecture for graph clustering is proposed. Moreover, the spatial representation of the graph is utilized to design a graph pooling algorithm. We turn the problem of graph down-sampling into a column sampling problem, i.e., the sampling algorithm selects a subset of the nodes whose feature vectors preserve the spatial distribution of all the feature vectors. We apply the proposed approach to several popular benchmark data-sets and it is shown that the proposed geometrical approach strongly improves the state-of-the-art result for several data-sets. For instance, for the PTC data-set, we improve the state-of-the-art result for more than 22 %.", "target": ["Das Problem der Graphenanalyse wird in ein Problem der Punktwolkenanalyse umgewandelt. ", "Schlägt ein tiefes GNN-Netzwerk für Graphenklassifizierungsprobleme vor, das eine adaptive Graphenpooling-Schicht verwendet.", "Die Autoren schlagen eine Methode zum Lernen von Darstellungen für Graphen vor."]} +{"source": "Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results.\n Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. \n In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. \n For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. \n We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly.\n Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. 
Our attack has placed the first with 92.76% accuracy on a public MNIST black-box attack challenge.", "target": ["Wir schlagen vor, adversarische Beispiele auf der Grundlage generativer adversarischer Netze in einer Semi-Whitebox- und Blackbox-Umgebung zu generieren.", "Beschreibt AdvGAN, ein Conditional GAN plus Adversarial Loss, und evaluiert AdvGAN in Semi-White-Box- und Black-Box-Settings und berichtet über den aktuellen Stand der Technik.", "In dieser Arbeit wird eine Methode zur Erzeugung von Gegenbeispielen vorgeschlagen, die Klassifizierungssysteme täuschen und die MadryLab mnist-Herausforderung gewinnen."]} +{"source": "This paper proposes a new model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the art models on a time-split Netflix data set. Our model is based on deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome natural sparseness of collaborate filtering. The new algorithm significantly speeds up training and improves model performance. Our code is publicly available.", "target": ["In diesem Beitrag wird gezeigt, wie tiefe Autoencoder durchgängig trainiert werden können, um SoA-Ergebnisse auf einem zeitlich geteilten Netflix-Datensatz zu erzielen.", "In diesem Beitrag wird ein Deep Autoencoder Modell für die Vorhersage von Bewertungen vorgestellt, das andere State-of-the-Art-Ansätze auf dem Netflix Preisdatensatz übertrifft. ", "Schlägt vor, eine tiefe AE für die Vorhersage von Bewertungen in Empfehlungssystemen zu verwenden.", "Die Autoren stellen ein Modell für genauere Netflix-Empfehlungen vor, das zeigt, dass ein Deep Autoencoder komplexere RNN-basierte Modelle mit zeitlichen Informationen übertreffen kann."]} +{"source": "Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. 
In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.", "target": ["Wir stellen den Universal Transformer vor, ein selbstaufmerksames, parallel-in-time rekurrentes Sequenzmodell, das Transformers und LSTMs bei einer Vielzahl von Sequenz-zu-Sequenz-Aufgaben, einschließlich maschineller Übersetzung, übertrifft.", "Schlägt ein neues Modell UT vor, das auf dem Transformer-Modell basiert, mit zusätzlicher Rekursion und dynamischem Stoppen der Rekursion.", "Diese Arbeit erweitert Transformer durch die rekursive Anwendung eines Mult-Head Self-Attention Blocks, anstatt mehrere Blöcke im Standard Transformer zu stapeln."]} +{"source": "We present a framework for interpretable continual learning (ICL). We show that explanations of previously performed tasks can be used to improve performance on future tasks. ICL generates a good explanation of a finished task, then uses this to focus attention on what is important when facing a new task. The ICL idea is general and may be applied to many continual learning approaches. Here we focus on the variational continual learning framework to take advantage of its flexibility and efficacy in overcoming catastrophic forgetting. We use saliency maps to provide explanations of performed tasks and propose a new metric to assess their quality. Experiments show that ICL achieves state-of-the-art results in terms of overall continual learning performance as measured by average classification accuracy, and also in terms of its explanations, which are assessed qualitatively and quantitatively using the proposed metric.", "target": ["In dem Beitrag wird ein interpretierbarer Rahmen für kontinuierliches Lernen entwickelt, in dem Erklärungen zu den abgeschlossenen Aufgaben verwendet werden, um die Aufmerksamkeit des Lernenden bei zukünftigen Aufgaben zu erhöhen, und in dem auch eine Erklärungsmetrik vorgeschlagen wird. ", "Die Autoren schlagen ein Framework für kontinuierliches Lernen vor, das auf Erklärungen für durchgeführte Klassifizierungen von zuvor gelernten Aufgaben basiert.", "In diesem Beitrag wird eine Erweiterung des Rahmens für kontinuierliches Lernen vorgeschlagen, die auf der Grundlage der bestehenden Methode des variablen kontinuierlichen Lernens und der Beweiskraft der Daten beruht."]} +{"source": "The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 Micikevicius et al. (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. 
In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output. We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half precision", "target": ["Trainingspipeline mit gemischter Genauigkeit unter Verwendung von 16-Bit Ganzzahlen auf Allzweck Hardware; SOTA-Genauigkeit für CNNs der ImageNet-Klasse; beste gemeldete Genauigkeit für ImageNet-1K Klassifizierungsaufgabe mit Training mit reduzierter Genauigkeit;", "In diesem Beitrag wird gezeigt, dass eine sorgfältige Implementierung der dynamischen Festkommaberechnung mit gemischter Genauigkeit den neuesten Stand der Technik unter Verwendung eines Deep Learning Modells mit reduzierter Genauigkeit und einer 16-Bit Ganzzahldarstellung erreichen kann.", "Schlägt ein \"dynamisches Festkomma\" Schema vor, das den Exponententeil für einen Tensor teilt, und entwickelt Verfahren, um NN-Berechnungen mit diesem Format durchzuführen, und demonstriert dies für Training mit begrenzter Genauigkeit."]} +{"source": "In this paper we introduce a new speech recognition system, leveraging a simple letter-based ConvNet acoustic model. The acoustic model requires only audio transcription for training -- no alignment annotations, nor any forced alignment step is needed. At inference, our decoder takes only a word list and a language model, and is fed with letter scores from the acoustic model -- no phonetic word lexicon is needed. Key ingredients for the acoustic model are Gated Linear Units and high dropout. We show near state-of-the-art results in word error rate on the LibriSpeech corpus with MFSC features, both on the clean and other configurations.\n", "target": ["Ein buchstabenbasiertes akustisches ConvNet-Modell führt zu einer einfachen und konkurrenzfähigen Spracherkennungspipeline.", "Diese Arbeit wendet Gated Convolutional Neural Networks auf die Spracherkennung an, wobei das Trainingskriterium ASG verwendet wird."]} +{"source": "Generative adversarial networks (GANs) are a powerful framework for generative tasks. However, they are difficult to train and tend to miss modes of the true data generation process. Although GANs can learn a rich representation of the covered modes of the data in their latent space, the framework misses an inverse mapping from data to this latent space. We propose Invariant Encoding Generative Adversarial Networks (IVE-GANs), a novel GAN framework that introduces such a mapping for individual samples from the data by utilizing features in the data which are invariant to certain transformations. Since the model maps individual samples to the latent space, it naturally encourages the generator to cover all modes. 
We demonstrate the effectiveness of our approach in terms of generative performance and learning rich representations on several datasets including common benchmark image generation tasks.", "target": ["Ein neuartiger GAN-Rahmen, der transformationsinvariante Merkmale nutzt, um umfangreiche Repräsentationen und starke Generatoren zu erlernen.", "Schlägt ein modifiziertes GAN-Ziel vor, das aus einem klassischen GAN-Term und einem invarianten Codierungsterm besteht.", "In diesem Beitrag wird das IVE-GAN vorgestellt, ein Modell, das einen Encoder in das Generative Adversarial Network Framework einführt."]} +{"source": "We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters as well as the latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model.", "target": ["Wir schlagen eine Methode zum Erlernen latenter Abhängigkeitsstrukturen in variablen Autoencodern vor.", "Verwendet eine Matrix binärer Zufallsvariablen zur Erfassung von Abhängigkeiten zwischen latenten Variablen in einem hierarchischen tiefen generativen Modell.", "In dieser Arbeit wird ein VAE-Ansatz vorgestellt, bei dem während des Trainings eine Abhängigkeitsstruktur auf der latenten Variable gelernt wird.", "Die Autoren schlagen vor, den latenten Raum einer VAE um eine autoregressive Struktur zu erweitern, um die Aussagekraft sowohl des Inferenznetzwerks als auch des latenten Priors zu verbessern."]} +{"source": "Many real-world time series, such as in activity recognition, finance, or climate science, have changepoints where the system's structure or parameters change. Detecting changes is important as they may indicate critical events. However, existing methods for changepoint detection face challenges when (1) the patterns of change cannot be modeled using simple and predefined metrics, and (2) changes can occur gradually, at multiple time-scales. To address this, we show how changepoint detection can be treated as a supervised learning problem, and propose a new deep neural network architecture that can efficiently identify both abrupt and gradual changes at multiple scales. Our proposed method, pyramid recurrent neural network (PRNN), is designed to be scale-invariant, by incorporating wavelets and pyramid analysis techniques from multi-scale signal processing. 
Through experiments on synthetic and real-world datasets, we show that PRNN can detect abrupt and gradual changes with higher accuracy than the state of the art and can extrapolate to detect changepoints at novel timescales that have not been seen in training.", "target": ["Wir stellen eine skaleninvariante neuronale Netzwerkarchitektur für die Erkennung von Veränderungspunkten in multivariaten Zeitreihen vor.", "Die Arbeit nutzt das Konzept der Wavelet-Transformation innerhalb einer tiefen Architektur, um die Erkennung von Änderungspunkten zu lösen.", "In diesem Beitrag wird ein pyramidenbasiertes neuronales Netz vorgeschlagen und auf 1D-Signale angewendet, deren zugrunde liegende Prozesse auf verschiedenen Zeitskalen ablaufen, wobei die Aufgabe darin besteht, Veränderungen zu erkennen."]} +{"source": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We focus on backtracking search algorithms for quantified Boolean logics, which already can solve formulas of impressive size - up to 100s of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For challenging problems, the heuristic learned through our approach reduces execution time by a factor of 10 compared to the existing handwritten heuristics.", "target": ["RL findet bessere Heuristiken für automatische Schlussfolgerungsalgorithmen.", "Ziel ist es, eine Heuristik für einen Backtracking-Suchalgorithmus unter Verwendung von Reinforcement Learning zu erlernen, und ein Modell vorzuschlagen, das Graph-neuronale Netze verwendet, um die Einbettung von Wörtern und Klauseln zu erzeugen und sie zur Vorhersage der Qualität jedes Worts zu verwenden, um die Wahrscheinlichkeit jeder Aktion zu bestimmen.", "Die Arbeit schlägt einen Ansatz zum automatischen Lernen von Variablenauswahlheuristiken für QBF unter Verwendung von Deep Learning vor."]} +{"source": "We consider the question of how to assess generative adversarial networks, in particular with respect to whether or not they generalise beyond memorising the training data. We propose a simple procedure for assessing generative adversarial network performance based on a principled consideration of what the actual goal of generalisation is. Our approach involves using a test set to estimate the Wasserstein distance between the generative distribution produced by our procedure, and the underlying data distribution. We use this procedure to assess the performance of several modern generative adversarial network architectures. We find that this procedure is sensitive to the choice of ground metric on the underlying data space, and suggest a choice of ground metric that substantially improves performance. We finally suggest that attending to the ground metric used in Wasserstein generative adversarial network training may be fruitful, and outline a concrete pathway towards doing so.", "target": ["Beurteilen Sie, ob Ihr GAN tatsächlich etwas anderes tut, als sich die Trainingsdaten zu merken oder nicht.", "Ziel ist es, ein Qualitätsmaß/einen Test für GANs bereitzustellen, und es wird vorgeschlagen, die aktuelle Annäherung an eine von einem GAN gelernte Verteilung zu bewerten, indem der Wasserstein Abstand zwischen zwei Verteilungen, die aus einer Summe von Diracs bestehen, als Basisleistung verwendet wird. 
", "In diesem Beitrag wird ein Verfahren zur Bewertung der Leistung von GANs vorgeschlagen, bei dem der Beobachtungsschlüssel erneut berücksichtigt wird. Das Verfahren wird verwendet, um die aktuellen GANs zu testen und zu verbessern."]} +{"source": "The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x * sigmoid(beta * x), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.", "target": ["Wir verwenden Suchtechniken, um neue Aktivierungsfunktionen zu entdecken, und unsere beste entdeckte Aktivierungsfunktion, f(x) = x * sigmoid(beta * x), übertrifft ReLU bei einer Reihe von anspruchsvollen Aufgaben wie ImageNet.", "Schlägt einen auf Reinforcement Learning basierenden Ansatz zum Auffinden von Nichtlinearität vor, indem er Kombinationen aus einer Reihe von unären und binären Operatoren durchsucht.", "In dieser Arbeit wird das Reinforcement Learning genutzt, um die Kombination einer Reihe von unären und binären Funktionen zu suchen, die zu einer neuen Aktivierungsfunktion führen.", "Der Autor verwendet Reinforcement Learning, um neue potenzielle Aktivierungsfunktionen aus einer Vielzahl von möglichen Kandidaten zu finden. "]} +{"source": "Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of high amounts of features. These networks\n typically rely on a variety of regularization and pruning techniques to converge\n to less redundant states. We introduce a novel bottom-up approach to expand\n representations in fixed-depth architectures. These architectures start from just a\n single feature per layer and greedily increase width of individual layers to attain\n effective representational capacities needed for a specific task. While network\n growth can rely on a family of metrics, we propose a computationally efficient\n version based on feature time evolution and demonstrate its potency in determining feature importance and a network’s effective capacity. We demonstrate how\n automatically expanded architectures converge to similar topologies that benefit\n from lesser amount of parameters or improved accuracy and exhibit systematic\n correspondence in representational complexity with the specified task. 
In contrast\n to conventional design patterns with a typical monotonic increase in the amount of\n features with increased depth, we observe that CNNs perform better when there are\n more learnable parameters in intermediate layers, with falloffs to earlier and later layers.", "target": ["Ein Bottom-up-Algorithmus, der CNNs, die mit einem Merkmal pro Schicht beginnen, zu Architekturen mit ausreichender Darstellungskapazität ausbaut.", "Es wird vorgeschlagen, die Tiefe der Merkmalszuordnungen eines voll Convolutional Neural Networks dynamisch anzupassen, ein Maß für die Selbstähnlichkeit zu formulieren und die Leistung zu steigern.", "Einführung einer einfachen korrelationsbasierten Metrik zur Messung der effektiven Nutzung von Filtern in neuronalen Netzen als Indikator für die effektive Kapazität.", "Ziel ist es, das Problem der Suche nach Deep Learning Architekturen durch inkrementelles Hinzufügen und Entfernen von Kanälen in den Zwischenschichten des Netzwerks zu lösen."]} +{"source": "Deep neural networks are almost universally trained with reverse-mode automatic differentiation (a.k.a. backpropagation). Biological networks, on the other hand, appear to lack any mechanism for sending gradients back to their input neurons, and thus cannot be learning in this way. In response to this, Scellier & Bengio (2017) proposed Equilibrium Propagation - a method for gradient-based training of neural networks which uses only local learning rules and, crucially, does not rely on neurons having a mechanism for back-propagating an error gradient. Equilibrium propagation, however, has a major practical limitation: inference involves doing an iterative optimization of neural activations to find a fixed-point, and the number of steps required to closely approximate this fixed point scales poorly with the depth of the network. In response to this problem, we propose Initialized Equilibrium Propagation, which trains a feedforward network to initialize the iterative inference procedure for Equilibrium propagation. This feed-forward network learns to approximate the state of the fixed-point using a local learning rule. After training, we can simply use this initializing network for inference, resulting in a learned feedforward network. Our experiments show that this network appears to work as well or better than the original version of Equilibrium propagation. This shows how we might go about training deep networks without using backpropagation.", "target": ["Wir trainieren ein Feedforward Netzwerk ohne Backpropagation, indem wir ein energiebasiertes Modell verwenden, um lokale Ziele zu liefern.", "Diese Arbeit zielt darauf ab, die iterative Inferenzprozedur in energiebasierten Modellen, die mit Equilibrium Propagation (EP) trainiert werden, zu beschleunigen, indem vorgeschlagen wird, ein Feedforward Netzwerk zu trainieren, um einen Fixpunkt des \"equilibrating network\" vorherzusagen. ", "Training eines separaten Netzes zur Initialisierung von rekurrenten Netzen, die mit Equilibrium Propagation trainiert wurden."]} +{"source": "We propose a novel generative model architecture designed to learn representations for images that factor out a single attribute from the rest of the representation. A single object may have many attributes which when altered do not change the identity of the object itself. Consider the human face; the identity of a particular person is independent of whether or not they happen to be wearing glasses. 
The attribute of wearing glasses can be changed without changing the identity of the person. However, the ability to manipulate and alter image attributes without altering the object identity is not a trivial task. Here, we are interested in learning a representation of the image that separates the identity of an object (such as a human face) from an attribute (such as 'wearing glasses'). We demonstrate the success of our factorization approach by using the learned representation to synthesize the same face with and without a chosen attribute. We refer to this specific synthesis process as image attribute manipulation. We further demonstrate that our model achieves competitive scores, with state of the art, on a facial attribute classification task.", "target": ["Lernen von Darstellungen für Bilder, die ein einzelnes Attribut herausrechnen.", "Diese Arbeit baut auf bedingten VAE GANs auf, um die Manipulation von Attributen während des Syntheseprozesses zu ermöglichen.", "Diese Arbeit schlägt ein generatives Modell vor, um die Repräsentation zu erlernen, die die Identität eines Objekts von einem Attribut trennen kann, und erweitert den Autoencoder adversarial durch Hinzufügen eines Hilfsnetzwerks."]} +{"source": "Stochastic video prediction models take in a sequence of image frames, and generate a sequence of consecutive future image frames. These models typically generate future frames in an autoregressive fashion, which is slow and requires the input and output frames to be consecutive. We introduce a model that overcomes these drawbacks by generating a latent representation from an arbitrary set of frames that can then be used to simultaneously and efficiently sample temporally consistent frames at arbitrary time-points. For example, our model can \"jump\" and directly sample frames at the end of the video, without sampling intermediate frames. Synthetic video evaluations confirm substantial gains in speed and functionality without loss in fidelity. We also apply our framework to a 3D scene reconstruction dataset. Here, our model is conditioned on camera location and can sample consistent sets of images for what an occluded region of a 3D scene might look like, even if there are multiple possibilities for what that region might contain. Reconstructions and videos are available at https://bit.ly/2O4Pc4R.\n", "target": ["Wir stellen ein Modell für eine konsistente 3D-Rekonstruktion und eine sprunghafte Videovorhersage vor, d.h. die Erzeugung von Bildern in mehreren Zeitschritten in der Zukunft, ohne Zwischenbilder zu erzeugen.", "In diesem Beitrag wird eine allgemeine Methode zur Modellierung indizierter Daten vorgeschlagen, bei der die Indexinformationen zusammen mit der Beobachtung in einem neuronalen Netz kodiert werden und dann die Beobachtungsbedingung anhand des Zielindexes dekodiert wird.", "Es wird vorgeschlagen, eine VAE zu verwenden, die das Eingangsvideo auf eine permutationsinvariante Weise kodiert, um zukünftige Bilder eines Videos vorherzusagen."]} +{"source": "The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn’t. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of the stochastic gradient, whereas the update magnitude is solely determined by an estimate of its relative variance. We disentangle these two aspects and analyze them in isolation, shedding light on ADAM ’s inner workings. 
Transferring the \"variance adaptation\" to momentum-SGD gives rise to a novel method, completing the practitioner’s toolbox for problems where ADAM fails.", "target": ["Analyse des beliebten Adam-Optimierers.", "In der Arbeit wird versucht, Adam auf der Grundlage der Varianzanpassung mit Momentum zu verbessern, indem zwei Algorithmen vorgeschlagen werden.", "Dieser Beitrag analysiert die Skaleninvarianz und die besondere Form der in Adam verwendeten Lernrate und argumentiert, dass Adams Update eine Kombination aus einem Sign-Update und einer varianzbasierten Lernrate ist.", "Die Arbeit teilt den ADAM-Algorithmus in zwei Komponenten: Stochastische Richtung anhand des Vorzeichens des Gradienten und adaptive Schrittweite mit relativer Varianz, und zwei Algorithmen werden vorgeschlagen, um jede von ihnen zu testen."]} +{"source": "We propose a novel framework to adaptively adjust the dropout rates for the deep neural network based on a Rademacher complexity bound. The state-of-the-art deep learning algorithms impose dropout strategy to prevent feature co-adaptation. However, choosing the dropout rates remains an art of heuristics or relies on empirical grid-search over some hyperparameter space. In this work, we show the network Rademacher complexity is bounded by a function related to the dropout rate vectors and the weight coefficient matrices. Subsequently, we impose this bound as a regularizer and provide a theoretically justified way to trade-off between model complexity and representation power. Therefore, the dropout rates and the empirical loss are unified into the same objective function, which is then optimized using the block coordinate descent algorithm. We discover that the adaptively adjusted dropout rates converge to some interesting distributions that reveal meaningful patterns. Experiments on the task of image and document classification also show our method achieves better performance compared to the state-of-the-art dropout algorithms.", "target": ["Wir schlagen ein neues Framework vor, um die Dropout-Raten für das tiefe neuronale Netz auf der Grundlage einer Rademacher-Komplexitätsgrenze adaptiv anzupassen.", "Die Autoren verbinden Dropout-Parameter mit einer Begrenzung der Rademacher-Komplexität des Netzes.", "Bezieht die Komplexität der Lernfähigkeit von Netzen auf die Dropout-Raten bei der Backpropagation."]} +{"source": "Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a body of recent studies. Among these, the so-called netgated architecture was proposed, which has demonstrated improved performances over the conventional convolutional neural networks (CNN). In this paper, we address several limitations of the baseline netgated architecture by proposing two further optimized architectures: a coarser-grained gated architecture employing (feature) group-level fusion weights and a two-stage gated architecture leveraging both the group-level and feature-level fusion weights. 
Using driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed gated architectures and also their robustness in the presence of sensor noise and failures.\n", "target": ["Es werden optimierte Gated Deep Learning Architekturen für die Sensorfusion vorgeschlagen.", "Die Autoren verbessern mehrere Einschränkungen der grundlegenden netgated-Architektur, indem sie eine grobkörnigere Gated-Fusion Architektur und eine zweistufige Gated-Fusion Architektur vorschlagen.", "Die Arbeit schlägt zwei Gated Deep Learning Architekturen für die Sensorfusion vor und zeigt durch die gruppierten Merkmale eine verbesserte Leistung, insbesondere bei zufälligen Sensorstörungen und Ausfällen."]} +{"source": "We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. Our theory shows that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range. Our theory leverages Laplace, Fourier, and Gegenbauer transforms and we derive new identities that may be of independent interest.", "target": ["Batch-Normalisierung verursacht explodierende Gradienten in einfachen Feedforward Netzen.", "Entwickelt eine Mean Field Theorie für Batch Normalisierung (BN) in vollständig verbundenen Netzwerken mit zufällig initialisierten Gewichten.", "Bietet eine dynamische Perspektive auf tiefe neuronale Netze unter Verwendung der Entwicklung der Kovarianzmatrix zusammen mit den Schichten."]} +{"source": "We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. 
Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs.", "target": ["Wir trainieren ein Graphennetz zur Vorhersage boolescher Erfüllbarkeit und zeigen, dass es lernt, nach Lösungen zu suchen, und dass die gefundenen Lösungen aus seinen Aktivierungen entschlüsselt werden können.", "Die Arbeit beschreibt eine allgemeine neuronale Netzarchitektur zur Vorhersage der Erfüllbarkeit.", "In diesem Beitrag wird die NeuroSAT-Architektur vorgestellt, die ein tiefes neuronales Netz zur Vorhersage der Erfüllbarkeit von CNF-Instanzen verwendet."]} +{"source": "Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines", "target": ["Ein neuronales Sequenzmodell, das lernt, Prognosen auf einem gerichteten Graphen zu erstellen.", "Die Arbeit schlägt die Diffusion Convolutional Recurrent Neural Network Architektur für das räumlich-zeitliche Verkehrsprognoseproblem vor.", "Es wird vorgeschlagen, ein Verkehrsprognosemodell unter Verwendung eines Diffusionsprozesses für Convolutional Recurrent Neural Networks zu erstellen, um die spatio-temporale Autokorrelation zu berücksichtigen."]} +{"source": "Obtaining reliable uncertainty estimates of neural network predictions is a long standing challenge. Bayesian neural networks have been proposed as a solution, but it remains open how to specify their prior. In particular, the common practice of a standard normal prior in weight space imposes only weak regularities, causing the function posterior to possibly generalize in unforeseen ways on inputs outside of the training distribution. We propose noise contrastive priors (NCPs) to obtain reliable uncertainty estimates. The key idea is to train the model to output high uncertainty for data points outside of the training distribution. NCPs do so using an input prior, which adds noise to the inputs of the current mini batch, and an output prior, which is a wide distribution given these inputs. NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs prevent overfitting outside of the training distribution and result in uncertainty estimates that are useful for active learning. 
We demonstrate the scalability of our method on the flight delays data set, where we significantly improve upon previously published results.", "target": ["Wir trainieren neuronale Netze so, dass sie bei verrauschten Eingaben unsicher sind, um übermäßige Vorhersagen außerhalb der Trainingsverteilung zu vermeiden.", "Es wird ein Ansatz zur Ermittlung von Unsicherheitsschätzungen für Vorhersagen in neuronalen Netzen vorgestellt, der eine gute Leistung bei der Quantifizierung der Vorhersageunsicherheit an Punkten außerhalb der Trainingsverteilung aufweist.", "Die Arbeit befasst sich mit dem Problem der Unsicherheitsabschätzung von neuronalen Netzen und schlägt vor, einen Bayes'schen Ansatz mit einem kontrastiven Prior zu verwenden."]} +{"source": "Convolutional neural networks (CNNs) were inspired by human vision and, in some settings, achieve a performance comparable to human object recognition. This has lead to the speculation that both systems use similar mechanisms to perform recognition. In this study, we conducted a series of simulations that indicate that there is a fundamental difference between human vision and CNNs: while object recognition in humans relies on analysing shape, CNNs do not have such a shape-bias. We teased apart the type of features selected by the model by modifying the CIFAR-10 dataset so that, in addition to containing objects with shape, the images concurrently contained non-shape features, such as a noise-like mask. When trained on these modified set of images, the model did not show any bias towards selecting shapes as features. Instead it relied on whichever feature allowed it to perform the best prediction -- even when this feature was a noise-like mask or a single predictive pixel amongst 50176 pixels. We also found that regularisation methods, such as batch normalisation or Dropout, did not change this behaviour and neither did past or concurrent experience with images from other datasets.", "target": ["Diese Studie hebt einen wesentlichen Unterschied zwischen dem menschlichen Sehen und CNNs hervor: Während die Objekterkennung beim Menschen auf der Analyse der Form beruht, haben CNNs keine solche Formvorliebe.", "Versucht anhand einer Reihe von gut konzipierten Experimenten nachzuweisen, dass CNNs, die für die Bildklassifizierung trainiert wurden, keine Formverzerrungen wie das menschliche Sehen kodieren.", "In diesem Beitrag wird die Tatsache hervorgehoben, dass CNNs nicht unbedingt lernen, Objekte anhand ihrer Form zu erkennen, und dass sie bei auf Rauschen basierenden Merkmalen überreagieren."]} +{"source": "The development of high-dimensional generative models has recently gained a great surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Therefore, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a high variety of views. 
Unlike many multi-view approaches, our model doesn't need any supervision on the views but only on the content. Compared to other conditional generation approaches that are mostly based on binary or categorical attributes, we make no such assumption about the factors of variations. Our model can be used on problems with a huge, potentially infinite, number of categories. We experiment it on four image datasets on which we demonstrate the effectiveness of the model and its ability to generalize.", "target": ["Wir beschreiben ein neuartiges generatives Modell mit mehreren Ansichten, das mehrere Ansichten desselben Objekts oder mehrere Objekte in derselben Ansicht generieren kann, ohne dass eine Kennzeichnung der Ansichten erforderlich ist.", "In diesem Beitrag wird eine GAN basierte Methode zur Bilderzeugung vorgestellt, die versucht, latente Variablen, die den Bildinhalt beschreiben, von denen zu trennen, die die Eigenschaften der Ansicht beschreiben.", "In diesem Beitrag wird eine GAN Architektur vorgeschlagen, die darauf abzielt, die zugrunde liegende Verteilung einer bestimmten Klasse in \"Inhalt\" und \"Ansicht\" zu zerlegen.", "Es wird ein neues generatives Modell auf der Grundlage des Generative Adversarial Network (GAN) vorgeschlagen, das den Inhalt und die Ansicht von Objekten ohne Überwachung der Ansicht entkoppelt und GMV zu einem bedingten generativen Modell erweitert, das ein Eingabebild nimmt und verschiedene Ansichten des Objekts im Eingabebild erzeugt. "]} +{"source": "The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.", "target": ["Es wird ein verlustsensitiver Gewichtsquantisierungsalgorithmus vorgeschlagen, der seine Auswirkungen auf den Verlust direkt berücksichtigt.", "Schlägt eine Methode zur Komprimierung von Netzen durch Gewichtsternarisierung vor. ", "Die Arbeit schlägt eine neue Methode vor, DNNs mit quantisierten Gewichten zu trainieren, indem die Quantisierung als Einschränkung in einen proximalen Quasi-Newton-Algorithmus aufgenommen wird, der gleichzeitig eine Skalierung für die quantisierten Werte lernt.", "Die Arbeit erweitert das verlustbewusste Gewichts-Binarisierungsschema auf Ternarisierung und beliebige m-Bit Quantisierung und demonstriert eine vielversprechende Leistung."]} +{"source": "In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. 
In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options.", "target": ["Wir entwickeln eine neuartige Gradientenmethode für das automatische Lernen von Strategien mit Optionen unter Verwendung eines differenzierbaren Inferenzschritts.", "In der Arbeit wird eine neue Gradiententechnik für das Lernen von Optionen vorgestellt, bei der eine einzige Stichprobe zur Aktualisierung aller Optionen verwendet werden kann.", "Schlägt eine Methode zum Lernen von Optionen bei komplexen kontinuierlichen Problemen vor."]} +{"source": "The paper, interested in unsupervised feature selection, aims to retain the features best accounting for the local patterns in the data. The proposed approach, called Locally Linear Unsupervised Feature Selection, relies on a dimensionality reduction method to characterize such patterns; each feature is thereafter assessed according to its compliance w.r.t. the local patterns, taking inspiration from Locally Linear Embedding (Roweis and Saul, 2000). The experimental validation of the approach on the scikit-feature benchmark suite demonstrates its effectiveness compared to the state of the art.", "target": ["Unüberwachte Merkmalsauswahl durch Erfassung der lokalen linearen Struktur von Daten.", "Schlägt eine lokal lineare unüberwachte Merkmalsauswahl vor.", "In dem Papier wird die LLUFS-Methode für die Merkmalsauswahl vorgeschlagen."]} +{"source": "Humans can understand and produce new utterances effortlessly, thanks to their systematic compositional skills. Once a person learns the meaning of a new verb \"dax,\" he or she can immediately understand the meaning of \"dax twice\" or \"sing and dax.\" In this paper, we introduce the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences. We then test the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods. We find that RNNs can generalize well when the differences between training and test commands are small, so that they can apply \"mix-and-match\" strategies to solve the task. However, when generalization requires systematic compositional skills (as in the \"dax\" example above), RNNs fail spectacularly. We conclude with a proof-of-concept experiment in neural machine translation, supporting the conjecture that lack of systematicity is an important factor explaining why neural networks need very large training sets.", "target": ["Anhand einer einfachen sprachgesteuerten Navigationsaufgabe untersuchen wir die kompositorischen Fähigkeiten moderner rekurrenter seq2seq-Netzwerke.", "Dieser Beitrag konzentriert sich auf die kompositorischen Fähigkeiten moderner Sequenz-zu-Sequenz RNNs und zeigt die Schwächen der aktuellen seq2seq RNN-Architekturen auf.", "Die Arbeit analysiert die Kompositionsfähigkeiten von RNNs, insbesondere die Verallgemeinerungsfähigkeit der RNNs auf zufällige Teilmengen von SCAN-Befehlen, auf längere SCAN-Befehle und auf die Komposition über primitive Befehle. 
", "Die Autoren stellen einen neuen Datensatz vor, der die Analyse eines Seq2Seq Lernfalls erleichtert."]} +{"source": "This paper addresses the challenging problem of retrieval and matching of graph structured objects, and makes two key contributions. First, we demonstrate how Graph Neural Networks (GNN), which have emerged as an effective model for various supervised prediction problems defined on structured data, can be trained to produce embedding of graphs in vector spaces that enables efficient similarity reasoning. Second, we propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning on the pair through a new cross-graph attention-based matching mechanism. We demonstrate the effectiveness of our models on different domains including the challenging problem of control-flow-graph based function similarity search that plays an important role in the detection of vulnerabilities in software systems. The experimental analysis demonstrates that our models are not only able to exploit structure in the context of similarity learning but they can also outperform domain-specific baseline systems that have been carefully hand-engineered for these problems.", "target": ["Wir befassen uns mit dem Problem des Ähnlichkeitslernens für strukturierte Objekte mit Anwendungen insbesondere im Bereich der Computersicherheit und schlagen ein neues Modell für Graph-Matching Netzwerke vor, das sich bei dieser Aufgabe auszeichnet.", "Die Autoren stellen ein Graph-Matching Netzwerk für die Wiederauffindung und das Matching von graphisch strukturierten Objekten vor.", "Die Autoren gehen das Problem des Graphenabgleichs an, indem sie eine Erweiterung der Grapheneinbettungsnetze vorschlagen.", "Die Autoren stellen zwei Methoden zum Erlernen einer Ähnlichkeitsbewertung zwischen Graphenpaaren vor und zeigen die Vorteile der Einführung von Ideen aus dem Graphenabgleich in neuronale Netze."]} +{"source": "Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. As a result, we build an encoder-decoder architecture with an RNN encoder and a CNN decoder, and we show that neither an autoregressive decoder nor an RNN decoder is required. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabeled corpora, and in both cases transferability is evaluated on a set of downstream language understanding tasks. 
We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.", "target": ["Wir haben ein RNN-CNN Encoder-Decoder Modell für schnelles unbeaufsichtigtes Lernen von Satzrepräsentationen vorgeschlagen.", "Modifikationen des Skip-thought Frameworks für das Lernen von Satzeinbettungen.", "In diesem Beitrag wird ein neues hybrides Design für RNN-Encoder und CNN-Decoder vorgestellt, das beim Pretraining von Encodern keinen autoregressiven Decoder benötigt.", "Die Autoren erweitern Skip-thought, indem sie nur einen Zielsatz mit einem CNN-Decoder dekodieren."]} +{"source": "Building on the success of deep learning, two modern approaches to learn a probability model of the observed data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs). VAEs consider an explicit probability model for the data and compute a generative distribution by maximizing a variational lower-bound on the log-likelihood function. GANs, however, compute a generative model by minimizing a distance between observed and generated probability distributions without considering an explicit model for the observed data. The lack of having explicit probability models in GANs prohibits computation of sample likelihoods in their frameworks and limits their use in statistical inference problems. In this work, we show that an optimal transport GAN with the entropy regularization can be viewed as a generative model that maximizes a lower-bound on average sample likelihoods, an approach that VAEs are based on. In particular, our proof constructs an explicit probability model for GANs that can be used to compute likelihood statistics within GAN's framework. Our numerical results on several datasets demonstrate consistent trends with the proposed theory.", "target": ["Ein statistischer Ansatz zur Berechnung von Beispielwahrscheinlichkeiten in generativen adversarial Netzen.", "Zeigt, dass WGAN mit entropischer Regularisierung eine untere Schranke für die Wahrscheinlichkeit der beobachteten Datenverteilung maximiert.", "Die Autoren behaupten, dass es möglich ist, die obere Schranke eines entropie-regulierten optimalen Transports zu nutzen, um ein Maß für die \"Stichprobenwahrscheinlichkeit\" zu finden."]} +{"source": "We introduce geomstats, a Python package for Riemannian modelization and optimization over manifolds such as hyperspheres, hyperbolic spaces, SPD matrices or Lie groups of transformations. Our contribution is threefold. First, geomstats allows the flexible modeling of many a machine learning problem through efficient and extensively unit-tested implementations of these manifolds, as well as the set of useful Riemannian metrics, exponential and logarithm maps that we provide. Moreover, the wide choice of loss functions and our implementation of the corresponding gradients allow fast and easy optimization over manifolds. Finally, geomstats is the only package to provide a unified framework for Riemannian geometry, as the operations implemented in geomstats are available with different computing backends (numpy, tensorflow and keras), as well as with a GPU-enabled mode, thus considerably facilitating the application of Riemannian geometry in machine learning. 
In this paper, we present geomstats through a review of the utility and advantages of manifolds in machine learning, using the concrete examples that they span to show the efficiency and practicality of their implementation using our package", "target": ["Wir stellen geomstats vor, ein effizientes Python-Paket für Riemannsche Modellierung und Optimierung über Mannigfaltigkeiten, das sowohl mit numpy als auch mit tensorflow kompatibel ist.", "Die Arbeit stellt das Softwarepaket geomstats vor, das die einfache Nutzung von Riemannschen Mannigfaltigkeiten und Metriken in maschinellen Lernmodellen ermöglicht.", "Schlägt ein Python-Paket für Optimierung und Anwendungen auf Riemannschen Mannigfaltigkeiten vor und hebt die Unterschiede zwischen dem Geomstats-Paket und anderen Paketen hervor.", "Stellt eine geometrische Toolbox, Geomstats, für maschinelles Lernen auf Riemannschen Mannigfaltigkeiten vor."]} +{"source": "We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference. The great success of DNNs motivates the pursuing of lightweight models for the deployment onto embedded devices. However, most of the previous studies optimize for inference while neglect training or even complicate it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights in inference; (ii) the dynamic activation makes previous sparse acceleration via one-off optimization on fixed weight invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small amount of neurons with high selectivity at each iteration via a dimensionreduction search and obtains the BN compatibility via a double-mask selection. Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.", "target": ["Wir konstruieren einen dynamischen spärlichen Graphen mittels Dimensionsreduktionssuche, um die Rechen- und Speicherkosten sowohl beim DNN-Training als auch bei der Inferenz zu reduzieren.", "Die Autoren schlagen vor, einen dynamischen, spärlichen Berechnungsgraphen zu verwenden, um die Speicher- und Zeitkosten in tiefen neuronalen Netzen (DNN) zu reduzieren.", "In diesem Beitrag wird eine Methode zur Beschleunigung des Trainings und der Inferenz von tiefen neuronalen Netzen durch dynamisches Pruning des Berechnungsgraphen vorgeschlagen."]} +{"source": "Efficient exploration remains a major challenge for reinforcement learning. One reason is that the variability of the returns often depends on the current state and action, and is therefore heteroscedastic. Classical exploration strategies such as upper confidence bound algorithms and Thompson sampling fail to appropriately account for heteroscedasticity, even in the bandit setting. Motivated by recent findings that address this issue in bandits, we propose to use Information-Directed Sampling (IDS) for exploration in reinforcement learning. As our main contribution, we build on recent advances in distributional reinforcement learning and propose a novel, tractable approximation of IDS for deep Q-learning. The resulting exploration strategy explicitly accounts for both parametric uncertainty and heteroscedastic observation noise. 
We evaluate our method on Atari games and demonstrate a significant improvement over alternative approaches.", "target": ["Wir entwickeln eine praktische Erweiterung des Information-Directed Sampling für Reinforcement Learning, die parametrische Unsicherheit und Heteroskedastizität in der Renditeverteilung für Exploration berücksichtigt.", "Die Autoren schlagen einen Weg vor, Information-Directed Sampling auf Reinforcement Learning zu erweitern, indem sie zwei Arten von Unsicherheiten kombinieren, um eine einfache, auf IDS basierende Explorationsstrategie zu erhalten. ", "Diese Arbeit untersucht sophistische Explorationsansätze für Reinforcement Learning, die auf Information Direct Sampling und auf Distributional Reinforcement Learning aufbauen."]} +{"source": "We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent's policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting.\n", "target": ["Verwendung eines neuronalen Graphen Netzes zur Modellierung struktureller Informationen der Agenten, um die Politik und die Übertragbarkeit zu verbessern.", "Eine Methode zur Darstellung und zum Erlernen strukturierter Strategien für kontinuierliche Steuerungsaufgaben unter Verwendung neuronaler Graphennetze.", "In der Vorlage wird vorgeschlagen, zusätzliche Strukturen in Probleme des Reinforcement Learnings einzubeziehen, insbesondere die Struktur der Morphologie des Agenten.", "Vorschlag für eine Anwendung graph neuronaler Netze zum Erlernen von Strategien zur Steuerung von \"Tausendfüßler\"-Robotern unterschiedlicher Länge."]} +{"source": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. \n In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. 
Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.", "target": ["In diesem Beitrag wird ein hierarchischer Rahmen für das Reinforcement Learning vorgestellt, der auf deterministischen Optionsstrategien und der Maximierung der gegenseitigen Information basiert. ", "Schlägt einen HRL-Algorithmus vor, der versucht, Optionen zu lernen, die ihre gegenseitige Information mit der Zustands-Aktions Dichte unter optimalen Regeln maximieren.", "In diesem Beitrag wird ein HRL-System vorgeschlagen, bei dem die wechselseitige Information der latenten Variablen und der Zustands-Aktions-Paare annähernd maximiert wird.", "Schlägt ein Kriterium vor, das darauf abzielt, die gegenseitige Information zwischen Optionen und Zustands-Aktions Paaren zu maximieren, und zeigt empirisch, dass die gelernten Optionen den Zustands-Aktions Raum zerlegen, nicht aber den Zustandsraum. "]} +{"source": "Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and consequently limited their applications. To ameliorate this problem, we introduce the use of hierarchical interpretations to explain DNN predictions through our proposed method: agglomerative contextual decomposition (ACD). Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive. We introduce ACD using examples from Stanford Sentiment Treebank and ImageNet, in order to diagnose incorrect predictions, identify dataset bias, and extract polarizing phrases of varying lengths. Through human experiments, we demonstrate that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN's outputs. We also find that ACD's hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise.", "target": ["Wir führen hierarchische lokale Interpretationen ein und validieren sie. Dies ist die erste Technik, die automatisch nach wichtigen Interaktionen für individuelle Vorhersagen von LSTMs und CNNs sucht und diese anzeigt.", "Ein neuartiger Ansatz zur Erklärung der Vorhersagen neuronaler Netze durch das Erlernen hierarchischer Darstellungen von Gruppen von Eingangsmerkmalen und deren Beitrag zur endgültigen Vorhersage.", "Erweitert eine bestehende Merkmalsinterpretationsmethode für LSTMs auf allgemeinere DNNs und führt ein hierarchisches Clustering der Eingangsmerkmale und die Beiträge jedes Clusters zur endgültigen Vorhersage ein.", "In diesem Beitrag wird eine hierarchische Erweiterung der kontextuellen Dekomposition vorgeschlagen."]} +{"source": "Principal Filter Analysis (PFA) is an easy to implement, yet effective method for neural network compression. PFA exploits the intrinsic correlation between filter responses within network layers to recommend a smaller network footprint. 
We propose two compression algorithms: the first allows a user to specify the proportion of the original spectral energy that should be preserved in each layer after compression, while the second is a heuristic that leads to a parameter-free approach that automatically selects the compression used at each layer. Both algorithms are evaluated against several architectures and datasets, and we show considerable compression rates without compromising accuracy, e.g., for VGG-16 on CIFAR-10, CIFAR-100 and ImageNet, PFA achieves a compression rate of 8x, 3x, and 1.4x with an accuracy gain of 0.4%, 1.4% points, and 2.4% respectively. In our tests we also demonstrate that networks compressed with PFA achieve an accuracy that is very close to the empirical upper bound for a given compression ratio. Finally, we show how PFA is an effective tool for simultaneous compression and domain adaptation.", "target": ["Wir schlagen eine einfach zu implementierende, aber effektive Methode zur Komprimierung neuronaler Netze vor. PFA nutzt die intrinsische Korrelation zwischen den Filterantworten innerhalb der Netzwerkschichten, um einen kleineren Netzwerkfußabdruck zu empfehlen.", "Schlägt vor, Convolutional Networks durch Analyse der beobachteten Korrelation zwischen den Filtern einer Schicht zu prunen, ausgedrückt durch das Eigenwertspektrum ihrer Kovarianzmatrix.", "In diesem Beitrag wird ein Ansatz zur Komprimierung neuronaler Netze vorgestellt, bei dem die Korrelation der Filterantworten in jeder Schicht durch zwei Strategien berücksichtigt wird.", "In diesem Papier wird eine auf der Spektralanalyse basierende Komprimierungsmethode vorgeschlagen."]} +{"source": "We propose a method to efficiently learn diverse strategies in reinforcement learning for query reformulation in the tasks of document retrieval and question answering. In the proposed framework an agent consists of multiple specialized sub-agents and a meta-agent that learns to aggregate the answers from sub-agents to produce a final answer. Sub-agents are trained on disjoint partitions of the training data, while the meta-agent is trained on the full training set. Our method makes learning faster, because it is highly parallelizable, and has better generalization performance than strong baselines, such as an ensemble of agents trained on the full data. We show that the improved performance is due to the increased diversity of reformulation strategies.", "target": ["Mehrere verschiedene Agenten für die Umformulierung von Suchanfragen, die mit Reinforcement Learning trainiert wurden, um Suchmaschinen zu verbessern.", "Parallelisierung der Ensemble-Methode beim Reinforcement Learning für die Umformulierung von Anfragen, Beschleunigung des Trainings und Verbesserung der Vielfalt der gelernten Formulierungen.", "Die Autoren schlagen vor, mehrere verschiedene Agenten zu trainieren, jeder mit einer anderen Teilmenge der Trainingsmenge.", "Die Autoren schlagen einen Ensemble-Ansatz für die Reformulierung von Anfragen vor."]} +{"source": "Network Embeddings (NEs) map the nodes of a given network into $d$-dimensional Euclidean space $\\mathbb{R}^d$. Ideally, this mapping is such that 'similar' nodes are mapped onto nearby points, such that the NE can be used for purposes such as link prediction (if 'similar' means being 'more likely to be connected') or classification (if 'similar' means 'being more likely to have the same label'). 
In recent years various methods for NE have been introduced, all following a similar strategy: defining a notion of similarity between nodes (typically some distance measure within the network), a distance measure in the embedding space, and a loss function that penalizes large distances for similar nodes and small distances for dissimilar nodes.\n\n A difficulty faced by existing methods is that certain networks are fundamentally hard to embed due to their structural properties: (approximate) multipartiteness, certain degree distributions, assortativity, etc. To overcome this, we introduce a conceptual innovation to the NE literature and propose to create \\emph{Conditional Network Embeddings} (CNEs); embeddings that maximally add information with respect to given structural properties (e.g. node degrees, block densities, etc.). We use a simple Bayesian approach to achieve this, and propose a block stochastic gradient descent algorithm for fitting it efficiently.\n\n We demonstrate that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, and this without adding significant mathematical or computational complexity. Finally, we illustrate the potential of CNE for network visualization.", "target": ["Wir stellen eine Methode zur Einbettung von Netzwerken vor, die Vorabinformationen über das Netzwerk berücksichtigt und dadurch eine bessere empirische Leistung erzielt.", "Die Arbeit schlägt vor, eine A-priori-Verteilung zu verwenden, um die Netzwerkeinbettung einzuschränken. Für die Formulierung dieser Arbeit wurden sehr eingeschränkte Gaußsche Verteilungen verwendet.", "Schlägt vor, unbeaufsichtigte Knoteneinbettungen zu lernen, indem die strukturellen Eigenschaften von Netzwerken berücksichtigt werden."]} +{"source": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the \"Adam-type\", includes the popular algorithms such as Adam, AMSGrad, AdaGrad. Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order $O(\\log{T}/\\sqrt{T})$ for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.", "target": ["Wir analysieren die Konvergenz von Algorithmen des Adam-Typs und stellen milde hinreichende Bedingungen zur Verfügung, um ihre Konvergenz zu garantieren. 
Wir zeigen auch, dass eine Verletzung der Bedingungen dazu führen kann, dass ein Algorithmus divergiert.", "Präsentiert eine Konvergenzanalyse im nicht-konvexen Umfeld für eine Familie von Optimierungsalgorithmen.", "In diesem Beitrag wird die Konvergenzbedingung von Optimierern des Adam-Typs bei ungebundenen nicht-konvexen Optimierungsproblemen untersucht."]} +{"source": "This research paper describes a simplistic architecture named as AANN: Absolute Artificial Neural Network, which can be used to create highly interpretable representations of the input data. These representations are generated by penalizing the learning of the network in such a way that those learned representations correspond to the respective labels present in the labelled dataset used for supervised training; thereby, simultaneously giving the network the ability to classify the input data. The network can be used in the reverse direction to generate data that closely resembles the input by feeding in representation vectors as required. This research paper also explores the use of mathematical abs (absolute valued) functions as activation functions which constitutes the core part of this neural network architecture. Finally the results obtained on the MNIST dataset by using this technique are presented and discussed in brief.", "target": ["Auto-Encoder mit gebundenen Gewichten und abs-Funktion als Aktivierungsfunktion, lernt die Klassifizierung in Vorwärtsrichtung und die Regression in Rückwärtsrichtung aufgrund einer speziell definierten Kostenfunktion.", "In dem Beitrag wird die Verwendung der Absolutwert-Aktivierungsfunktion in einer Autoencoder-Architektur mit einem zusätzlichen überwachten Lernterm in der Zielfunktion vorgeschlagen.", "In dieser Arbeit wird ein reversibles Netz mit dem absoluten Wert als Aktivierungsfunktion eingeführt."]} +{"source": "Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. 
We open-source our trained models, experiments, and source code.", "target": ["Wir schlagen ein auf Transformer basierendes Modell zur Extraktion von Beziehungen vor, das anstelle von expliziten linguistischen Merkmalen vortrainierte Sprachrepräsentationen verwendet.", "Es wird ein Transformer basiertes Modell zur Extraktion von Beziehungen vorgestellt, das ein Vortraining auf unbeschriftetem Text mit einem Sprachmodellierungsziel nutzt.", "Dieser Artikel beschreibt eine neuartige Anwendung von Transformer-Netzwerken zur Extraktion von Beziehungen.", "Die Arbeit stellt eine auf Transformer basierende Architektur zur Extraktion von Beziehungen vor, die an zwei Datensätzen evaluiert wurde."]} +{"source": "Neural networks have recently had a lot of success for many tasks. However, neural\n network architectures that perform well are still typically designed manually\n by experts in a cumbersome trial-and-error process. We propose a new method\n to automatically search for well-performing CNN architectures based on a simple\n hill climbing procedure whose operators apply network morphisms, followed\n by short optimization runs by cosine annealing. Surprisingly, this simple method\n yields competitive results, despite only requiring resources in the same order of\n magnitude as training a single network. E.g., on CIFAR-10, our method designs\n and trains networks with an error rate below 6% in only 12 hours on a single GPU;\n training for one day reduces this error further, to almost 5%.", "target": ["Wir schlagen ein einfaches und effizientes Verfahren zur Architektursuche für Convolutional Neural Networks vor.", "Es wird eine Suchmethode für neuronale Architekturen vorgeschlagen, die bei CIFAR10 eine Genauigkeit erreicht, die dem neuesten Stand der Technik entspricht, und die viel weniger Rechenressourcen benötigt.", "Es wird eine Methode zur Suche von Architekturen für neuronale Netze zur gleichen Zeit wie das Training vorgestellt, die eine erhebliche Zeitersparnis beim Training und bei der Architektursuche mit sich bringt.", "Schlägt eine Variante der neuronalen Architektursuche unter Verwendung von Netzwerkmorphismen vor, um einen Suchraum unter Verwendung von CNN-Architekturen zu definieren, der die CIFAR-Bildklassifizierungsaufgabe erfüllt."]} +{"source": "We propose GraphGAN - the first implicit generative model for graphs that enables to mimic real-world networks.\n We pose the problem of graph generation as learning the distribution of biased random walks over a single input graph.\n Our model is based on a stochastic neural network that generates discrete output samples, and is trained using the Wasserstein GAN objective. GraphGAN enables us to generate sibling graphs, which have similar properties yet are not exact replicas of the original graph. Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph's properties. We discover that sampling from certain regions of the latent space leads to varying properties of the output graphs, with smooth transitions between them. 
Strong generalization properties of GraphGAN are highlighted by its competitive performance in link prediction as well as promising results on node classification, even though not specifically trained for these tasks.", "target": ["Verwendung von GANs zur Erzeugung von Graphen über Random Walks.", "Die Autoren schlagen ein generatives Modell von Random Walks auf Graphen vor, das modellagnostisches Lernen, kontrollierbare Anpassung und die Erzeugung von Ensemble-Graphen ermöglicht.", "Schlägt eine WGAN-Formulierung zur Erzeugung von Graphen auf der Grundlage von Random Walks unter Verwendung von Knoteneinbettungen und einer LSTM-Architektur zur Modellierung vor."]} +{"source": "The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.", "target": ["Wir schlagen vor, ein Problem der gleichzeitigen Klassifizierung und Neuheitserkennung im GAN Framework zu lösen.", "Er schlägt ein GAN vor, um Klassifizierung und Neuheitserkennung zu vereinen.", "In diesem Beitrag wird eine Methode zur Neuheitserkennung vorgestellt, die auf einem Mehrklassen GAN basiert, das auf die Ausgabe von Bildern trainiert wird, die aus einer Mischung von nominalen und neuartigen Verteilungen erzeugt werden.", "In der Arbeit wird ein GAN für die Erkennung von Neuheiten vorgeschlagen, das einen Mischungsgenerator mit Verlusten bei der Merkmalsanpassung verwendet."]} +{"source": " Verifying a person's identity based on their voice is a challenging, real-world problem in biometric security. A crucial requirement of such speaker verification systems is to be domain robust. Performance should not degrade even if speakers are talking in languages not seen during training. To this end, we present a flexible and interpretable framework for learning domain invariant speaker embeddings using Generative Adversarial Networks. We combine adversarial training with an angular margin loss function, which encourages the speaker embedding model to be discriminative by directly optimizing for cosine similarity between classes. We are able to beat a strong baseline system using a cosine distance classifier and a simple score-averaging strategy. Our results also show that models with adversarial adaptation perform significantly better than unadapted models. In an attempt to better understand this behavior, we quantitatively measure the degree of invariance induced by our proposed methods using Maximum Mean Discrepancy and Frechet distances. 
Our analysis shows that our proposed adversarial speaker embedding models significantly reduce the distance between source and target data distributions, while performing similarly on the former and better on the latter.", "target": ["Die Leistung der Sprecherverifikation kann durch die Anpassung des Modells an In-Domain-Daten mit Hilfe generativer adversarialer Netze erheblich verbessert werden. Außerdem kann die Anpassung auf unüberwachte Weise erfolgen.", "Vorschlag einer Reihe von GAN-Varianten für die Aufgabe der Sprechererkennung unter der Bedingung, dass die Domäne nicht übereinstimmt."]} +{"source": "Learning disentangling representations of the independent factors of variations that explain the data in an unsupervised setting is still a major challenge. In the following paper we address the task of disentanglement and introduce a new state-of-the-art approach called Non-synergistic variational Autoencoder (Non-Syn VAE). Our model draws inspiration from population coding, where the notion of synergy arises when we describe the encoded information by neurons in the form of responses from the stimuli. If those responses convey more information together than separate as independent sources of encoding information, they are acting synergetically. By penalizing the synergistic mutual information within the latents we encourage information independence and by doing that disentangle the latent factors. Notably, our approach could be added to the VAE framework easily, where the new ELBO function is still a lower bound on the log likelihood. In addition, we qualitatively compare our model with Factor VAE and show that this one implicitly minimises the synergy of the latents.", "target": ["Minimierung der synergetischen wechselseitigen Information innerhalb der latenten Variablen und der Daten für die Aufgabe der Entflechtung unter Verwendung des VAE-Rahmens.", "Schlägt eine neue Zielfunktion für das Erlernen von entflochtenen Darstellungen in einem Variationsrahmen vor, indem die Synergie der bereitgestellten Informationen minimiert wird.", "Die Autoren zielen darauf ab, eine VAE zu trainieren, die latente Repräsentationen in einer \"synergetischen\" maximalen Weise entwirrt. ", "In diesem Beitrag wird ein neuer Ansatz zur Durchsetzung der Entflechtung in VAEs vorgeschlagen, der einen Term verwendet, der die synergetische gegenseitige Information zwischen den latenten Variablen bestraft."]} +{"source": " Metric embeddings are immensely useful representations of associations between entities (images, users, search queries, words, and more). Embeddings are learned by optimizing a loss objective of the general form of a sum over example associations. Typically, the optimization uses stochastic gradient updates over minibatches of examples that are arranged independently at random. In this work, we propose the use of {\em structured arrangements} through randomized {\em microbatches} of examples that are more likely to include similar ones. We make a principled argument for the properties of our arrangements that accelerate the training and present efficient algorithms to generate microbatches that respect the marginal distribution of training examples. Finally, we observe experimentally that our structured arrangements accelerate training by 3-20\%. 
Structured arrangements emerge as a powerful and novel performance knob for SGD that is independent and complementary to other SGD hyperparameters and thus is a candidate for wide deployment.", "target": ["Beschleunigung von SGD durch eine andere Anordnung der Beispiele.", "Die Arbeit stellt eine Methode zur Verbesserung der Konvergenzrate des stochastischen Gradientenabstiegs für das Lernen von Einbettungen vor, indem ähnliche Trainingsbeispiele gruppiert werden.", "Schlägt eine ungleichmäßige Sampling-Strategie zur Konstruktion von Minibatches in SGD für die Aufgabe des Lernens von Einbettungen für Objektassoziationen vor."]} +{"source": "Ubuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end\ndeep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is \nthe large number of out-of-vocabulary words. In this paper we proposed an algorithm which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al's Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement against original ESIM and the new model has achieved state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags.", "target": ["Kombinieren von Informationen zwischen vorgefertigten Worteinbettungen und aufgabenspezifischen Wortdarstellungen, um das Problem des fehlenden Vokabulars zu lösen.", "Diese Arbeit schlägt einen Ansatz vor, um die Vorhersage der Einbettung außerhalb des Vokabulars für die Aufgabe der Modellierung von Dialoggesprächen mit beträchtlichen Gewinnen gegenüber den Basislinien zu verbessern.", "Schlägt vor, externe vortrainierte Worteinbettungen und vortrainierte Worteinbettungen auf Trainingsdaten zu kombinieren, indem sie als zwei Ansichten beibehalten werden.", "Schlägt eine Methode zur Erweiterung der Abdeckung von vortrainierten Worteinbettungen vor, um das OOV-Problem zu bewältigen, das bei der Anwendung auf Gesprächsdatensätze auftritt, und wendet neue Varianten von LSTM basierten Modellen auf die Aufgabe der Antwortauswahl bei der Dialogmodellierung an."]} +{"source": "Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned update functions may similarly outperform current hand-designed optimizers, especially for specific tasks. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process. The resulting gradients are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance. This allows us to train neural networks to perform optimization faster than well tuned first-order methods. 
Moreover, by training the optimizer against validation loss, as opposed to training loss, we are able to use it to train models which generalize better than those trained by first order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement", "target": ["Wir analysieren Probleme beim Training von gelernten Optimierern, lösen diese Probleme durch Variationsoptimierung unter Verwendung von zwei komplementären Gradientenschätzern und trainieren Optimierer, die in der Wanduhrzeit 5x schneller sind als Basisoptimierer (z.B. Adam).", "In diesem Beitrag wird die ungerollte Optimierung verwendet, um neuronale Netze für die Optimierung zu lernen.", "Diese Arbeit befasst sich mit dem Problem des Lernens eines Optimierers, insbesondere konzentrieren sich die Autoren darauf, sauberere Gradienten aus dem aufgerollten Trainingsverfahren zu erhalten.", "Stellt eine Methode zum \"Lernen eines Optimierers\" vor, indem eine Variationsoptimierung für den \"äußeren\" Optimiererverlust verwendet wird, und schlägt die Idee vor, sowohl den reparametrisierten Gradienten als auch den Score-Funktionsschätzer für das Variationsziel zu kombinieren und sie mit Hilfe einer Produkt-Gauß-Formel für den Mittelwert zu gewichten."]} +{"source": "Asynchronous distributed gradient descent algorithms for training of deep neural\n networks are usually considered as inefficient, mainly because of the Gradient delay\n problem. In this paper, we propose a novel asynchronous distributed algorithm\n that tackles this limitation by well-thought-out averaging of model updates, computed\n by workers. The algorithm allows computing gradients along the process\n of gradient merge, thus, reducing or even completely eliminating worker idle time\n due to communication overhead, which is a pitfall of existing asynchronous methods.\n We provide theoretical analysis of the proposed asynchronous algorithm,\n and show its regret bounds. According to our analysis, the crucial parameter for\n keeping high convergence rate is the maximal discrepancy between local parameter\n vectors of any pair of workers. As long as it is kept relatively small, the\n convergence rate of the algorithm is shown to be the same as the one of a sequential\n online learning. Furthermore, in our algorithm, this discrepancy is bounded\n by an expression that involves the staleness parameter of the algorithm, and is\n independent on the number of workers. This is the main differentiator between\n our approach and other solutions, such as Elastic Asynchronous SGD or Downpour\n SGD, in which that maximal discrepancy is bounded by an expression that\n depends on the number of workers, due to gradient delay problem. To demonstrate\n effectiveness of our approach, we conduct a series of experiments on image\n classification task on a cluster with 4 machines, equipped with a commodity communication\n switch and with a single GPU card per machine. Our experiments\n show a linear scaling on 4-machine cluster without sacrificing the test accuracy,\n while eliminating almost completely worker idle time. 
Since our method allows\n using commodity communication switch, it paves a way for large scale distributed\n training performed on commodity clusters.", "target": ["Eine Methode für ein effizientes asynchrones verteiltes Training von Deep-Learning-Modellen zusammen mit theoretischen Regret-Grenzen.", "Die Arbeit schlägt einen Algorithmus zur Begrenzung der Staleness in asynchronen SGD und bietet eine theoretische Analyse.", "Schlägt einen Hybrid-Algorithmus vor, um die Gradientenverzögerung von asynchronen Methoden zu eliminieren."]} +{"source": "Neural network quantization is becoming an industry standard to efficiently deploy deep learning models on hardware platforms, such as CPU, GPU, TPU, and FPGAs. However, we observe that the conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise people's awareness about the security of the quantized models, and we designed a novel quantization methodology to jointly optimize the efficiency and robustness of deep learning models. We first conduct an empirical study to show that vanilla quantization suffers more from adversarial attacks. We observe that the inferior robustness comes from the error amplification effect, where the quantization operation further enlarges the distance caused by amplified noise. Then we propose a novel Defensive Quantization (DQ) method by controlling the Lipschitz constant of the network during quantization, such that the magnitude of the adversarial noise remains non-expansive during inference. Extensive experiments on CIFAR-10 and SVHN datasets demonstrate that our new quantization method can defend neural networks against adversarial examples, and even achieves superior robustness than their full-precision counterparts, while maintaining the same hardware efficiency as vanilla quantization approaches. As a by-product, DQ can also improve the accuracy of quantized models without adversarial attack.", "target": ["Wir haben eine neuartige Quantisierungsmethode entwickelt, um die Effizienz und Robustheit von Deep-Learning-Modellen gemeinsam zu optimieren.", "Schlägt ein Regularisierungsschema vor, um quantisierte neuronale Netze vor adversarial Angriffen zu schützen, indem es eine Lipschitz-Konstante zur Filterung des Input-Outputs der inneren Schichten verwendet."]} +{"source": "Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often face challenges like slow inference, vanishing gradients and difficulty in capturing long term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. 
Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.", "target": ["Eine Modifikation für bestehende RNN-Architekturen, die es ihnen ermöglicht, Zustandsaktualisierungen zu überspringen und dabei die Leistung der ursprünglichen Architekturen beizubehalten.", "Schlägt das Skip-RNN-Modell vor, das es einem rekurrenten Netzwerk ermöglicht, die Aktualisierung seines verborgenen Zustands für einige Eingaben selektiv zu überspringen, was zu einer reduzierten Berechnung zur Testzeit führt.", "Schlägt ein neuartiges RNN-Modell vor, bei dem sowohl die Eingabe als auch die Zustandsaktualisierung der rekurrenten Zellen adaptiv für einige Zeitschritte übersprungen wird."]} +{"source": "We propose a fast second-order method that can be used as a drop-in replacement for current deep learning solvers. Compared to stochastic gradient descent (SGD), it only requires two additional forward-mode automatic differentiation operations per iteration, which has a computational cost comparable to two standard forward passes and is easy to implement. Our method addresses long-standing issues with current second-order solvers, which invert an approximate Hessian matrix every iteration exactly or by conjugate-gradient methods, procedures that are much slower than a SGD step. Instead, we propose to keep a single estimate of the gradient projected by the inverse Hessian matrix, and update it once per iteration with just two passes over the network. This estimate has the same size and is similar to the momentum variable that is commonly used in SGD. No estimate of the Hessian is maintained.\n We first validate our method, called CurveBall, on small problems with known solutions (noisy Rosenbrock function and degenerate 2-layer linear networks), where current deep learning solvers struggle. We then train several large models on CIFAR and ImageNet, including ResNet and VGG-f networks, where we demonstrate faster convergence with no hyperparameter tuning. We also show our optimiser's generality by testing on a large set of randomly-generated architectures.", "target": ["Ein schneller Solver zweiter Ordnung für Deep Learning, der bei ImageNet-Problemen ohne Abstimmung der Hyperparameter funktioniert.", "Wahl der Richtung durch Verwendung eines einzelnen Schritts des Gradientenabstiegs \"in Richtung Newton-Schritt\" von einer ursprünglichen Schätzung aus, und dann Übernahme dieser Richtung anstelle des ursprünglichen Gradienten.", "Ein neues approximatives Optimierungsverfahren zweiter Ordnung mit geringem Rechenaufwand, das die Berechnung der Hessian Matrix durch einen einzigen Gradientenschritt und eine Warmstart Strategie ersetzt."]} +{"source": "The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development. However, to push this idea towards practical implementation, we need better models and better ways of training. We contribute in both directions: we propose a model based on attention layers with benefits over the Pointer Network and we show how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which we find is more efficient than using a value function. We significantly improve over recent learned heuristics for the Travelling Salesman Problem (TSP), getting close to optimal results for problems up to 100 nodes. 
With the same hyperparameters, we learn strong heuristics for two variants of the Vehicle Routing Problem (VRP), the Orienteering Problem (OP) and (a stochastic variant of) the Prize Collecting TSP (PCTSP), outperforming a wide range of baselines and getting results close to highly optimized and specialized algorithms.", "target": ["Aufmerksamkeitsbasiertes Modell, das mit REINFORCE trainiert wurde, um Heuristiken mit konkurrenzfähigen Ergebnissen bei TSP und anderen Routing-Problemen zu lernen.", "Präsentiert einen aufmerksamkeitsbasierten Ansatz zum Erlernen einer Strategie zur Lösung von TSP und anderen kombinatorischen Optimierungsproblemen vom Typ Routing.", "In diesem Beitrag wird versucht, Heuristiken für die Lösung kombinatorischer Optimierungsprobleme zu erlernen."]} +{"source": "We propose an efficient online hyperparameter optimization method which uses a joint dynamical system to evaluate the gradient with respect to the hyperparameters. While similar methods are usually limited to hyperparameters with a smooth impact on the model, we show how to apply it to the probability of dropout in neural networks. Finally, we show its effectiveness on two distinct tasks.", "target": ["Ein Algorithmus zur Optimierung von Regularisierungs-Hyperparametern während des Trainings.", "Die Arbeit schlägt einen Weg vor, um y bei jeder Aktualisierung von Lambda neu zu initialisieren und ein Clipping-Verfahren von y, um die Stabilität des dynamischen Systems zu erhalten.", "Schlägt einen Algorithmus zur Hyperparameter-Optimierung vor, der als Erweiterung von Franceschi 2017 angesehen werden kann, bei dem einige Schätzungen warm neu gestartet werden, um die Stabilität der Methode zu erhöhen.", "Schlägt eine Erweiterung einer bestehenden Methode zur Optimierung von Regularisierungshyperparametern vor."]} +{"source": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.\n", "target": ["Zeigen Sie, dass LSTMs genauso gut oder besser sind als die jüngsten Innovationen für LM und dass die Modellbewertung oft unzuverlässig ist.", "Diese Arbeit beschreibt eine umfassende Validierung von LSTM basierten Wort- und Zeichensprachmodellen, die zu einem bedeutenden Ergebnis in der Sprachmodellierung und einem Meilenstein im Deep Learning führt."]} +{"source": " Residual and skip connections play an important role in many current\n generative models. Although their theoretical and numerical advantages\n are understood, their role in speech enhancement systems has not been\n investigated so far. When performing spectral speech enhancement,\n residual connections are very similar in nature to spectral subtraction,\n which is the one of the most commonly employed speech enhancement approaches.\n Highway networks, on the other hand, can be seen as a combination of spectral\n masking and spectral subtraction. 
However, when using deep neural networks, such operations would\n normally happen in a transformed spectral domain, as opposed to traditional speech\n enhancement where all operations are often done directly on the spectrum.\n In this paper, we aim to investigate the role of residual and highway\n connections in deep neural networks for speech enhancement, and verify whether\n or not they operate similarly to their traditional, digital signal processing\n counterparts. We visualize the outputs of such connections, projected back to\n the spectral domain, in models trained for speech denoising, and show that while\n skip connections do not necessarily improve performance with regards to the\n number of parameters, they make speech enhancement models more interpretable.", "target": ["Wir zeigen, wie die Verwendung von Skip Verbindungen Sprachverbesserungsmodelle interpretierbarer machen kann, da sie ähnliche Mechanismen verwenden, die in der DSP Literatur erforscht worden sind.", "Die Autoren schlagen vor, Residual-, Highway- und Maskierungsblöcke in eine vollständig gefaltete Pipeline einzubauen, um zu verstehen, wie die iterative Inferenz des Outputs und der Maskierung in einer Sprachverbesserungsaufgabe durchgeführt wird", "Die Autoren interpretieren Autobahn-, Rest- und Verdeckungsverbindungen. ", "Die Autoren erzeugen ihre eigene verrauschte Sprache, indem sie künstlich Rauschen aus einem gut etablierten Rauschdatensatz zu einem weniger bekannten Datensatz mit sauberer Sprache hinzufügen."]} +{"source": "Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate good predictive performance over alternative approaches.", "target": ["Eine Methode zur Eliminierung der Gradientenvarianz und zur automatischen Abstimmung von Prioritäten für ein effektives Training von Bayes'schen Neuronalen Netzen.", "Schlägt einen neuen Ansatz vor, um deterministische Variationsinferenz für Feed-Forward BNN mit spezifischen nichtlinearen Aktivierungsfunktionen durch Annäherung der schichtweisen Momente durchzuführen.", "Die Arbeit betrachtet einen rein deterministischen Ansatz zum Erlernen von Variationsapproximationen für Bayes'sche neuronale Netze."]} +{"source": "Skills learned through (deep) reinforcement learning often generalizes poorly\n across tasks and re-training is necessary when presented with a new task. 
We\n present a framework that combines techniques in formal methods with reinforcement\n learning (RL) that allows for the convenient specification of complex temporal\n dependent tasks with logical expressions and construction of new skills from existing\n ones with no additional exploration. We provide theoretical results for our\n composition technique and evaluate on a simple grid world simulation as well as\n a robotic manipulation task.", "target": ["Ein formaler Methodenansatz für die Zusammensetzung von Fähigkeiten in Reinforcement Learning Aufgaben.", "Die Arbeit kombiniert RL und Constraints, die durch logische Formeln ausgedrückt werden, indem sie einen Automaten aus scTLTL-Formeln konstruiert.", "Schlägt eine Methode vor, die hilft, Richtlinien aus gelernten Teilaufgaben zum Thema Kombination von RL-Aufgaben mit linearen zeitlogischen Formeln zu konstruieren."]} +{"source": "The application of multi-modal generative models by means of a Variational Auto Encoder (VAE) is an upcoming research topic for sensor fusion and bi-directional modality exchange.\n This contribution gives insights into the learned joint latent representation and shows that expressiveness and coherence are decisive properties for multi-modal datasets.\n Furthermore, we propose a multi-modal VAE derived from the full joint marginal log-likelihood that is able to learn the most meaningful representation for ambiguous observations.\n Since the properties of multi-modal sensor setups are essential for our approach but hardly available, we also propose a technique to generate correlated datasets from uni-modal ones.\n", "target": ["Ableitung einer allgemeinen Formulierung einer multimodalen VAE aus der gemeinsamen marginalen Log-Likelihood.", "Vorschlag einer multimodalen VAE mit einer aus der Kettenregel abgeleiteten Variationsschranke.", "In diesem Beitrag wird ein Ziel, M^2VAE, für multimodale VAEs vorgeschlagen, das eine aussagekräftigere latente Raumrepräsentation erlernen soll."]} +{"source": "We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing the lower bound to the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks to model complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure which improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement and scalable means for simultaneous model learning and proposal adaptation in deep generative models.", "target": ["Wir bauen auf der sequenziellen Monte Carlo Methode mit automatischer Kodierung auf, gewinnen neue theoretische Erkenntnisse und entwickeln ein verbessertes Trainingsverfahren auf der Grundlage dieser Erkenntnisse.", "Die Arbeit schlägt eine Version des IWAE-Trainings vor, die SMC anstelle des klassischen Wichtigkeits-Samplings verwendet.", "In dieser Arbeit wird die automatische Kodierung sequentieller Monte Carlo Verfahren (SMC) vorgeschlagen, die den VAE Rahmen um ein neues Monte Carlo Ziel auf der Grundlage von SMC erweitert. "]} +{"source": "A key component for many reinforcement learning agents is to learn a value function, either for policy evaluation or control. 
Many of the algorithms for learning values, however, are designed for linear function approximation---with a fixed basis or fixed representation. Though there have been a few sound extensions to nonlinear function approximation, such as nonlinear gradient temporal difference learning, these methods have largely not been adopted, eschewed in favour of simpler but not sound methods like temporal difference learning and Q-learning. In this work, we provide a two-timescale network (TTN) architecture that enables linear methods to be used to learn values, with a nonlinear representation learned at a slower timescale. The approach facilitates the use of algorithms developed for the linear setting, such as data-efficient least-squares methods, eligibility traces and the myriad of recently developed linear policy evaluation algorithms, to provide nonlinear value estimates. We prove convergence for TTNs, with particular care given to ensure convergence of the fast linear component under potentially dependent features provided by the learned representation. We empirically demonstrate the benefits of TTNs, compared to other nonlinear value function approximation algorithms, both for policy evaluation and control. ", "target": ["Wir schlagen eine Architektur für das Lernen von Wertfunktionen vor, die den Einsatz beliebiger linearer Algorithmen zur Bewertung von Richtlinien in Verbindung mit nichtlinearem Merkmalslernen ermöglicht.", "Die Arbeit schlägt einen Rahmen mit zwei Zeitskalen für das Lernen der Wertfunktion und einer Zustandsdarstellung mit nichtlinearen Approximatoren vor.", "In diesem Beitrag werden Two-Timescale Networks (TTNs) vorgeschlagen und die Konvergenz dieser Methode mit Methoden der stochastischen Approximation auf zwei Zeitskalen nachgewiesen. ", "In diesem Beitrag wird ein Two-Timescale Network (TTN) vorgestellt, mit dem lineare Methoden zum Lernen von Werten verwendet werden können. "]} +{"source": "Large-scale Long Short-Term Memory (LSTM) cells are often the building blocks of many state-of-the-art algorithms for tasks in Natural Language Processing (NLP). However, LSTMs are known to be computationally inefficient because the memory capacity of the models depends on the number of parameters, and the inherent recurrence that models the temporal dependency is not parallelizable. In this paper, we propose simple, but effective, low-rank matrix factorization (MF) algorithms to compress network parameters and significantly speed up LSTMs with almost no loss of performance (and sometimes even gain). To show the effectiveness of our method across different tasks, we examine two settings: 1) compressing core LSTM layers in Language Models, 2) compressing biLSTM layers of ELMo~\\citep{ELMo} and evaluate in three downstream NLP tasks (Sentiment Analysis, Textual Entailment, and Question Answering). The latter is particularly interesting as embeddings from large pre-trained biLSTM Language Models are often used as contextual word representations. 
Finally, we discover that matrix factorization performs better in general, additive recurrence is often more important than multiplicative recurrence, and we identify an interesting correlation between matrix norms and compression performance.\n\n", "target": ["Wir schlagen einfache, aber effektive Algorithmen zur Matrixfaktorisierung (MF) mit niedrigem Rang vor, um die Laufzeit zu beschleunigen, Speicher zu sparen und die Leistung von LSTMs zu verbessern.", "Er schlägt vor, LSTM durch die Verwendung von MF als Nachbearbeitungs-Kompressionsstrategie zu beschleunigen und führt umfangreiche Experimente durch, um die Leistung zu zeigen."]} +{"source": "Manipulation and re-use of images in scientific publications is a recurring problem, at present lacking a scalable solution. Existing tools for detecting image duplication are mostly manual or semi-automated, despite the fact that generating data for a learning-based approach is straightforward, as we here illustrate. This paper addresses the problem of determining if, given two images, one is a manipulated version of the other by means of certain geometric and statistical manipulations, e.g. copy, rotation, translation, scale, perspective transform, histogram adjustment, partial erasing, and compression artifacts. We propose a solution based on a 3-branch Siamese Convolutional Neural Network. The ConvNet model is trained to map images into a 128-dimensional space, where the Euclidean distance between duplicate (respectively, unique) images is no greater (respectively, greater) than 1. Our results suggest that such an approach can serve as tool to improve surveillance of the published and in-peer-review literature for image manipulation. We also show that as a byproduct the network learns useful representations for semantic segmentation, with performance comparable to that of domain-specific models.", "target": ["Eine forensische Metrik zur Bestimmung, ob ein bestimmtes Bild eine Kopie (mit möglicher Manipulation) eines anderen Bildes aus einem bestimmten Datensatz ist.", "Einführung des siamesischen Netzwerks zur Identifizierung von doppelten und kopierten/veränderten Bildern, das zur Verbesserung der Überwachung der veröffentlichten und begutachteten Literatur eingesetzt werden kann.", "Die Arbeit präsentiert eine Anwendung von tiefen Convolutional Networks für die Aufgabe der Erkennung von Bildduplikaten.", "Diese Arbeit befasst sich mit dem Problem des Auffindens von doppelten/fast doppelten Bildern aus biomedizinischen Veröffentlichungen und schlägt ein Standard-CNN und Verlustfunktionen vor und wendet sie auf diesen Bereich an."]} +{"source": "Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated samples as fake, leaving the generator without meaningful gradients and causing it to deteriorate after a point in training. In this work, we propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. Individual discriminators, now provided with restricted views of the input, are unable to reject generated samples perfectly and continue to provide meaningful gradients to the generator throughout training. Meanwhile, the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators simultaneously. 
We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.", "target": ["Stabiles GAN-Training in hohen Dimensionen durch Verwendung eines Arrays von Diskriminatoren, jeder mit einer niedrigdimensionalen Ansicht der erzeugten Beispiele.", "In dem Beitrag wird vorgeschlagen, das GAN-Training zu stabilisieren, indem ein Ensemble von Diskriminatoren verwendet wird, von denen jeder auf einer zufälligen Projektion der Eingabedaten arbeitet, um das Trainingssignal für das Generatormodell zu liefern.", "In der Arbeit wird eine GAN-Trainingsmethode zur Verbesserung der Trainingsstabilität vorgeschlagen. ", "In dem Beitrag wird ein neuer Ansatz für das GAN-Training vorgeschlagen, der stabile Gradienten für das Training des Generators liefert."]} +{"source": "We present a novel method to precisely impose tree-structured category information onto word-embeddings, resulting in ball embeddings in higher dimensional spaces (N-balls for short). Inclusion relations among N-balls implicitly encode subordinate relations among categories. The similarity measurement in terms of the cosine function is enriched by category information. Using a geometric construction method instead of back-propagation, we create large N-ball embeddings that satisfy two conditions: (1) category trees are precisely imposed onto word embeddings at zero energy cost; (2) pre-trained word embeddings are well preserved. A new benchmark data set is created for validating the category of unknown words. Experiments show that N-ball embeddings, carrying category information, significantly outperform word embeddings in the test of nearest neighborhoods, and demonstrate surprisingly good performance in validating categories of unknown words. Source codes and data-sets are free for public access \url{https://github.com/gnodisnait/nball4tree.git} and \url{https://github.com/gnodisnait/bp94nball.git}.", "target": ["Wir zeigen eine geometrische Methode zur perfekten Kodierung von Kategoriebaum-Informationen in vortrainierte Worteinbettungen.", "Die Arbeit schlägt eine N-Ball-Einbettung für taxonomische Daten vor, wobei ein N-Ball ein Paar aus einem Schwerpunktvektor und dem Radius vom Zentrum ist.", "In diesem Beitrag wird eine Methode vorgestellt, mit der bestehende Vektoreinbettungen von kategorialen Objekten (wie z. B. Wörtern) so verändert werden können, dass sie in Balleinbettungen umgewandelt werden, die Hierarchien folgen.", "Konzentriert sich auf die Anpassung der vortrainierten Worteinbettungen, so dass sie die Hypernymie/Hyponymie-Beziehung durch geeignete n-ball Kapselung respektieren."]} +{"source": "For the challenging semantic image segmentation task the best performing models\n have traditionally combined the structured modelling capabilities of Conditional\n Random Fields (CRFs) with the feature extraction power of CNNs. In more recent\n works however, CRF post-processing has fallen out of favour. We argue that this\n is mainly due to the slow training and inference speeds of CRFs, as well as the\n difficulty of learning the internal CRF parameters. To overcome both issues we\n propose to add the assumption of conditional independence to the framework of\n fully-connected CRFs. This allows us to reformulate the inference in terms of\n convolutions, which can be implemented highly efficiently on GPUs. Doing so\n speeds up inference and training by two orders of magnitude. 
All parameters of\n the convolutional CRFs can easily be optimized using backpropagation. Towards\n the goal of facilitating further CRF research we have made our implementations\n publicly available.", "target": ["Wir schlagen Convolutional CRFs als schnelle, leistungsstarke und trainierbare Alternative zu Fully Connected CRFs vor.", "Die Autoren ersetzen den großen Filterschritt im permutohedralen Gitter durch einen räumlich variierenden Convolutional Kernel und zeigen, dass die Inferenz effizienter und das Training einfacher ist. ", "Schlägt vor, die Nachrichtenübermittlung auf einer CRF mit abgeschnittenem Gauß-Kernel unter Verwendung eines definierten Kernels und parallelisierter Nachrichtenübermittlung auf einer GPU durchzuführen."]} +{"source": "Deep Learning NLP domain lacks procedures for the analysis of model robustness. In this paper we propose a framework which validates robustness of any Question Answering model through model explainers. We propose that output of a robust model should be invariant to alterations that do not change its semantics. We test this property by manipulating question in two ways: swapping important question word for 1) its semantically correct synonym and 2) for word vector that is close in embedding space. We estimate importance of words in asked questions with Locally Interpretable Model Agnostic Explanations method (LIME). With these two steps we compare state-of-the-art Q&A models. We show that although accuracy of state-of-the-art models is high, they are very fragile to changes in the input. We can choose architecture that is more immune to attacks and thus more robust and stable in production environment. Moreover, we propose 2 adversarial training scenarios which raise model sensitivity to true synonyms by up to 7% accuracy measure. Our findings help to understand which models are more stable and how they can be improved. In addition, we have created and published a new dataset that may be used for validation of robustness of a Q&A model.", "target": ["Wir schlagen einen modellunabhängigen Ansatz zur Validierung der Robustheit von Q&A-Systemen vor und demonstrieren die Ergebnisse anhand modernster Q&A-Modelle.", "Befasst sich mit dem Problem der Robustheit gegenüber gegnerischen Informationen bei der Beantwortung von Fragen.", "Verbesserung der Robustheit des maschinellen Verstehens/Fragenbeantwortens."]} +{"source": "In this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. 
These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution.", "target": ["Multi-Generator zur Erfassung von Pdata, zur Lösung des Wettbewerbs und des One-beat-all-Problems.", "Schlägt parallele GANs vor, um durch eine Kombination mehrerer schwacher Generatoren einen Moduskollaps in GANs zu vermeiden. "]} +{"source": "We capitalize on the natural compositional structure of images in order to learn object segmentation with weakly labeled images. The intuition behind our approach is that removing objects from images will yield natural images, however removing random patches will yield unnatural images. We leverage this signal to develop a generative model that decomposes an image into layers, and when all layers are combined, it reconstructs the input image. However, when a layer is removed, the model learns to produce a different image that still looks natural to an adversary, which is possible by removing objects. Experiments and visualizations suggest that this model automatically learns object segmentation on images labeled only by scene better than baselines.", "target": ["Schwach überwachte Bildsegmentierung unter Verwendung der kompositorischen Struktur von Bildern und generativen Modellen.", "In diesem Beitrag wird eine mehrschichtige Darstellung erstellt, um die Segmentierung von unbeschrifteten Bildern besser zu lernen.", "In dieser Arbeit wird ein generatives Modell auf GAN-Basis vorgeschlagen, das Bilder in mehrere Schichten zerlegt, wobei das Ziel des GAN darin besteht, echte Bilder von Bildern zu unterscheiden, die durch die Kombination der Schichten entstehen.", "In diesem Beitrag wird eine neuronale Netzarchitektur vorgeschlagen, die auf der Idee einer mehrschichtigen Szenenkomposition basiert."]} +{"source": "Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which to construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. Using our geometric framework we prove (1) a tradeoff between robustness under different norms, (2) that adversarial training in balls around the data is sample inefficient, and (3) sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust.", "target": ["Wir stellen einen geometrischen Rahmen für den Nachweis von Robustheitsgarantien vor und heben die Bedeutung der Kodimension in gegnerischen Beispielen hervor. 
", "In dieser Arbeit wird eine theoretische Analyse von adversarial Beispielen durchgeführt, die zeigt, dass es einen Kompromiss zwischen der Robustheit in verschiedenen Normen gibt, dass adversarial Training ineffizient ist und dass der Nearest Neighbor Classifier unter bestimmten Bedingungen robust sein kann."]} +{"source": "Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise.", "target": ["CharNMT ist spröde.", "In dieser Arbeit werden die Auswirkungen von Rauschen auf Zeichenebene auf 4 verschiedene neuronale maschinelle Übersetzungssysteme untersucht.", "In dieser Arbeit wird die Leistung von NMT-Systemen auf Zeichenebene angesichts von synthetischen und natürlichen Geräuschen auf Zeichenebene empirisch untersucht.", "Diese Arbeit untersucht die Auswirkungen von verrauschten Eingaben auf die maschinelle Übersetzung und testet Möglichkeiten, NMT-Modelle robuster zu machen."]} +{"source": "As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large integrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete optimization goal is to find a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex approaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case. 
Empirically, we show that our algorithm improves classification accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator.", "target": ["Wir lernen tiefe Netzwerke von Einheiten mit harten Schwellenwerten, indem wir Ziele für versteckte Einheiten durch kombinatorische Optimierung und Gewichte durch konvexe Optimierung festlegen, was zu einer verbesserten Leistung bei ImageNet führt.", "Die Arbeit erklärt und verallgemeinert Ansätze zum Lernen neuronaler Netze mit harter Aktivierung.", "In dieser Arbeit wird das Problem der Optimierung von tiefen Netzen mit hartschwelligen Einheiten untersucht.", "Die Arbeit erörtert das Problem der Optimierung neuronaler Netze mit harten Schwellenwerten und schlägt eine neuartige Lösung mit einer Sammlung von Heuristiken/Annäherungen dafür vor."]} +{"source": "The robust and efficient recognition of visual relations in images is a hallmark of biological vision. Here, we argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible such as when the intra-class variability exceeds their capacity. We further show that another type of feedforward network, called a relational network (RN), which was shown to successfully solve seemingly difficult visual question answering (VQA) problems on the CLEVR datasets, suffers similar limitations. Motivated by the comparable success of biological vision, we argue that feedback mechanisms including working memory and attention are the key computational components underlying abstract visual reasoning.", "target": ["Anhand einer neuartigen, kontrollierten visuellen Beziehungsaufgabe zeigen wir, dass gleich-unterschiedliche Aufgaben die Kapazität von CNNs kritisch belasten; wir argumentieren, dass visuelle Beziehungen mit Hilfe von Aufmerksamkeits-Gedächtnisstrategien besser gelöst werden können.", "Zeigt, dass Convolutional und relationale neuronale Netze visuelle Beziehungsprobleme nicht lösen können, indem die Netze mit künstlich erzeugten visuellen Beziehungsdaten trainiert werden. ", "In diesem Beitrag wird untersucht, wie aktuelle CNNs und Relationale Netzwerke visuelle Beziehungen in Bildern nicht erkennen können."]} +{"source": "Visual Active Tracking (VAT) aims at following a target object by autonomously controlling the motion system of a tracker given visual observations. Previous work has shown that the tracker can be trained in a simulator via reinforcement learning and deployed in real-world scenarios. However, during training, such a method requires manually specifying the moving path of the target object to be tracked, which cannot ensure the tracker’s generalization on the unseen object moving patterns. To learn a robust tracker for VAT, in this paper, we propose a novel adversarial RL method which adopts an Asymmetric Dueling mechanism, referred to as AD-VAT. In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks, and are trained via RL in a dueling/competitive manner: i.e., the tracker intends to lockup the target, while the target tries to escape from the tracker. They are asymmetric in that the target is aware of the tracker, but not vice versa. 
Specifically, besides its own observation, the target is fed with the tracker’s observation and action, and learns to predict the tracker’s reward as an auxiliary task. We show that such an asymmetric dueling mechanism produces a stronger target, which in turn induces a more robust tracker. To stabilize the training, we also propose a novel partial zero-sum reward for the tracker/target. The experimental results, in both 2D and 3D environments, demonstrate that the proposed method leads to a faster convergence in training and yields more robust tracking behaviors in different testing scenarios. For supplementary videos, see: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS \n The code is available at https://github.com/zfw1226/active_tracking_rl", "target": ["Wir schlagen AD-VAT vor, bei dem der Tracker und das Zielobjekt, die als zwei lernfähige Agenten betrachtet werden, Gegner sind und sich während des Trainings gegenseitig verbessern können.", "Diese Arbeit zielt darauf ab, das visuelle aktive Verfolgungsproblem mit einem Trainingsmechanismus anzugehen, bei dem der Tracker und das Ziel als gegenseitige Gegner dienen.", "In diesem Beitrag wird eine einfache Multi-Agenten-Deep-RL-Aufgabe vorgestellt, bei der ein beweglicher Tracker versucht, einem beweglichen Ziel zu folgen.", "Er schlägt eine neuartige Belohnungsfunktion vor - eine \"partielle Nullsumme\", die den Wettbewerb zwischen Tracker und Ziel nur dann fördert, wenn sie sich nahe beieinander befinden, und bestraft, wenn sie zu weit entfernt sind."]} +{"source": "Identifying the hypernym relations that hold between words is a fundamental task in NLP. Word embedding methods have recently shown some capability to encode hypernymy. However, such methods tend not to explicitly encode the hypernym hierarchy that exists between words. In this paper, we propose a method to learn a hierarchical word embedding in a specific order to capture the hypernymy. To learn the word embeddings, the proposed method considers not only the hypernym relations that exists between words on a taxonomy, but also their contextual information in a large text corpus. The experimental results on a supervised hypernymy detection and a newly-proposed hierarchical path completion tasks show the ability of the proposed method to encode the hierarchy. Moreover, the proposed method outperforms previously proposed methods for learning word and hypernym-specific word embeddings on multiple benchmarks.", "target": ["Wir präsentierten eine Methode zum gemeinsamen Erlernen einer hierarchischen Worteinbettung (Hierarchical Word Embedding, HWE) unter Verwendung eines Korpus und einer Taxonomie zur Identifizierung der hypernymischen Beziehungen zwischen Wörtern.", "In diesem Beitrag wird eine Methode zum gemeinsamen Lernen von Worteinbettungen unter Verwendung von Koinzidenzstatistiken und unter Einbeziehung von hierarchischen Informationen aus semantischen Netzwerken vorgestellt.", "In dieser Arbeit wird eine gemeinsame Lernmethode für Hypernyme aus Rohtext und überwachten Taxonomiedaten vorgeschlagen. ", "In diesem Beitrag wird vorgeschlagen, dem GloVE Ziel ein Maß für die \"distributionelle Einschlussdifferenz\" hinzuzufügen, um Hypernym-Relationen darzustellen."]} +{"source": "While self-organizing principles have motivated much of early learning models, such principles have rarely been included in deep learning architectures. 
Indeed, from a supervised learning perspective it seems that topographic constraints are rather decremental to optimal performance. Here we study a network model that incorporates self-organizing maps into a supervised network and show how gradient learning results in a form of a self-organizing learning rule. Moreover, we show that such a model is robust in the sense of its application to a variety of areas, which is believed to be a hallmark of biological learning systems.", "target": ["Integration von Selbstorganisation und überwachtem Lernen in einem hierarchischen neuronalen Netz.", "Der Beitrag diskutiert das Lernen in einem neuronalen Netz mit drei Schichten, wobei die mittlere Schicht topographisch organisiert ist, und untersucht das Zusammenspiel von unbeaufsichtigtem und hierarchisch überwachtem Lernen im biologischen Kontext.", "Eine überwachte Variante der selbstorganisierenden Karte (SOM) von Kohonen, bei der jedoch die lineare Ausgabeschicht mit quadratischem Fehler durch eine Softmax-Schicht mit Cross-Entropy ersetzt wird.", "Schlägt ein Modell mit versteckten Neuronen mit selbstorganisierender Aktivierungsfunktion vor, deren Ausgänge einen Klassifikator mit Softmax-Ausgangsfunktion speisen. "]} +{"source": "Quantization of a neural network has an inherent problem called accumulated quantization error, which is the key obstacle towards ultra-low precision, e.g., 2- or 3-bit precision. To resolve this problem, we propose precision highway, which forms an end-to-end high-precision information flow while performing the ultra-low-precision computation. First, we describe how the precision highway reduce the accumulated quantization error in both convolutional and recurrent neural networks. We also provide the quantitative analysis of the benefit of precision highway and evaluate the overhead on the state-of-the-art hardware accelerator. In the experiments, our proposed method outperforms the best existing quantization methods while offering 3-bit weight/activation quantization with no accuracy loss and 2-bit quantization with a 2.45 % top-1 accuracy loss in ResNet-50. We also report that the proposed method significantly outperforms the existing method in the 2-bit quantization of an LSTM for language modeling.", "target": ["Präzisionsautobahn; ein verallgemeinertes Konzept des hochpräzisen Informationsflusses für Sub 4-Bit Quantisierung .", "Untersucht das Problem der Quantisierung neuronaler Netze durch den Einsatz eines Präzisions-Highways von Ende-zu-Ende, um den akkumulierten Quantisierungsfehler zu reduzieren und eine extrem niedrige Präzision in tiefen neuronalen Netzen zu ermöglichen. ", "Diese Arbeit untersucht Methoden zur Verbesserung der Leistung quantisierter neuronaler Netze.", "In diesem Beitrag wird vorgeschlagen, einen hohen Aktivierungs-/Gradientenfluss in zwei Arten von Netzwerkstrukturen, ResNet und LSTM, beizubehalten."]} +{"source": "The vast majority of natural sensory data is temporally redundant. For instance, video frames or audio samples which are sampled at nearby points in time tend to have similar values. Typically, deep learning algorithms take no advantage of this redundancy to reduce computations. This can be an obscene waste of energy. We present a variant on backpropagation for neural networks in which computation scales with the rate of change of the data - not the rate at which we process the data. 
We do this by implementing a form of Predictive Coding wherein neurons communicate a combination of their state, and their temporal change in state, and quantize this signal using Sigma-Delta modulation. Intriguingly, this simple communication rule give rise to units that resemble biologically-inspired leaky integrate-and-fire neurons, and to a spike-timing-dependent weight-update similar to Spike-Timing Dependent Plasticity (STDP), a synaptic learning rule observed in the brain. We demonstrate that on MNIST, on a temporal variant of MNIST, and on Youtube-BB, a dataset with videos in the wild, our algorithm performs about as well as a standard deep network trained with backpropagation, despite only communicating discrete values between layers. ", "target": ["Ein Algorithmus zum effizienten Training neuronaler Netze auf zeitlich redundanten Daten.", "Die Arbeit beschreibt ein neuronales Kodierungsschema für spike-basiertes Lernen in tiefen neuronalen Netzen.", "In diesem Beitrag wird eine Methode für spike-basiertes Lernen vorgestellt, die darauf abzielt, den Rechenaufwand beim Lernen und Testen zu reduzieren, wenn zeitlich redundante Daten klassifiziert werden.", "In dieser Arbeit wird eine prädiktive Kodierungsversion des Sigma-Delta-Kodierungsschemas angewandt, um die Rechenlast eines Deep-Learning-Netzes zu verringern, wobei die drei Komponenten in einer bisher unbekannten Weise kombiniert werden."]} +{"source": "Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate \"bottleneck\" variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The \"IB curve\" characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by maximizing the \"IB Lagrangian\", I(Y;T) - βI(X;T). In some cases, Y is a deterministic function of X, including many classification problems in supervised learning where the output class Y is a deterministic function of the input X. We demonstrate three caveats when using IB in any situation where Y is a deterministic function of X: (1) the IB curve cannot be recovered by maximizing the IB Lagrangian for different values of β; (2) there are \"uninteresting\" trivial solutions at all points of the IB curve; and (3) for multi-layer classifiers that achieve low prediction error, different layers cannot exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. We also show that when Y is a small perturbation away from being a deterministic function of X, these three caveats arise in an approximate way. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. 
We demonstrate the three caveats on the MNIST dataset.", "target": ["Informationsengpässe verhalten sich auf überraschende Weise, wenn der Output eine deterministische Funktion des Inputs ist.", "Argumentiert, dass die meisten realen Klassifizierungsprobleme eine solche deterministische Beziehung zwischen den Klassenbezeichnungen und den Eingaben X aufweisen, und untersucht mehrere Probleme, die sich aus solchen Pathologien ergeben.", "Untersuchung von Problemen, die bei der Anwendung von Konzepten des Informationsengpasses auf deterministische überwachte Lernmodelle auftreten.", "Die Autoren erläutern mehrere kontraintuitive Verhaltensweisen der Informationsengpass-Methode für das überwachte Lernen einer deterministischen Regel."]} +{"source": "We prove, under two sufficient conditions, that idealised models can have no adversarial examples. We discuss which idealised models satisfy our conditions, and show that idealised Bayesian neural networks (BNNs) satisfy these. We continue by studying near-idealised BNNs using HMC inference, demonstrating the theoretical ideas in practice. We experiment with HMC on synthetic data derived from MNIST for which we know the ground-truth image density, showing that near-perfect epistemic uncertainty correlates to density under image manifold, and that adversarial images lie off the manifold in our setting. This suggests why MC dropout, which can be seen as performing approximate inference, has been observed to be an effective defence against adversarial examples in practice; We highlight failure-cases of non-idealised BNNs relying on dropout, suggesting a new attack for dropout models and a new defence as well. Lastly, we demonstrate the defence on a cats-vs-dogs image classification task with a VGG13 variant.", "target": ["Wir beweisen, dass idealisierte Bayes'sche neuronale Netze keine gegnerischen Beispiele haben können, und geben empirische Beweise mit realen BNNs.", "Die Arbeit untersucht die Robustheit von Bayes'schen Klassifizierern gegenüber widrigen Umständen und nennt zwei Bedingungen, die nachweislich ausreichen, damit \"idealisierte Modelle\" auf \"idealisierten Datensätzen\" keine adversarial Beispiele haben.", "Die Arbeit stellt eine Klasse von diskriminativen Bayes'schen Klassifikatoren vor, die keine gegnerischen Beispiele haben."]} +{"source": "Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce specific outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation, that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary—even if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. 
We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model.", "target": ["Wir stellen die erste Instanz von adversarial Angriffen vor, die das Zielmodell so umprogrammieren, dass es eine vom Angreifer gewählte Aufgabe ausführt - ohne dass der Angreifer die gewünschte Ausgabe für jede Testzeiteingabe angeben oder berechnen muss.", "Die Autoren stellen ein neuartiges adversarial Angriffsschema vor, bei dem ein neuronales Netz umfunktioniert wird, um eine andere Aufgabe zu erfüllen als die, für die es ursprünglich trainiert wurde.", "In diesem Beitrag wird die \"adversarial Umprogrammierung\" von gut trainierten und festen neuronalen Netzen vorgeschlagen und gezeigt, dass die adversarial Umprogrammierung bei untrainierten Netzen weniger effektiv ist.", "Die Arbeit erweitert die Idee der \"adversarial Angriffe\" beim überwachten Lernen von NNs auf eine vollständige Umwidmung der Lösung eines trainierten Netzes."]} +{"source": "As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, which are the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of margin distribution rather than just the margin (closest distance to decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum).\n Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization.", "target": ["Wir entwickeln ein neues Verfahren, um die Generalisierungslücke in tiefen Netzen mit hoher Genauigkeit vorherzusagen.", "Die Autoren schlagen vor, eine geometrische Marge und eine schichtweise Margenverteilung zur Vorhersage der Generalisierungslücke zu verwenden.", "Empirisch zeigt sich ein interessanter Zusammenhang zwischen den vorgeschlagenen Margin-Statistiken und der Generalisierungslücke, der genutzt werden kann, um einige präskriptive Erkenntnisse zum Verständnis der Generalisierung in tiefen neuronalen Netzen zu liefern. "]} +{"source": "We propose a new algorithm to learn a one-hidden-layer convolutional neural network where both the convolutional weights and the outputs weights are parameters to be learned. Our algorithm works for a general class of (potentially overlapping) patches, including commonly used structures for computer vision tasks. Our algorithm draws ideas from (1) isotonic regression for learning neural networks and (2) landscape analysis of non-convex matrix factorization problems. 
We believe these findings may inspire further development in designing provable algorithms for learning neural networks and other complex models. While our focus is theoretical, we also present experiments that illustrate our theoretical findings.", "target": ["Wir schlagen einen Algorithmus zur beweisbaren Wiederherstellung von Parametern (Convolutional und Ausgangsgewichte) eines Convolutional Networks mit überlappenden Patches vor.", "In dieser Arbeit wird das theoretische Lernen von einschichtigen Convolutional Neural Networks untersucht. Das Ergebnis ist ein Lernalgorithmus und nachweisbare Garantien, die diesen Algorithmus verwenden.", "In diesem Beitrag wird ein neuer Algorithmus für das Lernen eines zweischichtigen neuronalen Netzes vorgestellt, der einen einzigen Convolutional Filter und einen Gewichtsvektor für verschiedene Orte umfasst."]} +{"source": "Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions, which can be modeled by the neural network, is weight clipping. Augmenting the loss by a regularization term that penalizes the deviation of the gradient norm of the critic (as a function of the network's input) from one, was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.", "target": ["Ein neuer Regularisierungsbegriff kann das Training von Wasserstein GANs verbessern.", "Die Arbeit schlägt ein Regularisierungsschema für Wasserstein GAN vor, das auf einer Lockerung der Beschränkungen der Lipschitz-Konstante von 1 basiert.", "Der Artikel befasst sich mit der Regularisierung/Bestrafung bei der Anpassung von GANs, wenn diese auf einer L_1 Wasserstein-Metrik basieren."]} +{"source": "We introduce a new method for training GANs by applying the Wasserstein-2 metric proximal on the generators. \n The approach is based on the gradient operator induced by optimal transport, which connects the geometry of sample space and parameter space in implicit deep generative models. From this theory, we obtain an easy-to-implement regularizer for the parameter updates. Our experiments demonstrate that this method improves the speed and stability in training GANs in terms of wall-clock time and Fr\\'echet Inception Distance (FID) learning curves.", "target": ["Wir schlagen die Wasserstein-Proximal-Methode für das Training von GANs vor. 
", "Schlägt ein neues GAN-Verfahren vor, das die in der vorangegangenen Iteration erzeugten Punkte berücksichtigt und den Generator aktualisiert, der l-mal auszuführen ist.", "Betrachtet das natürliche Gradientenlernen beim GAN-Lernen, wobei die durch den Wasserstein-2-Abstand induzierte Riemannsche Struktur verwendet wird.", "Die Arbeit beabsichtigt, den durch die Wasserstein-2-Distanz induzierten natürlichen Gradienten zu nutzen, um den Generator im GAN zu trainieren, und die Autoren schlagen den Wasserstein-Proximaloperator als Regularisierung vor."]} +{"source": "Influence diagrams provide a modeling and inference framework for sequential decision problems, representing the probabilistic knowledge by a Bayesian network and the preferences of an agent by utility functions over the random variables and decision variables.\n MDPs and POMDPS, widely used for planning under uncertainty can also be represented by influence diagrams.\n The time and space complexity of computing the maximum expected utility (MEU) and its maximizing policy is exponential in the induced width of the underlying graphical model, which is often prohibitively large due to the growth of the information set under the sequence of decisions.\n In this paper, we develop a weighted mini-bucket approach for bounding the MEU. These bounds can be used as a stand-alone approximation that can be improved as a function of a controlling i-bound parameter .\nThey can also be used as heuristic functions to guide search, especially for planning \n such as MDPs and POMDPs.\n We evaluate the scheme empirically against state-of-the-art, thus illustrating its potential.\n", "target": ["In diesem Beitrag wird eine auf Eliminierung basierende heuristische Funktion für die sequentielle Entscheidungsfindung vorgestellt, die sich zur Steuerung von AND/OR-Suchalgorithmen zur Lösung von Einflussdiagrammen eignet.", "Verallgemeinert die Minibuckets-Inferenzheuristik auf Einflussdiagramme."]} +{"source": "Probabilistic Neural Networks deal with various sources of stochasticity: input noise, dropout, stochastic neurons, parameter uncertainties modeled as random variables, etc.\n In this paper we revisit a feed-forward propagation approach that allows one to estimate for each neuron its mean and variance w.r.t. all mentioned sources of stochasticity. In contrast, standard NNs propagate only point estimates, discarding the uncertainty.\n Methods propagating also the variance have been proposed by several authors in different context. The view presented here attempts to clarify the assumptions and derivation behind such methods, relate them to classical NNs and broaden their scope of applicability.\n The main technical contributions are new approximations for the distributions of argmax and max-related transforms, which allow for fully analytic uncertainty propagation in networks with softmax and max-pooling layers as well as leaky ReLU activations.\n We evaluate the accuracy of the approximation and suggest a simple calibration. Applying the method to networks with dropout allows for faster training and gives improved test likelihoods without the need of sampling.", "target": ["Annäherung von Mittelwert und Varianz des NN-Outputs bei verrauschtem Input / Dropout / unsicheren Parametern. 
Analytische Approximationen für Argmax-, Softmax- und Max-Schichten.", "Die Autoren konzentrieren sich auf das Problem der Unsicherheitsfortpflanzung in DNN.", "In diesem Beitrag wird die Feed-Forward Ausbreitung von Mittelwert und Varianz in Neuronen neu betrachtet, indem das Problem der Ausbreitung von Unsicherheit durch Max-Pooling Schichten und Softmax behandelt wird."]} +{"source": "Generative Adversarial Networks are one of the leading tools in generative modeling, image editing and content creation. \n However, they are hard to train as they require a delicate balancing act between two deep networks fighting a never ending duel. Some of the most promising adversarial models today minimize a Wasserstein objective. It is smoother and more stable to optimize. In this paper, we show that the Wasserstein distance is just one out of a large family of objective functions that yield these properties. By making the discriminator of a GAN robust to adversarial attacks we can turn any GAN objective into a smooth and stable loss. We experimentally show that any GAN objective, including Wasserstein GANs, benefit from adversarial robustness both quantitatively and qualitatively. The training additionally becomes more robust to suboptimal choices of hyperparameters, model architectures, or objective functions.", "target": ["Ein Diskriminator, der sich nicht so leicht durch adversarial Beispiele täuschen lässt, macht das GAN-Training robuster und führt zu einem glatteren Ziel.", "Dieser Beitrag schlägt einen neuen Weg vor, um den Trainingsprozess von GAN zu stabilisieren, indem der Diskriminator so reguliert wird, dass er gegenüber adversarial Beispielen robust ist.", "Die Arbeit schlägt eine systematische Methode für das Training von GANs mit Robustheits-Regularisierungs Terms vor, die ein reibungsloseres Training von GANs ermöglicht. ", "Es wird die Idee vorgestellt, dass das GAN Ziel durch die Robustheit eines Diskriminators gegenüber negativen Störungen geglättet werden kann, was zu besseren Ergebnissen sowohl visuell als auch in Bezug auf die FID führt."]} +{"source": "We propose a method to learn stochastic activation functions for use in probabilistic neural networks.\n First, we develop a framework to embed stochastic activation functions based on Gaussian processes in probabilistic neural networks.\n Second, we analytically derive expressions for the propagation of means and covariances in such a network, thus allowing for an efficient implementation and training without the need for sampling.\n Third, we show how to apply variational Bayesian inference to regularize and efficiently train this model.\n The resulting model can deal with uncertain inputs and implicitly provides an estimate of the confidence of its predictions.\n Like a conventional neural network it can scale to datasets of arbitrary size and be extended with convolutional and recurrent connections, if desired.", "target": ["Wir modellieren die Aktivierungsfunktion jedes Neurons als Gaußschen Prozess und lernen sie zusammen mit dem Gewicht mit Variational Inference.", "Es wird vorgeschlagen, die funktionale Form jeder Aktivierungsfunktion im neuronalen Netz mit Gaußschen Prozessprioritäten zu versehen, um die Form der Aktivierungsfunktionen zu lernen."]} +{"source": "Recent results from linear algebra stating that any matrix can be decomposed into products of diagonal and circulant matrices has lead to the design of compact deep neural network architectures that perform well in practice. 
In this paper, we bridge the gap between these good empirical results \n and the theoretical approximation capabilities of Deep diagonal-circulant ReLU networks. More precisely, we first demonstrate that a Deep diagonal-circulant ReLU networks of\n bounded width and small depth can approximate a deep ReLU network in which the dense matrices are\n of low rank. Based on this result, we provide new bounds on the expressive power and universal approximativeness of this type of networks. We support our experimental results with thorough experiments on a large, real world video classification problem.", "target": ["Wir bieten eine theoretische Studie über die Eigenschaften von tiefen zirkulant-diagonalen ReLU-Netzen und zeigen, dass sie universelle Approximatoren mit begrenzter Breite sind.", "In dem Beitrag wird vorgeschlagen, zirkulierende und diagonale Matrizen zu verwenden, um die Berechnung zu beschleunigen und den Speicherbedarf in neuronalen Netzen zu verringern.", "Diese Arbeit beweist, dass diagonal-zirkulierende ReLU-Netze mit begrenzter Breite (DC-ReLU) universelle Approximatoren sind."]} +{"source": "Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation, in which a user directly specifies the navigation of a drone camera relative to a specified object of interest. The system relies on minimal environmental information and combines both manual and automated control mechanisms to give users the freedom to remotely explore an environment with efficiency and accuracy. A lab study shows that StarHopper offers an efficiency gain of 35.4% over manual piloting, complimented by an overall user preference towards our object-centric navigation system.", "target": ["StarHopper ist ein neuartiges Touchscreen-Interface für die effiziente und flexible objektzentrierte Kameradrohnen-Navigation.", "Die Autoren skizzieren die von ihnen entwickelte neue Drohnensteuerungsschnittstelle StarHopper, die automatisierte und manuelle Steuerung in einer neuen hybriden Navigationsschnittstelle kombiniert und sich durch den Einsatz einer zusätzlichen Überkopfkamera von der Annahme befreit, dass sich das Zielobjekt bereits im Blickfeld der Drohne befindet.", "In diesem Beitrag wird StarHopper vorgestellt, ein System zur halbautomatischen Drohnennavigation im Rahmen von Ferninspektionen.", "Stellt StarHopper vor, eine Anwendung, die Computer-Vision-Techniken mit Touch-Eingabe nutzt, um die Drohnensteuerung mit einem objektzentrierten Ansatz zu unterstützen."]} +{"source": "Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. 
In this paper, we propose a model, called \"bi-directional block self-attention network (Bi-BloSAN)\", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN.", "target": ["Ein Selbstbeobachtungsnetzwerk für RNN/CNN-freie Sequenzkodierung mit geringem Speicherverbrauch, hochgradig parallelisierbarer Berechnung und modernster Leistung bei verschiedenen NLP-Aufgaben.", "Es wird vorgeschlagen, die Selbstaufmerksamkeit auf zwei Ebenen anzuwenden, um den Speicherbedarf in aufmerksamkeitsbasierten Modellen mit vernachlässigbaren Auswirkungen auf die Geschwindigkeit zu begrenzen.", "In diesem Beitrag wird ein bidirektionales Block-Selbstaufmerksamkeitsmodell als Allzweck-Encoder für verschiedene Sequenzmodellierungsaufgaben im NLP vorgestellt."]} +{"source": "End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.", "target": ["Ein neues hochmodernes Modell für die Beantwortung von Fragen mit mehreren Beweisen unter Verwendung grobkörniger, feinkörniger, hierarchischer Aufmerksamkeit.", "Schlägt eine Methode für die Multi-Hop-QS vor, die auf zwei getrennten Modulen (grobkörnige und feinkörnige Module) basiert.", "Dieses Arbeit schlägt eine interessante Grobkorn-Feinkorn Co-Attention Netzwerk Architektur zur Beantwortung von Fragen mit mehreren Evidenzen vor.", "Konzentriert sich auf Multi-Choice QA und schlägt ein Framework für die Grob- bis Feinbewertung vor."]} +{"source": "Generative adversarial networks (GANs) are an expressive class of neural generative models with tremendous success in modeling high-dimensional continuous measures. In this paper, we present a scalable method for unbalanced optimal transport (OT) based on the generative-adversarial framework. 
We formulate unbalanced OT as a problem of simultaneously learning a transport map and a scaling factor that push a source measure to a target measure in a cost-optimal manner. We provide theoretical justification for this formulation, showing that it is closely related to an existing static formulation by Liero et al. (2018). We then propose an algorithm for solving this problem based on stochastic alternating gradient updates, similar in practice to GANs, and perform numerical experiments demonstrating how this methodology can be applied to population modeling.", "target": ["Wir schlagen eine neue Methode für den unausgewogenen optimalen Transport unter Verwendung generativer adversarialer Netzwerke vor.", "Die Autoren betrachten das unausgewogene optimale Transportproblem zwischen zwei Maßen mit unterschiedlicher Gesamtmasse unter Verwendung eines stochastischen Min-Max Algorithmus und lokaler Skalierung.", "Die Autoren schlagen einen Ansatz zur Schätzung des unausgewogenen optimalen Transports zwischen Stichprobenmaßen vor, der gut in der Dimension und in der Anzahl der Stichproben skaliert.", "Die Arbeit führt eine statische Formulierung für unausgewogenen optimalen Transport durch gleichzeitiges Lernen einer Transportzuordnung T und eines Skalierungsfaktors xi ein."]} +{"source": "Extracting saliency maps, which indicate parts of the image important to classification, requires many tricks to achieve satisfactory performance when using classifier-dependent methods. Instead, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just one given in advance. We observe that the proposed approach extracts higher quality saliency maps and outperforms existing weakly-supervised localization techniques, setting the new state of the art result on the ImageNet dataset.", "target": ["Wir schlagen eine neue Methode zur Extraktion von Auffälligkeitszuordnungen vor, die zu einer höheren Qualität der Zuordnungen führt.", "Schlägt eine klassifikatorunabhängige Methode zur Extraktion von Saliency Zuordnungen vor.", "In diesem Beitrag wird ein neuer Saliency Zuordnungen Extraktor vorgestellt, der die Ergebnisse des Standes der Technik zu verbessern scheint.", "Die Autoren argumentieren, dass eine extrahierte Saliency Zuordnung, die direkt von einem Modell abhängt, für einen anderen Klassifikator möglicherweise nicht nützlich ist, und schlagen ein Schema zur Annäherung der Lösung vor."]} +{"source": "Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs). However, previous methods often fail in challenging cases, in particular, when an image has multiple target instances and a translation task involves significant changes in shape, e.g., translating pants to skirts in fashion images. To tackle the issues, we propose a novel method, coined instance-aware GAN (InstaGAN), that incorporates the instance information (e.g., object segmentation masks) and improves multi-instance transfiguration. The proposed method translates both an image and the corresponding set of instance attributes while maintaining the permutation invariance property of the instances. To this end, we introduce a context preserving loss that encourages the network to learn the identity function outside of target instances. 
We also propose a sequential mini-batch inference/training technique that handles multiple instances with a limited GPU memory and enhances the network to generalize better for multiple instances. Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases. Code and results are available in https://github.com/sangwoomo/instagan", "target": ["Wir schlagen eine neue Methode vor, um die Menge der Instanzattribute für die Bild-zu-Bild Übersetzung einzubeziehen.", "Diese Arbeit schlägt eine Methode - InstaGAN - vor, die auf CycleGAN aufbaut, indem sie Instanzinformationen in Form von Segmentierungsmasken pro Instanz berücksichtigt, mit Ergebnissen, die mit CycleGAN und anderen Baselines vergleichbar sind.", " Schlägt vor, instanzspezifische Segmentierungsmasken für das Problem der ungepaarten Bild-zu-Bild Übersetzung hinzuzufügen."]} +{"source": "Deep neural networks (DNNs) generalize remarkably well without explicit regularization even in the strongly over-parametrized regime where classical learning theory would instead predict that they would severely overfit. While many proposals for some kind of implicit regularization have been made to rationalise this success, there is no consensus for the fundamental reason why DNNs do not strongly overfit. In this paper, we provide a new explanation. By applying a very general probability-complexity bound recently derived from algorithmic information theory (AIT), we argue that the parameter-function map of many DNNs should be exponentially biased towards simple functions. We then provide clear evidence for this strong simplicity bias in a model DNN for Boolean functions, as well as in much larger fully connected and convolutional networks trained on CIFAR10 and MNIST.\n As the target functions in many real problems are expected to be highly structured, this intrinsic simplicity bias helps explain why deep networks generalize well on real world problems.\n This picture also facilitates a novel PAC-Bayes approach where the prior is taken over the DNN input-output function space, rather than the more conventional prior over parameter space. If we assume that the training algorithm samples parameters close to uniformly within the zero-error region then the PAC-Bayes theorem can be used to guarantee good expected generalization for target functions producing high-likelihood training sets. By exploiting recently discovered connections between DNNs and Gaussian processes to estimate the marginal likelihood, we produce relatively tight generalization PAC-Bayes error bounds which correlate well with the true error on realistic datasets such as MNIST and CIFAR10 and for architectures including convolutional and fully connected networks.", "target": ["Die Parameter-Funktionszuordnung von tiefen Netzwerken ist stark verzerrt; dies kann erklären, warum sie verallgemeinern. 
Wir verwenden PAC-Bayes und Gauß-Prozesse, um nicht-variable Grenzen zu erhalten.", "Die Arbeit untersucht die Generalisierungsfähigkeiten von tiefen neuronalen Netzen mit Hilfe der PAC-Bayesianischen Lerntheorie und empirisch gestützten Intuitionen.", "In diesem Beitrag wird eine Erklärung für das Generalisierungsverhalten von großen, überparametrisierten neuronalen Netzen vorgeschlagen, indem behauptet wird, dass die Parameter-Funktionszuordnung in neuronalen Netzen auf \"einfache\" Funktionen ausgerichtet ist und das Generalisierungsverhalten gut ist, wenn das Zielkonzept ebenfalls \"einfach\" ist."]} +{"source": "We establish a theoretical link between evolutionary algorithms and variational parameter optimization of probabilistic generative models with binary hidden variables.\n While the novel approach is independent of the actual generative model, here we use two such models to investigate its applicability and scalability: a noisy-OR Bayes Net (as a standard example of binary data) and Binary Sparse Coding (as a model for continuous data).\n\n Learning of probabilistic generative models is first formulated as approximate maximum likelihood optimization using variational expectation maximization (EM).\n We choose truncated posteriors as variational distributions in which discrete latent states serve as variational parameters. In the variational E-step,\n the latent states are then \noptimized according to a tractable free-energy objective . Given a data point, we can show that evolutionary algorithms can be used for the variational optimization loop by (A)~considering the bit-vectors of the latent states as genomes of individuals, and by (B)~defining the fitness of the\n individuals as the (log) joint probabilities given by the used generative model.\n\n As a proof of concept, we apply the novel evolutionary EM approach to the optimization of the parameters of noisy-OR Bayes nets and binary sparse coding on artificial and real data (natural image patches). Using point mutations and single-point cross-over for the evolutionary algorithm, we find that scalable variational EM algorithms are obtained which efficiently improve the data likelihood. In general we believe that, with the link established here, standard as well as recent results in the field of evolutionary optimization can be leveraged to address the difficult problem of parameter optimization in generative models.", "target": ["Wir stellen Evolutionary EM als einen neuartigen Algorithmus für das unbeaufsichtigte Training generativer Modelle mit binären latenten Variablen vor, der eine enge Verbindung zwischen variationalem EM und evolutionärer Optimierung herstellt.", "Der Beitrag stellt eine Kombination aus evolutionärer Berechnung und Variations-EM für Modelle mit binären latenten Variablen vor, die durch eine partikelbasierte Approximation dargestellt werden.", "In diesem Beitrag wird der Versuch unternommen, Trainingsalgorithmen mit Erwartungsmaximierung und evolutionäre Algorithmen eng zu integrieren."]} +{"source": "While deep neural networks have achieved groundbreaking prediction results in many tasks, there is a class of data where existing architectures are not optimal -- sequences of probability distributions. Performing forward prediction on sequences of distributions has many important applications. However, there are two main challenges in designing a network model for this task. First, neural networks are unable to encode distributions compactly as each node encodes just a real value. 
A recent work of Distribution Regression Network (DRN) solved this problem with a novel network that encodes an entire distribution in a single node, resulting in improved accuracies while using much fewer parameters than neural networks. However, despite its compact distribution representation, DRN does not address the second challenge, which is the need to model time dependencies in a sequence of distributions. In this paper, we propose our Recurrent Distribution Regression Network (RDRN) which adopts a recurrent architecture for DRN. The combination of compact distribution representation and shared weights architecture across time steps makes RDRN suitable for modeling the time dependencies in a distribution sequence. Compared to neural networks and DRN, RDRN achieves the best prediction performance while keeping the network compact.", "target": ["Wir schlagen ein effizientes rekurrentes Netzwerkmodell für Forward Prediction bei zeitlich variierenden Verteilungen vor.", "In diesem Beitrag wird eine Methode zur Erstellung neuronaler Netze vorgeschlagen, die historische Verteilungen auf Verteilungen abbildet, und die Methode wird auf verschiedene Aufgaben der Verteilungsvorhersage angewendet.", "Schlägt ein Recurrent Distribution Regression Network vor, das eine rekurrente Architektur auf einem früheren Distribution Regression Network Modell verwendet.", "Diese Arbeit befasst sich mit der Regression über Wahrscheinlichkeitsverteilungen durch die Untersuchung zeitlich variierender Verteilungen in einem rekurrenten neuronalen Netz."]} +{"source": "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).", "target": ["Ein neuartiger Ansatz zur Verarbeitung graphenstrukturierter Daten durch neuronale Netze, der die Aufmerksamkeit auf die Nachbarschaft eines Knotens lenkt. Erzielt Spitzenergebnisse bei transduktiven Zitationsnetzwerk Aufgaben und einer induktiven Protein-Protein Interaktionsaufgabe.", "In diesem Beitrag wird eine neue Methode zur Klassifizierung von Knoten in einem Graphen vorgeschlagen, die in halbüberwachten Szenarien und auf einem völlig neuen Graphen eingesetzt werden kann. 
", "Die Arbeit stellt eine neuronale Netzarchitektur vor, die mit graphisch strukturierten Daten arbeitet, die Graph Attention Networks.", "Bietet eine faire und nahezu umfassende Diskussion über den Stand der Technik beim Lernen von Vektordarstellungen für die Knoten eines Graphen."]} +{"source": "While bigger and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger 'teacher' network as input and outputs a compressed 'student' network derived from the 'teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large 'teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10x for models such as ResNet-34 while maintaining similar performance to the input 'teacher' network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller 'teacher' networks can be used to rapidly speed up training on larger 'teacher' networks.", "target": ["Ein neuartiger, auf Reinforcement Learning basierender Ansatz zur Komprimierung tiefer neuronaler Netze mit Wissensdestillation.", "In diesem Beitrag wird vorgeschlagen, anstelle von vordefinierten Heuristiken Reinforcement Learning einzusetzen, um die Struktur des komprimierten Modells im Prozess der Wissensdestillation zu bestimmen.", "Stellt eine prinzipielle Methode der Netz-zu-Netz Komprimierung vor, die Policy-Gradienten zur Optimierung von zwei Strategien verwendet, die ein starkes Lehrermodell in ein starkes, aber kleineres Schülermodell komprimieren."]} +{"source": "Recent advances in conditional image generation tasks, such as image-to-image translation and image inpainting, are largely accounted to the success of conditional GAN models, which are often optimized by the joint use of the GAN loss with the reconstruction loss. However, we reveal that this training recipe shared by almost all existing methods causes one critical side effect: lack of diversity in output samples. In order to accomplish both training stability and multimodal output generation, we propose novel training schemes with a new set of losses named moment reconstruction losses that simply replace the reconstruction loss. We show that our approach is applicable to any conditional generation tasks by performing thorough experiments on image-to-image translation, super-resolution and image inpainting using Cityscapes and CelebA dataset. 
Quantitative evaluations also confirm that our methods achieve a great diversity in outputs while retaining or even improving the visual fidelity of generated samples.", "target": ["Wir beweisen, dass der Modus-Kollaps in bedingten GANs größtenteils auf ein Missverhältnis zwischen Rekonstruktionsverlust und GAN-Verlust zurückzuführen ist, und stellen eine Reihe neuartiger Verlustfunktionen als Alternativen zum Rekonstruktionsverlust vor.", "Die Arbeit schlägt eine Modifikation des traditionellen bedingten GAN-Ziels vor, um eine vielfältige, multimodale Erzeugung von Bildern zu fördern. ", "In diesem Beitrag wird eine Alternative zu L1/L2-Fehlern vorgeschlagen, die beim Training von bedingten GANs als Ergänzung zu den Verlusten der Gegner verwendet werden."]} +{"source": "Generative models are important tools to capture and investigate the properties of complex empirical data. Recent developments such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) use two very similar, but \\textit{reverse}, deep convolutional architectures, one to generate and one to extract information from data. Does learning the parameters of both architectures obey the same rules? We exploit the causality principle of independence of mechanisms to quantify how the weights of successive layers adapt to each other. Using the recently introduced Spectral Independence Criterion, we quantify the dependencies between the kernels of successive convolutional layers and show that those are more independent for the generative process than for information extraction, in line with results from the field of causal inference. In addition, our experiments on generation of human faces suggest that more independence between successive layers of generators results in improved performance of these architectures.\n", "target": ["Wir verwenden kausale Schlussfolgerungen, um die Architektur von generativen Modellen zu charakterisieren.", "In diesem Beitrag wird die Beschaffenheit von Convolutional Filtern im Kodierer und Dekodierer einer VAE sowie in einem Generator und einem Diskriminator eines GAN untersucht.", "Diese Arbeit nutzt das Kausalitätsprinzip, um zu quantifizieren, wie sich die Gewichte aufeinanderfolgender Schichten aneinander anpassen."]} +{"source": "Many deep reinforcement learning approaches use graphical state representations,\n this means visually distinct games that share the same underlying structure cannot\n effectively share knowledge. This paper outlines a new approach for learning\n underlying game state embeddings irrespective of the visual rendering of the game\n state. We utilise approaches from multi-task learning and domain adaption in\n order to place visually distinct game states on a shared embedding manifold. 
We\n present our results in the context of deep reinforcement learning agents.", "target": ["Ein Ansatz zum Erlernen eines gemeinsamen Einbettungsraums zwischen visuell unterschiedlichen Spielen.", "Ein neuer Ansatz zum Erlernen der zugrundeliegenden Struktur visuell unterschiedlicher Spiele, der Convolutional Layers zur Verarbeitung von Eingabebildern, asynchrone Advantage Actor Critic für tiefes Reinforcement Learning und einen gegnerischen Ansatz kombiniert, um die Einbettungsrepräsentation unabhängig von der visuellen Repräsentation von Spielen zu machen.", "Es wird eine Methode zum Erlernen einer Strategie für visuell unterschiedliche Spiele durch die Anpassung von Deep Reinforcement Learning vorgestellt.", "In diesem Beitrag wird eine Agentenarchitektur diskutiert, die eine gemeinsame Darstellung verwendet, um mehrere Aufgaben mit unterschiedlichen visuellen Statistiken auf Sprite-Ebene zu trainieren."]} +{"source": "We study discrete time dynamical systems governed by the state equation $h_{t+1}=ϕ(Ah_t+Bu_t)$. Here A,B are weight matrices, ϕ is an activation function, and $u_t$ is the input data. This relation is the backbone of recurrent neural networks (e.g. LSTMs) which have broad applications in sequential learning tasks. We utilize stochastic gradient descent to learn the weight matrices from a finite input/state trajectory $(u_t,h_t)_{t=0}^N$. We prove that SGD estimate linearly converges to the ground truth weights while using near-optimal sample size. Our results apply to increasing activations whose derivatives are bounded away from zero. The analysis is based on i) an SGD convergence result with nonlinear activations and ii) careful statistical characterization of the state vector. Numerical experiments verify the fast convergence of SGD on ReLU and leaky ReLU in consistence with our theory.", "target": ["Wir untersuchen die Zustandsgleichung eines rekurrenten neuronalen Netzes. Wir zeigen, dass SGD die unbekannte Dynamik aus wenigen Input/Output-Beobachtungen unter geeigneten Annahmen effizient erlernen kann.", "Die Arbeit untersucht zeitdiskrete dynamische Systeme mit einer nichtlinearen Zustandsgleichung und beweist, dass die Ausführung von SGD auf einer Trajektorie fester Länge logarithmische Konvergenz ergibt.", "Diese Arbeit befasst sich mit dem Problem des Lernens eines nichtlinearen dynamischen Systems, bei dem die Ausgabe gleich dem Zustand ist. ", "In diesem Beitrag wird die Fähigkeit von SGD untersucht, die Dynamik eines linearen Systems und die nichtlineare Aktivierung zu erlernen."]} +{"source": "Although deep neural networks show their extraordinary power in various tasks, they are not feasible for deploying such large models on embedded systems due to high computational cost and storage space limitation. The recent work knowledge distillation (KD) aims at transferring model knowledge from a well-trained teacher model to a small and fast student model which can significantly help extending the usage of large deep neural networks on portable platform. In this paper, we show that, by properly defining the neuron manifold of deep neuron network (DNN), we can significantly improve the performance of student DNN networks through approximating neuron manifold of powerful teacher network. To make this, we propose several novel methods for learning neuron manifold from DNN model. Empowered with neuron manifold knowledge, our experiments show the great improvement across a variety of DNN architectures and training data. 
Compared with other KD methods, our Neuron Manifold Transfer (NMT) has best transfer ability of the learned features.", "target": ["Eine neue Methode zur Wissensdestillation für das Transferlernen.", "In der Arbeit wird eine Methode zur Wissensdestillation vorgestellt, die das vorgeschlagene Konzept der Neuronenverteiler nutzt. ", "Schlägt eine Methode zur Wissensdestillation vor, bei der die neuronalen Verteiler als übertragenes Wissen betrachtet werden."]} +{"source": "We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve similar (and in many cases better) ImageNet classification accuracy than individually trained models of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications including COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks", "target": ["Wir stellen eine einfache und allgemeine Methode vor, um ein einzelnes neuronales Netz zu trainieren, das mit verschiedenen Breiten (Anzahl der Kanäle in einer Schicht) ausgeführt werden kann, was sofortige und adaptive Kompromisse zwischen Genauigkeit und Effizienz zur Laufzeit ermöglicht.", "In dem Beitrag wird vorgeschlagen, verschiedene Größenmodelle in einem gemeinsamen Netz zu kombinieren, was die Erkennungsleistung erheblich verbessert.", "In dieser Arbeit wird ein einziges ausführbares Netz mit unterschiedlichen Breiten trainiert."]} +{"source": "Measuring visual (dis)similarity between two or more instances within a data distribution is a fundamental task in many applications, specially in image retrieval. Theoretically, non-metric distances are able to generate a more complex and accurate similarity model than metric distances, provided that the non-linear data distribution is precisely captured by the similarity model. In this work, we analyze a simple approach for deep learning networks to be used as an approximation of non-metric similarity functions and we study how these models generalize across different image retrieval datasets.", "target": ["Ähnlichkeitsnetz zum Erlernen einer nicht-metrischen visuellen Ähnlichkeitsschätzung zwischen einem Bildpaar.", "Die Autoren schlagen ein lernendes Ähnlichkeitsmaß für visuelle Ähnlichkeit vor und erzielen damit eine Verbesserung in sehr bekannten Datensätzen von Oxford und Paris für die Bildwiedererkennung.", "In dem Beitrag wird argumentiert, dass es besser ist, nicht-metrische Entfernungen anstelle von metrischen Entfernungen zu verwenden."]} +{"source": "Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied.\n However, it is often the case that data are abundant in some domains but scarce in others. 
Domain adaptation deals with the challenge of adapting a model trained from a data-rich source domain to perform well in a data-poor target domain. In general, this requires learning plausible mappings between domains. CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint. However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data. In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction. We explore digit classification in a low-resource setting in supervised, semi and unsupervised situation, as well as high resource unsupervised. In low-resource supervised setting, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require high-resource unlabeled target domain. Moreover, using only few unsupervised target data, our approach can still outperforms many high-resource unsupervised models. Our model also outperforms on USPS to MNIST and synthetic digit to SVHN for high resource unsupervised adaptation. In speech domains, we similarly adopt a speech recognition model from each domain as the task specific model. Our approach improves absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.", "target": ["Ein neues zyklisches kontradiktorisches Lernen, ergänzt durch ein Modell für Hilfsaufgaben, das die Leistung der Bereichsanpassung in überwachten und unbeaufsichtigten Situationen mit geringen Ressourcen verbessert.", "Es wird eine Erweiterung der zykluskonsistenten adversen Anpassungsmethoden vorgeschlagen, um die Domänenanpassung zu bewältigen, wenn nur begrenzte überwachte Zieldaten verfügbar sind.", "In diesem Beitrag wird ein Ansatz zur Domänenanpassung vorgestellt, der auf der Idee des zyklischen GAN basiert, und es werden zwei verschiedene Algorithmen vorgeschlagen."]} +{"source": "Nodes residing in different parts of a graph can have similar structural roles within their local network topology. The identification of such roles provides key insight into the organization of networks and can also be used to inform machine learning on graphs. However, learning structural representations of nodes is a challenging unsupervised-learning task, which typically involves manually specifying and tailoring topological features for each node. Here we develop GraphWave, a method that represents each node’s local network neighborhood via a low-dimensional embedding by leveraging spectral graph wavelet diffusion patterns. We prove that nodes with similar local network neighborhoods will have similar GraphWave embeddings even though these nodes may reside in very different parts of the network. Our method scales linearly with the number of edges and does not require any hand-tailoring of topological features. 
We evaluate performance on both synthetic and real-world datasets, obtaining improvements of up to 71% over state-of-the-art baselines.", "target": ["Wir entwickeln eine Methode zum Erlernen von strukturellen Signaturen in Netzwerken, die auf der Diffusion von Spektralgraphen-Wavelets basiert.", "Verwendung spektraler Graph-Wavelet Diffusionsmuster der lokalen Nachbarschaft eines Knotens zur Einbettung des Knotens in einen niedrigdimensionalen Raum.", "In der Arbeit wird eine Methode zum Vergleich von Knoten in einem Graphen auf der Grundlage der Wavelet-Analyse des Graphen-Laplacian abgeleitet. "]} +{"source": "Driving simulators play an important role in vehicle research. However, existing virtual reality simulators do not give users a true sense of presence. UniNet is our driving simulator, designed to allow users to interact with and visualize simulated traffic in mixed reality. It is powered by SUMO and Unity. UniNet's modular architecture allows us to investigate interdisciplinary research topics such as vehicular ad-hoc networks, human-computer interaction, and traffic management. We accomplish this by giving users the ability to observe and interact with simulated traffic in a high fidelity driving simulator. We present a user study that subjectively measures user's sense of presence in UniNet. Our findings suggest that our novel mixed reality system does increase this sensation.", "target": ["Ein Mixed-Reality Fahrsimulator mit Stereokameras und Passthrough-VR wurde in einer Nutzerstudie mit 24 Teilnehmern evaluiert.", "Er schlägt ein kompliziertes System zur Fahrsimulation vor.", "In diesem Beitrag wird ein Mixed-Reality Fahrsimulator vorgestellt, der das Gefühl der Anwesenheit verstärkt.", "Er schlägt einen Mixed-Reality Fahrsimulator vor, der die Erzeugung von Verkehr einbezieht und eine verbesserte \"Präsenz\" durch ein MR-System verspricht."]} +{"source": "We consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximation to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows to use information about the integrand to enhance approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis.", "target": ["Quadraturregeln für die Kernel-Approximation.", "In dem Beitrag wird vorgeschlagen, die Kernel-Approximation von Zufallsmerkmalen durch die Verwendung von Quadraturregeln wie stochastischen sphärisch-radialen Regeln zu verbessern.", "Die Autoren schlagen eine neue Version des Random-Feature-Map Ansatzes zur näherungsweisen Lösung großer Kernel-Probleme vor.", "In diesem Beitrag wird gezeigt, dass die Techniken von Genz & Monahan (1998) verwendet werden können, um einen geringen Fehler bei der Kernel Approximation im Rahmen eines zufälligen Fourier Merkmals zu erreichen, eine neue Methode zur Anwendung von Quadraturregeln zur Verbesserung der Kernel Approximation."]} +{"source": "Human world knowledge is both structured and flexible. When people see an object, they represent it not as a pixel array but as a meaningful arrangement of semantic parts. Moreover, when people refer to an object, they provide descriptions that are not merely true but also relevant in the current context. 
Here, we combine these two observations in order to learn fine-grained correspondences between language and contextually relevant geometric properties of 3D objects. To do this, we employed an interactive communication task with human participants to construct a large dataset containing natural utterances referring to 3D objects from ShapeNet in a wide variety of contexts. Using this dataset, we developed neural listener and speaker models with strong capacity for generalization. By performing targeted lesions of visual and linguistic input, we discovered that the neural listener depends heavily on part-related words and associates these words correctly with the corresponding geometric properties of objects, suggesting that it has learned task-relevant structure linking the two input modalities. We further show that a neural speaker that is `listener-aware' --- that plans its utterances according to how an imagined listener would interpret its words in context --- produces more discriminative referring expressions than an `listener-unaware' speaker, as measured by human performance in identifying the correct object.", "target": ["Wie kann man neuronale Sprecher / Hörer entwickeln, die anhand von Referenzsprache feinkörnige Merkmale von 3D-Objekten lernen?", "Die Autoren stellen eine Studie über das Lernen von 3D Objekten vor, in der sie einen Datensatz von referenziellen Ausdrücken sammeln und verschiedene Modelle trainieren, indem sie mit einer Reihe von architektonischen Entscheidungen experimentieren."]} +{"source": "Object-based factorizations provide a useful level of abstraction for interacting with the world. Building explicit object representations, however, often requires supervisory signals that are difficult to obtain in practice. We present a paradigm for learning object-centric representations for physical scene understanding without direct supervision of object properties. Our model, Object-Oriented Prediction and Planning (O2P2), jointly learns a perception function to map from image observations to object representations, a pairwise physics interaction function to predict the time evolution of a collection of objects, and a rendering function to map objects back to pixels. For evaluation, we consider not only the accuracy of the physical predictions of the model, but also its utility for downstream tasks that require an actionable representation of intuitive physics. After training our model on an image prediction task, we can use its learned representations to build block towers more complicated than those observed during training.", "target": ["Wir stellen einen Rahmen für das Erlernen von objektzentrierten Repräsentationen vor, die sich für die Planung von Aufgaben eignen, die ein Verständnis der Physik erfordern.", "Die Arbeit stellt eine Plattform für die Vorhersage von Bildern von Objekten vor, die unter der Wirkung von Gravitationskräften miteinander interagieren.", "Die Arbeit stellt eine Methode vor, die lernt, \"Blocktürme\" aus einem gegebenen Bild zu reproduzieren.", "Schlägt eine Methode vor, mit der man lernt, über die physische Interaktion verschiedener Objekte nachzudenken, ohne dass die Eigenschaften der Objekte überwacht werden."]} +{"source": "We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. 
For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.", "target": ["Wir liefern effizient überprüfbare notwendige und hinreichende Bedingungen für globale Optimalität in tiefen linearen neuronalen Netzen, mit einigen ersten Erweiterungen auf nichtlineare Bedingungen.", "Der Artikel enthält Bedingungen für die globale Optimalität der Verlustfunktion von tiefen linearen neuronalen Netzen.", "Die Arbeit enthält theoretische Ergebnisse über die Existenz lokaler Minima in der Zielfunktion von tiefen neuronalen Netzen.", "Untersuchung einiger theoretischer Eigenschaften von tiefen linearen Netzen."]} +{"source": "Recurrent auto-encoder model can summarise sequential data through an encoder structure into a fixed-length vector and then reconstruct into its original sequential form through the decoder structure. The summarised information can be used to represent time series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction. The fixed-length vector can therefore represent features only in the selected dimensions. In addition, we propose using rolling fixed window approach to generate samples. The change of time series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed through additional visualisation and unsupervised clustering techniques. \n\n This proposed method can be applied in large-scale industrial processes for sensors signal analysis purpose where clusters of the vector representations can be used to reflect the operating states of selected aspects of the industrial system.", "target": ["Verwendung eines rekurrenten Auto-Encoder Modells zur Extraktion mehrdimensionaler Zeitreihenmerkmale", "Dieser Text beschreibt eine Anwendung des rekurrenten Autoencoders zur Analyse von mehrdimensionalen Zeitreihen.", "Die Arbeit beschreibt ein Sequenz zu Sequenz Auto-Encoder Modell, das verwendet wird, um zu lernen, Darstellungen von Sequenzen, die zeigen, dass für ihre Anwendung, eine bessere Leistung erzielt wird, wenn das Netzwerk nur trainiert wird, um eine Teilmenge der Daten Messungen zu rekonstruieren. ", "Schlägt eine Strategie vor, die sich am rekurrenten Autoencoder Modell orientiert, so dass ein Clustering mehrdimensionaler Zeitreihendaten auf der Grundlage von Kontextvektoren durchgeführt werden kann."]} +{"source": "We view molecule optimization as a graph-to-graph translation problem. The goal is to learn to map from one molecular graph to another with better properties based on an available corpus of paired molecules. Since molecules can be optimized in different ways, there are multiple viable translations for each input graph. A key challenge is therefore to model diverse translation outputs. Our primary contributions include a junction tree encoder-decoder for learning diverse graph translations along with a novel adversarial training method for aligning distributions of molecules. Diverse output distributions in our model are explicitly realized by low-dimensional latent vectors that modulate the translation process. 
We evaluate our model on multiple molecule optimization tasks and show that our model outperforms previous state-of-the-art baselines by a significant margin. \n", "target": ["Wir stellen ein Graph-zu-Graph Encoder-Decoder Framework für das Lernen verschiedener Graphübersetzungen vor.", "Schlägt ein Graph-zu-Graph Übersetzungsmodell für die Moleküloptimierung vor, das von der Analyse übereinstimmender Molekülpaare inspiriert ist.", "Erweiterung von JT-VAE auf das Szenario der Übersetzung von Graphen in Graphen durch Hinzufügen der latenten Variable zur Erfassung der Multimodalität und einer adversen Regularisierung im latenten Raum.", "Er schlägt ein recht komplexes System vor, das viele verschiedene Entscheidungen und Komponenten umfasst, um ausgehend von einem gegebenen Korpus chemische Zusammensetzungen mit verbesserten Eigenschaften zu erhalten."]} +{"source": "Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers.", "target": ["Wir lernen einen schnellen neuronalen Löser für PDEs, der Konvergenzgarantien hat.", "Entwickelt eine Methode zur Beschleunigung der Finite-Differenzen Methode bei der Lösung von PDEs und schlägt einen überarbeiteten Rahmen für die Festpunktiteration nach der Diskretisierung vor.", "Die Autoren schlagen eine lineare Methode zur Beschleunigung von PDE-Lösern vor."]} +{"source": "Variational Bayesian neural networks (BNN) perform variational inference over weights, but it is difficult to specify meaningful priors and approximating posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes is equal to the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors which entail rich structure, including Gaussian processes and implicit stochastic processes. 
Empirically, we find that fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and can scale to large datasets.", "target": ["Wir führen funktionale Variationsinferenz auf stochastischen Prozessen durch, die durch Bayes'sche neuronale Netze definiert sind.", "Anpassung von variationalen Bayesian Neural Network Approximationen in funktionaler Form und unter Berücksichtigung der Anpassung an einen stochastischen Prozess Prior implizit über Stichproben.", "Stellt ein neuartiges ELBO Ziel für das Training von BNNs vor, das es ermöglicht, aussagekräftigere Priors im Modell zu kodieren als die weniger informativen Gewichts Priors, die in der Literatur beschrieben werden.", "Stellt einen neuen Variationsinferenzalgorithmus für Bayes'sche neuronale Netzmodelle vor, bei dem der Prior funktional und nicht über einen Prior über Gewichte spezifiziert wird. "]} +{"source": "Words are not created equal. In fact, they form an aristocratic graph with a latent hierarchical structure that the next generation of unsupervised learned word embeddings should reveal. In this paper, justified by the notion of delta-hyperbolicity or tree-likeliness of a space, we propose to embed words in a Cartesian product of hyperbolic spaces which we theoretically connect to the Gaussian word embeddings and their Fisher geometry. This connection allows us to introduce a novel principled hypernymy score for word embeddings. Moreover, we adapt the well-known Glove algorithm to learn unsupervised word embeddings in this type of Riemannian manifolds. We further explain how to solve the analogy task using the Riemannian parallel transport that generalizes vector arithmetics to this new type of geometry. Empirically, based on extensive experiments, we prove that our embeddings, trained unsupervised, are the first to simultaneously outperform strong and popular baselines on the tasks of similarity, analogy and hypernymy detection. In particular, for word hypernymy, we obtain new state-of-the-art on fully unsupervised WBLESS classification accuracy.", "target": ["Wir betten Wörter in den hyperbolischen Raum ein und stellen die Verbindung zu den Gaußschen Worteinbettungen her.", "In diesem Beitrag wird die Glove Worteinbettung an einen hyperbolischen Raum angepasst, der durch das Poincare Halbebenenmodell gegeben ist.", "In diesem Beitrag wird ein Ansatz zur Implementierung eines GLOVE basierten hyperbolischen Worteinbettungsmodells vorgeschlagen, das mit Hilfe der Riemannschen Optimierungsmethoden optimiert wird."]} +{"source": "Answering questions about a text frequently requires aggregating information from multiple places in that text. End-to-end neural network models, the dominant approach in the current literature, can theoretically learn how to distill and manipulate representations of the text without explicit supervision about how to do so. We investigate a canonical architecture for this task, the memory network, and analyze how effective it really is in the context of three multi-hop reasoning settings. In a simple synthetic setting, the path-finding task of the bAbI dataset, the model fails to learn the correct reasoning without additional supervision of its attention mechanism. However, with this supervision, it can perform well. On a real text dataset, WikiHop, the memory network gives nearly state-of-the-art performance, but does so without using its multi-hop capabilities. 
A tougher anonymized version of the WikiHop dataset is qualitatively similar to bAbI: the model fails to perform well unless it has additional supervision. We hypothesize that many \"multi-hop\" architectures do not truly learn this reasoning as advertised, though they could learn this reasoning if appropriately supervised.", "target": ["Gedächtnisnetze lernen kein Multi-Hop Denken, es sei denn, wir beaufsichtigen sie.", "Die Behauptung, dass Multi-Hop-Denken nicht einfach direkt zu erlernen ist und eine direkte Überwachung erfordert, und dass ein gutes Abschneiden bei WikiHop nicht unbedingt bedeutet, dass das Modell tatsächlich lernt zu hüpfen.", "Die Arbeit schlägt vor, das bekannte Problem des Lernens von Gedächtnisnetzwerken zu untersuchen, genauer gesagt, die Schwierigkeit der Überwachung des Aufmerksamkeitslernens mit solchen Modellen.", "In diesem Beitrag wird argumentiert, dass das Gedächtnisnetz nicht in der Lage ist, vernünftiges Multi-Hop-Denken zu lernen."]} +{"source": "Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder.", "target": ["Wir stellen generative Netze vor, die nicht mit einem Diskriminator oder einem Encoder gelernt werden müssen; sie werden durch Invertierung eines speziellen Einbettungsoperators erhalten, der durch eine Wavelet-Streuungstransformation definiert ist.", "Stellt Scattering-Transformationen als generative Bildmodelle im Kontext von Generative Adversarial Networks vor und legt dar, warum sie als Gaussianisierungstransformationen mit kontrolliertem Informationsverlust und Invertierbarkeit angesehen werden können. ", "Die Arbeit schlägt ein generatives Modell für Bilder vor, das weder einen Diskriminator (wie bei GANs) noch eine erlernte Einbettung benötigt."]} +{"source": "Recurrent neural networks (RNNs) can model natural language by sequentially ''reading'' input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as ''neural speed reading'', either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. 
The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word.\n A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that \n Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.", "target": ["Wir schlagen ein neues Modell für neuronales Schnelllesen vor, das die inhärente Interpunktionsstruktur eines Textes nutzt, um effektives Sprung- und Überspringverhalten zu definieren.", "Die Arbeit schlägt ein Structural-Jump LSTM Modell zur Beschleunigung des maschinellen Lesens mit zwei Agenten anstelle eines Agenten vor.", "Schlägt ein neues Modell für neuronales Schnelllesen vor, bei dem der neue Leser die Möglichkeit hat, ein Wort oder eine Wortfolge zu überspringen.", "Der Artikel schlägt eine Schnelllesemethode vor, die Skip- und Jump-Aktionen verwendet, und zeigt, dass die vorgeschlagene Methode genauso genau ist wie LSTM, aber viel weniger Rechenaufwand benötigt."]} +{"source": "One of the key challenges of session-based recommender systems is to enhance users’ purchase intentions. In this paper, we formulate the sequential interactions between user sessions and a recommender agent as a Markov Decision Process (MDP). In practice, the purchase reward is delayed and sparse, and may be buried by clicks, making it an impoverished signal for policy learning. Inspired by the prediction error minimization (PEM) and embodied cognition, we propose a simple architecture to augment reward, namely Imagination Reconstruction Network (IRN). Specifically, IRN enables the agent to explore its environment and learn predictive representations via three key components. The imagination core generates predicted trajectories, i.e., imagined items that users may purchase. The trajectory manager controls the granularity of imagined trajectories using the planning strategies, which balances the long-term rewards and short-term rewards. To optimize the action policy, the imagination-augmented executor minimizes the intrinsic imagination error of simulated trajectories by self-supervised reconstruction, while maximizing the extrinsic reward using model-free algorithms. Empirically, IRN promotes quicker adaptation to user interest, and shows improved robustness to the cold-start scenario and ultimately higher purchase performance compared to several baselines. Somewhat surprisingly, IRN using only the purchase reward achieves excellent next-click prediction performance, demonstrating that the agent can \"guess what you like\" via internal planning.", "target": ["Wir schlagen die IRN-Architektur vor, um die spärliche und verzögerte Kaufbelohnung für sitzungsbasierte Empfehlungen zu erweitern.", "In dem Beitrag wird vorgeschlagen, die Leistung von Empfehlungssystemen durch Reinforcement Learning zu verbessern, indem ein Imaginations Rekonstruktions Netzwerk verwendet wird.", "Die Arbeit stellt einen sitzungsbasierten Empfehlungsansatz vor, der sich auf die Käufe der Nutzer statt auf die Klicks konzentriert. "]} +{"source": "The question why deep learning algorithms generalize so well has attracted increasing\n research interest. 
However, most of the well-established approaches,\n such as hypothesis capacity, stability or sparseness, have not provided complete\n explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus\n on the robustness approach (Xu & Mannor, 2012), i.e., if the error of a hypothesis\n will not change much due to perturbations of its training examples, then it\n will also generalize well. As most deep learning algorithms are stochastic (e.g.,\n Stochastic Gradient Descent, Dropout, and Bayes-by-backprop), we revisit the robustness\n arguments of Xu & Mannor, and introduce a new approach – ensemble\n robustness – that concerns the robustness of a population of hypotheses. Through\n the lens of ensemble robustness, we reveal that a stochastic learning algorithm can\n generalize well as long as its sensitiveness to adversarial perturbations is bounded\n in average over training examples. Moreover, an algorithm may be sensitive to\n some adversarial examples (Goodfellow et al., 2015) but still generalize well. To\n support our claims, we provide extensive simulations for different deep learning\n algorithms and different network architectures exhibiting a strong correlation between\n ensemble robustness and the ability to generalize.", "target": ["Theoretische und empirische Erklärung der Verallgemeinerung von stochastischen Deep Learning Algorithmen durch Ensemble-Robustheit.", "Diese Arbeit stellt eine Anpassung der algorithmischen Robustheit von Xu & Mannor '12 vor und präsentiert Lerngrenzen und ein Experiment, das die Korrelation zwischen empirischer Ensemble Robustheit und Generalisierungsfehler zeigt. ", "Schlägt eine Untersuchung der Generalisierungsfähigkeit von Deep Learning Algorithmen unter Verwendung einer Erweiterung des Stabilitätsbegriffs vor, die als Ensemble Robustheit bezeichnet wird, und gibt Grenzen für den Generalisierungsfehler eines randomisierten Algorithmus in Bezug auf den Stabilitätsparameter an und bietet eine empirische Studie, die versucht, Theorie und Praxis zu verbinden.", "Die Arbeit untersuchte die Verallgemeinerungsfähigkeit von Lernalgorithmen unter dem Gesichtspunkt der Robustheit in einem Deep Learning Kontext."]} +{"source": "Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. 
Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.", "target": ["Few-Shot Lernen eines PixelCNN.", "In dem Beitrag wird vorgeschlagen, die Dichteschätzung bei geringer Verfügbarkeit von Trainingsdaten mit Hilfe eines Meta-Lernmodells zu verwenden.", "Dieser Beitrag befasst sich mit dem Problem der One / Few Shot Dichteschätzung, unter Verwendung von Metalearning-Techniken, die auf One / Few Shot überwachtes Lernen angewendet wurden.", "Die Arbeit konzentriert sich auf das Few-Shot Learning mit autoregressiver Dichteschätzung und verbessert PixelCNN mit neuronaler Aufmerksamkeit und Meta Lerntechniken."]} +{"source": "Neural networks exhibit good generalization behavior in the\n over-parameterized regime, where the number of network parameters\n exceeds the number of observations. Nonetheless,\n current generalization bounds for neural networks fail to explain this\n phenomenon. In an attempt to bridge this gap, we study the problem of\n learning a two-layer over-parameterized neural network, when the data is generated by a linearly separable function. In the case where the network has Leaky\n ReLU activations, we provide both optimization and generalization guarantees for over-parameterized networks.\n Specifically, we prove convergence rates of SGD to a global\n minimum and provide generalization guarantees for this global minimum\n that are independent of the network size. \n Therefore, our result clearly shows that the use of SGD for optimization both finds a global minimum, and avoids overfitting despite the high capacity of the model. This is the first theoretical demonstration that SGD can avoid overfitting, when learning over-specified neural network classifiers.", "target": ["Wir zeigen, dass SGD zweischichtige überparametrisierte neuronale Netze mit Leaky ReLU Aktivierungen lernt, die nachweislich auf linear trennbaren Daten generalisieren.", "Die Arbeit untersucht überparametrisierte Modelle, die in der Lage sind, gut generalisierende Lösungen zu erlernen, indem sie ein Netz mit einer verborgenen Schicht und einer festen Ausgangsschicht verwenden.", "In diesem Beitrag wird gezeigt, dass SGD auf einem überparametrisierten Netzwerk bei linear trennbaren Daten immer noch zu einem Klassifikator führen kann, der nachweislich generalisiert."]} +{"source": "A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out {\\it decision states}. These states lie at critical junctions in the state space from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned model with an information bottleneck, we can identify decision states by examining where the model accesses the goal state through the bottleneck. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. 
In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.", "target": ["Das Training von Agenten mit zielgerichteten Informationsengpässen fördert den Transfer und führt zu einem starken Explorationsbonus.", "Schlägt vor, Standard RL Verluste mit der negativen bedingten gegenseitigen Information für die Suche nach Strategien in einer Mehrziel RL Umgebung zu regulieren.", "Diese Arbeit schlägt das Konzept des Entscheidungszustandes vor und schlägt eine KL Divergenzregulierung vor, um die Struktur der Aufgaben zu erlernen und diese Informationen zu nutzen, um die Politik zu ermutigen, die Entscheidungszustände zu besuchen.", "In dem Beitrag wird eine Methode zur Regularisierung zielbezogener Strategien mit einem Term der gegenseitigen Information vorgeschlagen. "]} +{"source": "Many applications in machine learning require optimizing a function whose true gradient is unknown, but where surrogate gradient information (directions that may be correlated with, but not necessarily identical to, the true gradient) is available instead. This arises when an approximate gradient is easier to compute than the full gradient (e.g. in meta-learning or unrolled optimization), or when a true gradient is intractable and is replaced with a surrogate (e.g. in certain reinforcement learning applications or training networks with discrete variables). We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search. We define a search distribution for evolutionary strategies that is elongated along a subspace spanned by the surrogate gradients. This allows us to estimate a descent direction which can then be passed to a first-order optimizer. We analytically and numerically characterize the tradeoffs that result from tuning how strongly the search distribution is stretched along the guiding subspace, and use this to derive a setting of the hyperparameters that works well across problems. Finally, we apply our method to example problems including truncated unrolled optimization and training neural networks with discrete variables, demonstrating improvement over both standard evolutionary strategies and first-order methods (that directly follow the surrogate gradient). We provide a demo of Guided ES at: redacted URL", "target": ["Wir schlagen eine Optimierungsmethode für den Fall vor, dass nur verzerrte Gradienten verfügbar sind - wir definieren einen neuen Gradientenschätzer für dieses Szenario, leiten die Verzerrung und Varianz dieses Schätzers ab und wenden ihn auf Beispielprobleme an.", "Die Autoren schlagen einen Ansatz vor, der die zufällige Suche mit der Surrogate-Gradienteninformation kombiniert, und erörtern den Kompromiss zwischen Varianz und Vorspannung sowie die Optimierung der Hyperparameter.", "In dem Beitrag wird eine Methode zur Verbesserung der Zufallssuche vorgeschlagen, bei der ein Unterraum aus den vorherigen k Surrogate Gradienten gebildet wird.", "In dieser Arbeit wird versucht, die Entwicklung des OpenAI Typs zu beschleunigen, indem eine nicht isotrophe Verteilung mit einer Kovarianzmatrix der Form I + UU^t und externen Informationen wie einem Surrogategradienten zur Bestimmung von U eingeführt werden."]} +{"source": "Point clouds are an important type of geometric data and have widespread use in computer graphics and vision. 
However, learning representations for point clouds is particularly challenging due to their nature as being an unordered collection of points irregularly distributed in 3D space. Graph convolution, a generalization of the convolution operation for data defined over graphs, has been recently shown to be very successful at extracting localized features from point clouds in supervised or semi-supervised tasks such as classification or segmentation. This paper studies the unsupervised problem of a generative model exploiting graph convolution. We focus on the generator of a GAN and define methods for graph convolution when the graph is not known in advance as it is the very output of the generator. The proposed architecture learns to generate localized features that approximate graph embeddings of the output geometry. We also study the problem of defining an upsampling layer in the graph-convolutional generator, such that it learns to exploit a self-similarity prior on the data distribution to sample more effectively.", "target": ["Ein GAN, der Graph Convolutional Operationen mit dynamisch berechneten Graphen aus verborgenen Merkmalen verwendet.", "Die Arbeit schlägt vor, eine Version von GANs speziell für die Erzeugung von Punktwolken mit dem Kernbeitrag der Upsampling-Operation.", "In diesem Beitrag werden Graph Convolutional GANs für unregelmäßige 3D-Punktwolken vorgeschlagen, die gleichzeitig die Domäne und die Merkmale lernen."]} +{"source": "Memorization in over-parameterized neural networks can severely hurt generalization in the presence of mislabeled examples. However, mislabeled examples are hard to avoid in extremely large datasets. We address this problem using the implicit regularization effect of stochastic gradient descent with large learning rates, which we find to be able to separate clean and mislabeled examples with remarkable success using loss statistics. We leverage this to identify and on-the-fly discard mislabeled examples using a threshold on their losses. This leads to On-the-fly Data Denoising (ODD), a simple yet effective algorithm that is robust to mislabeled examples, while introducing almost zero computational overhead. Empirical results demonstrate the effectiveness of ODD on several datasets containing artificial and real-world mislabeled examples.", "target": ["Wir stellen einen schnellen und einfach zu implementierenden Algorithmus vor, der robust gegenüber Datenrauschen ist.", "Die Arbeit zielt darauf ab, potenzielle Beispiele mit Labelrauschen zu entfernen, indem die Beispiele mit großen Verlusten im Trainingsverfahren verworfen werden."]} +{"source": "Binarized Neural Networks (BNNs) have recently attracted significant interest due to their computational efficiency. Concurrently, it has been shown that neural networks may be overly sensitive to ``attacks\" -- tiny adversarial changes in the input -- which may be detrimental to their use in safety-critical domains. Designing attack algorithms that effectively fool trained models is a key step towards learning robust neural networks.\n The discrete, non-differentiable nature of BNNs, which distinguishes them from their full-precision counterparts, poses a challenge to gradient-based attacks. In this work, we study the problem of attacking a BNN through the lens of combinatorial and integer optimization. We propose a Mixed Integer Linear Programming (MILP) formulation of the problem. While exact and flexible, the MILP quickly becomes intractable as the network and perturbation space grow. 
To address this issue, we propose IProp, a decomposition-based algorithm that solves a sequence of much smaller MILP problems. Experimentally, we evaluate both proposed methods against the standard gradient-based attack (PGD) on MNIST and Fashion-MNIST, and show that IProp performs favorably compared to PGD, while scaling beyond the limits of the MILP.", "target": ["Gradientenbasierte Angriffe auf binarisierte neuronale Netze sind aufgrund der Nichtdifferenzierbarkeit solcher Netze nicht wirksam; unser IPROP-Algorithmus löst dieses Problem durch ganzzahlige Optimierung.", "Schlägt einen neuen Algorithmus im Stil der Zielverbreitung vor, um starke adversarial Angriffe auf binarisierte neuronale Netze zu erzeugen.", "In diesem Beitrag wird ein neuer Angriffsalgorithmus auf der Grundlage von MILP für binäre neuronale Netze vorgeschlagen.", "In dieser Arbeit wird ein Algorithmus zur Suche nach adversarial Angriffen auf binäre neuronale Netze vorgestellt, der iterativ die gewünschten Repräsentationen Schicht für Schicht von der Spitze bis zum Eingang findet und effizienter ist als die vollständige Lösung der gemischt-ganzzahligen linearen Programmierung (MILP)."]} +{"source": "Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving a word level perplexity of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. These results constitute a new state-of-the-art in their respective settings.", "target": ["Die Dekodierung des letzten Tokens im Kontext unter Verwendung der vorhergesagten Verteilung der nächsten Token wirkt als Regularisierer und verbessert die Sprachmodellierung.", "Die Autoren führen die Idee der Dekodierung in der Vergangenheit zum Zweck der Regularisierung zur Verbesserung der Komplexität in der Penn Treebank ein.", "Vorschlagen eines zusätzlichen Verlustterms, der beim Training eines LSTM LM verwendet werden kann, und zeigt, dass durch Hinzufügen dieses Verlustterms eine SOTA-Perplexität bei einer Reihe von LM-Benchmarks erreicht werden kann.", "Schlägt eine neue Regularisierungstechnik vor, die mit geringem Aufwand zu der in AWD-LSTM von Merity et al. (2017) verwendeten hinzugefügt werden kann."]} +{"source": "The assumption that data samples are independently identically distributed is the backbone of many learning algorithms. Nevertheless, datasets often exhibit rich structures in practice, and we argue that there exist some unknown orders within the data instances. Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. Specifically, we assume that the instances are sampled from a Markov chain. 
Our goal is to learn the transitional operator of the chain as well as the generation order by maximizing the generation probability under all possible data permutations. One of our key ideas is to use neural networks as a soft lookup table for approximating the possibly huge, but discrete transition matrix. This strategy allows us to amortize the space complexity with a single model and make the transitional operator generalizable to unseen instances. To ensure the learned Markov chain is ergodic, we propose a greedy batch-wise permutation scheme that allows fast training. Empirically, we evaluate the learned Markov chain by showing that GMNs are able to discover orders among data instances and also perform comparably well to state-of-the-art methods on the one-shot recognition benchmark task.", "target": ["Vorschlag zur Beobachtung impliziter Ordnungen in Datensätzen unter dem Gesichtspunkt eines generativen Modells.", "Die Autoren befassen sich mit dem Problem der impliziten Ordnung in einem Datensatz und der Herausforderung, diese wiederherzustellen, und schlagen vor, ein abstandsmetrikfreies Modell zu erlernen, das eine Markov-Kette als generativen Mechanismus der Daten annimmt.", "In dem Beitrag wird Generative Markov Networks vorgeschlagen - ein Deep Learning basierter Ansatz zur Modellierung von Sequenzen und zur Entdeckung von Ordnung in Datensätzen.", "Schlägt vor, die Ordnung einer ungeordneten Datenstichprobe durch Lernen einer Markov-Kette zu erlernen."]} +{"source": "We present a Neural Program Search, an algorithm to generate programs from natural language description and a small number of input / output examples. The algorithm combines methods from Deep Learning and Program Synthesis fields by designing rich domain-specific language (DSL) and defining efficient search algorithm guided by a Seq2Tree model on it. To evaluate the quality of the approach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outperforms sequence-to-sequence model with attention baseline.", "target": ["Programmsynthese aus natürlichsprachlicher Beschreibung und Eingabe-/Ausgabebeispielen mittels Tree-Beam Search über Seq2Tree-Modelle.", "Es wird ein seq2Tree-Modell zur Übersetzung einer Problemstellung in natürlicher Sprache in das entsprechende funktionale Programm in DSL vorgestellt, das eine Verbesserung gegenüber dem seq2seq-Basisansatz darstellt.", "Diese Arbeit befasst sich mit dem Problem der Programmsynthese, wenn eine Problembeschreibung und eine kleine Anzahl von Eingabe-Ausgabe Beispielen vorliegt.", "In diesem Beitrag wird eine Technik zur Programmsynthese vorgestellt, die eine eingeschränkte Grammatik von Problemen beinhaltet, die mit Hilfe eines aufmerksamkeitsgesteuerten Encoder-Decoder Netzwerks durchsucht wird."]} +{"source": "Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. 
Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation on the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds lights on understanding the practical performance of GANs.", "target": ["In diesem Beitrag werden die Diskriminierungs- und Generalisierungseigenschaften von GANs untersucht, wenn die Diskriminatormenge eine eingeschränkte Funktionsklasse wie neuronale Netze ist.", "Gleicht die Kapazitäten von Generator- und Diskriminatorklassen in GANs aus, indem es garantiert, dass induzierte IPMs Metriken und keine Pseudo-Metriken sind.", "Diese Arbeit bietet eine mathematische Analyse der Rolle der Größe der Gegner-/Diskriminatormenge in GANs."]} +{"source": "Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rate, accelerate convergence and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.", "target": ["Alles, was Sie zum Trainieren tiefer Residualnetze brauchen, ist eine gute Initialisierung; Normalisierungsschichten sind nicht erforderlich.", "Es wird eine Methode zur Initialisierung und Normalisierung von tiefen Residual Netzwerken vorgestellt. Diese basiert auf Beobachtungen der Forward- und Backward Explosion in solchen Netzen. Die Leistung der Methode entspricht den besten Ergebnissen, die mit anderen Netzen mit expliziterer Normalisierung erzielt wurden.", "Die Autoren schlagen eine neuartige Methode zur Initialisierung von Residual Networks vor, die durch die Notwendigkeit begründet ist, explodierende/verschwindende Gradienten zu vermeiden.", "Schlägt eine neue Initialisierungsmethode vor, um sehr tiefe RedNets ohne Batch-Norm zu trainieren."]} +{"source": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN's: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. 
SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.", "target": ["Diese Arbeit zielt darauf ab, eine bessere Metrik für unüberwachtes Lernen, wie z.B. Textgenerierung, zu erlernen und zeigt eine deutliche Verbesserung gegenüber SeqGAN.", "Beschreibt einen Ansatz zur Erzeugung von Zeitsequenzen durch Lernen von Zustands-Aktionswerten, wobei der Zustand die bisher erzeugte Sequenz und die Aktion die Wahl des nächsten Wertes ist. ", "Diese Arbeit befasst sich mit dem Problem der Verbesserung der Sequenzgenerierung durch das Erlernen besserer Metriken, insbesondere mit dem Problem des Expositionsbias."]} +{"source": "One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks. For example, generating an image at a low resolution and then learning to refine that into a high resolution image often improves results substantially. Here we explore a novel strategy for decomposing generation for complicated objects in which we first generate latent variables which describe a subset of the observed variables, and then map from these latent variables to the observed space. We show that this allows us to achieve decoupled training of complicated generative models and present both theoretical and experimental results supporting the benefit of such an approach. ", "target": ["Zerlegen Sie die Aufgabe des Lernens eines generativen Modells in das Erlernen von unentwirrten latenten Faktoren für Teilmengen der Daten und das anschließende Erlernen der Verbindung über diese latenten Faktoren. ", "Lokal entwirrte Faktoren für ein hierarchisches generatives Modell für latente Variablen, das als hierarchische Variante der adversarial erlernten Inferenz betrachtet werden kann.", "Der Beitrag untersucht das Potenzial hierarchischer latenter Variablenmodelle für die Generierung von Bildern und Bildsequenzen und schlägt vor, mehrere übereinander gestapelte ALI-Modelle zu trainieren, um eine hierarchische Darstellung der Daten zu erzeugen.", "Die Arbeit zielt darauf ab, die Hierarchien für das Training von GAN in einem hierarchischen Optimierungsplan direkt zu lernen, anstatt von einem Menschen entworfen zu werden."]} +{"source": "Visual grounding of language is an active research field aiming at enriching text-based representations with visual information. In this paper, we propose a new way to leverage visual knowledge for sentence representations. 
Our approach transfers the structure of a visual representation space to the textual space by using two complementary sources of information: (1) the cluster information: the implicit knowledge that two sentences associated with the same visual content describe the same underlying reality and (2) the perceptual information contained within the structure of the visual space. We use a joint approach to encourage beneficial interactions during training between textual, perceptual, and cluster information. We demonstrate the quality of the learned representations on semantic relatedness, classification, and cross-modal retrieval tasks.", "target": ["Wir schlagen ein gemeinsames Modell vor, um visuelles Wissen in Satzrepräsentationen einzubeziehen.", "Die Arbeit schlägt eine Methode vor, um Videos in Verbindung mit Untertiteln zu verwenden, um die Satzeinbettung zu verbessern.", "In diesem Beitrag wird ein Modell für das Lernen von Satzrepräsentationen vorgeschlagen, die auf der Grundlage von Videodaten fundiert sind.", "Vorschlagen einer Methode zur Verbesserung textbasierter Satzeinbettungen durch ein gemeinsames multimodales Framework."]} \ No newline at end of file