{"source": "This paper revisits visual saliency prediction by evaluating the recent advancements in this field such as crowd-sourced mouse tracking-based databases and contextual annotations. We pursue a critical and quantitative approach towards some of the new challenges including the quality of mouse tracking versus eye tracking for model training and evaluation. We extend quantitative evaluation of models in order to incorporate contextual information by proposing an evaluation methodology that allows accounting for contextual factors such as text, faces, and object attributes. The proposed contextual evaluation scheme facilitates detailed analysis of models and helps identify their pros and cons. Through several experiments, we find that (1) mouse tracking data has lower inter-participant visual congruency and higher dispersion, compared to the eye tracking data, (2) mouse tracking data does not totally agree with eye tracking in general and in terms of different contextual regions in specific, and (3) mouse tracking data leads to acceptable results in training current existing models, and (4) mouse tracking data is less reliable for model selection and evaluation. The contextual evaluation also reveals that, among the studied models, there is no single model that performs best on all the tested annotations.", "target": ["顕著性(画像のどこに眼を付けやすいか)の研究において、データセットの作成方法であるマウス追跡と視線追跡でどのような差異が発生するか比較を行った。"]} {"source": "We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural network and latent variable neural network models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than many competing systems. Due to its machine learning architecture, the system is likely to improve with additional data.", "target": ["人間とある程度長い時間(20分ほど)対話できるボットを競うAmazon Alexa Prizeで、準決勝をトップクラスで通過したモデル(平均ターン数は14.5~16とかなり長い)。 テンプレートからSeq2Seqまで計22個のモデルを組み合わせ、応答選択を強化学習で最適化している"]} {"source": "Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. 
We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.", "target": ["RNNの計算を高速化する試み。ポイントとしては、ゲートの重みを更新する際に前回の隠れ層を使わずに済ませることで、並列で計算できるようにした点(下図赤線が時系列で並列に計算可能)。後は要素積+和算なので高速に計算を行うことができる。これをSimple Recurrent Unit (SRU)と命名。"]} {"source": "Convolutional neural networks (CNNs) have been widely used in the computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for the face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train robust CNNs to obtain deep features with the two key learning objectives, inter-class dispersion and intra-class compactness, which are essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks: Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (under 500,000 images and under 20,000 persons), significantly improving the previous results and setting a new state-of-the-art for both face recognition and face verification tasks.", "target": ["画像分類において、単純に分類確率(softmax)だけでなく、当該クラスの中心からの距離をlossに組み込もうという提案(中心は最初に計算してしまうのでなく、学習中ミニバッチにより随時更新されていく)。"]} {"source": "Neural Machine Translation (NMT) has shown remarkable progress over the past few years with production systems now being deployed to end-users. One major drawback of current architectures is that they are expensive to train, typically requiring days to weeks of GPU time to converge. This makes exhaustive hyperparameter search, as is commonly done with other neural network architectures, prohibitively expensive. In this work, we present the first large-scale analysis of NMT architecture hyperparameters. We report empirical results and variance numbers for several hundred experimental runs, corresponding to over 250,000 GPU hours on the standard WMT English to German translation task. Our experiments lead to novel insights and practical advice for building and extending NMT architectures. As part of this contribution, we release an open-source NMT framework that enables researchers to easily experiment with novel techniques and reproduce state-of-the-art results.", "target": ["250,000時間にも及ぶGPUの酷使により得られた、機械翻訳におけるハイパーパラメーターの知見が公開されている。"]} {"source": "This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training.
We empirically demonstrate that: a) deep autoencoder models generalize much better than the shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at this https URL", "target": ["協調フィルタリングのタスクにAutoEncoderで取り組む話。具体的には、userのレーティング結果であるxを復元するように学習させる。これにより、ユーザーが新しく評価を行ったときにそれをネットワークに入れれば新しい評価を元にした他のアイテムに対する評価が得られる。"]} {"source": "Automatic generation of facial images has been well studied after the Generative Adversarial Network (GAN) came out. There exist some attempts at applying the GAN model to the problem of generating facial images of anime characters, but none of the existing work gives a promising result. In this work, we explore the training of GAN models specialized on an anime facial image dataset. We address the issue from both the data and the model aspect, by collecting a cleaner, well-suited dataset and leveraging proper, empirical application of DRAGAN. With quantitative analysis and case studies we demonstrate that our efforts lead to a stable and high-quality model. Moreover, to assist people with anime character design, we build a website (http://make.girls.moe) with our pre-trained model available online, which makes the model easily accessible to the general public.", "target": ["アニメ顔画像特化したGANモデルを検討した。DRAGANの損失関数、生成器をResNetにし、データセットを選別し、学習率を変化させ、SRGANを使用した。高品質なモデルを作成できた。下記ウェブサイトからモデルを一般に公開した。 http://make.girls.moe/#/"]} {"source": "This paper presents an automatic image synthesis method to transfer the style of an example image to a content image. When standard neural style transfer approaches are used, the textures and colours in different semantic regions of the style image are often applied inappropriately to the content image, ignoring its semantic layout, and ruining the transfer result. In order to reduce or avoid such effects, we propose a novel method based on automatically segmenting the objects and extracting their soft semantic masks from the style and content images, in order to preserve the structure of the content image while having the style transferred. Each soft mask of the style image represents a specific part of the style image, corresponding to the soft mask of the content image with the same semantics. Both the soft masks and source images are provided as multichannel input to an augmented deep CNN framework for style transfer which incorporates a generative Markov random field (MRF) model. Results on various images show that our method outperforms the most recent techniques.", "target": ["StyleTransferを行う際に、各パーツに適したスタイルを適用するようにする研究。「各パーツ」は具体的には目や鼻といったもので、これをセグメンテーションするのにCNNに組み込めるCRF-RNNという手法を使用。これで作成したパーツごとのマスクと特徴マップを結合して使用する"]} {"source": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes.
Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "target": ["高速かつ初見node/graphにロバストなGraph構造におけるnode embeddingアルゴリズムの提案。 個々のノードから局所的な隣接ノード情報を活用して、埋め込み空間を構築。samplingとaggregationが主な操作。特徴量の分布だけでなく、グラフ構造の情報も活用。計算量削減のために、隣接ノードから固定個のノードを各iterationで一様サンプリング。aggregatorは単純に各nodeの平均でもOK。"]} {"source": "This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis: Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file). Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot. Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks. Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity. Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation. For each dimension, we conduct a comparative analysis of various models and techniques and we propose some tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and are used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.", "target": ["音楽生成にDNNを使った研究のサーベイ、というか本(108ページ)。「音楽生成」と一口に言ってもメロディなのか伴奏なのか、生成結果の形式はMIDIなのかピアノロールなのかと様々なバリエーションがあるので、それらを区分けしつつ手法およびモデリングの仕方についてまとめた大作。"]} {"source": "For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge. Additionally, a linking system should work on texts from different domains without requiring domain-specific training data or hand-engineered features. In this work we present a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types. 
We show that the resulting entity linking system is effective at combining these sources, and performs competitively, sometimes outperforming current state-of-the-art systems across datasets, without requiring any domain-specific training data or hand-engineered features. We also show that our model can effectively “embed” entities that are new to the KB, and is able to link their mentions accurately.", "target": ["Entity Linking(Wikipediaで単語にリンクが張られるような機能)に関する研究。リンク対象の単語の前後をLSTMでエンコードしたもの(これは局所で、他に文全体のリンク対象の情報をマージ)、リンク先の辞書的な記載のエンコード、リンクの種別の計3つを統合して推定する"]} {"source": "The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the \"Squeeze-and-Excitation\" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.", "target": ["ILSVRC2017の画像分類タスクで一位を記録した論文。空間における関係性を明示的にモデルに組み込み成果が出たため、今度はチャンネル間の関係性を明示的に組み込んだという話。具体的にはチャンネル単位(1x1xC)で切り出し平均をとり、それを重み的に使い特徴マップを作成する。"]} {"source": "Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model.
In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches.", "target": ["augmentationをgenerative adversarial approachで行う事を提案。学習はlabel無しデータを投入したGANモデルの学習ステップと、学習済みのGeneratorを使用してデータ変換を施したデータを入力(識別モデルの前にGeneratorを通すだけ)とする通常の教師あり学習ステップに分かれる。"]} {"source": "Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. While CNNs naturally extend to other domains, such as audio and video, where data is also organized in rectangular grids, they do not easily generalize to other types of data such as 3D shape meshes, social network graphs or molecular graphs. To handle such data, we propose a novel graph-convolutional network architecture that builds on a generic formulation that relaxes the 1-to-1 correspondence between filter weights and data elements around the center of the convolution. The main novelty of our architecture is that the shape of the filter is a function of the features in the previous network layer, which is learned as an integral part of the neural network. Experimental evaluations on digit recognition, semi-supervised document classification, and 3D shape correspondence yield state-of-the-art results, significantly improving over previous work for shape correspondence.", "target": ["local filteringアプローチによる動的なGCNを提案。Conv層のフィルタの考え方を再構成することで、不規則な構造に対するlocal graph convolutionを可能にした。エッジの重みはNNで求める。3D shape correspondenceタスクでSoTA。"]} {"source": "Training deep neural networks is known to require a large number of training samples. However, in many applications only a few training samples are available. In this work, we tackle the issue of training neural networks for a classification task when few training samples are available. We attempt to solve this issue by proposing a new regularization term that constrains the hidden layers of a network to learn class-wise invariant representations. In our regularization framework, learning invariant representations is generalized to the class membership where samples of the same class should have the same representation. Numerical experiments over MNIST and its variants showed that our proposal helps improve the generalization of neural networks, particularly when trained with few samples. We provide the source code of our framework at this https URL.", "target": ["学習用のデータ数が少なくても、クラスごとに不偏的な特徴量抽出をするための正則化手法を提案。本手法では同一クラスのサンプルは似た特徴量を持っているというヒューリスティックを使って、中間表現に対して制約を課す。1ループで同一クラスの2つのサンプルを投入し、通常のロスとは別に2つの中間表現に対してコスト(距離)を計算。"]} {"source": "We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme.
The speakers are similarly represented by a short vector that can also be fitted to new identities, even with only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models.", "target": ["RNNよりシンプルな構成で音声合成を行う研究。基本となるベクトルは、Attentionで作成したコンテキスト、話者ベクトル、前回入力、バッファをNNにかけて作成する。バッファは固定長のキュー形式で、入れると末端がでる形。"]} {"source": "The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.", "target": ["センシティブな学習データをどう守るかという話。学習済みモデルから学習データが推定できたり、入力or出力を複数与え学習に使用されたデータか検証できてしまうことを予防する。対応として、ノイズを加えた学習(NoisySGD)と分割したデータで学習しアンサンブルする手法(PATE)を提案"]} {"source": "Organized relational knowledge in the form of “knowledge graphs” is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large (119,474 examples) supervised relation extraction dataset obtained via crowdsourcing and targeted towards TAC KBP relations. The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance. When the model trained on this new dataset replaces the previous relation extraction component of the best TAC KBP 2015 slot filling system, its F1 score increases markedly from 22.2% to 26.7%.", "target": ["文章から各オブジェクトの関係性を抽出する研究(AはBと友達、など)。主語と目的語の位置情報をネットワークに含めることで、例え位置が離れていても関係性を類推できるようにしている(ただ事前に主語/目的語のアノテーションが必要)。学習に使用したデータ(TACRED)は公開されるとのこと"]} {"source": "A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets.
By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence-to-sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.", "target": ["自然言語からSQLを生成する研究。問合せとテーブルが与えられた時に、問合せ中の単語・テーブルのカラム・SQLで使用されるコマンド(SELECTやWHERE)らを組み合わせることで集計句・選択列・選択条件を生成しSQLを組み立てる。選択条件の学習には強化学習が利用されている。"]} {"source": "Standard accuracy metrics indicate that reading comprehension systems are making rapid progress, but the extent to which these systems truly understand language remains unclear. To reward systems with real language understanding abilities, we propose an adversarial evaluation scheme for the Stanford Question Answering Dataset (SQuAD). Our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences, which are automatically generated to distract computer systems without changing the correct answer or misleading humans. In this adversarial setting, the accuracy of sixteen published models drops from an average of 75% F1 score to 36%; when the adversary is allowed to add ungrammatical sequences of words, average accuracy on four models decreases further to 7%. We hope our insights will motivate the development of new models that understand language more precisely.", "target": ["機械学習で質問回答を行う際に、モデルが本当に文章を理解したうえで回答しているかを検証する手法の提案。文章に対して回答に影響を与えないAdversarialな変更を行っても精度を維持できるか検証するのが主眼で、ただ文の言い換えは非常に高度なので「付けたし」に重きを置いている。"]} {"source": "In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces and sparse rewards.", "target": ["深層学習x強化学習でゲームを攻略する研究のまとめ。どんな手法がどんな種類のゲームに使われているかなどもまとめられている。"]} {"source": "The centroid-based model for extractive document summarization is a simple and fast baseline that ranks sentences based on their similarity to a centroid vector. In this paper, we apply this ranking to possible summaries instead of sentences and use a simple greedy algorithm to find the best summary. Furthermore, we show possibilities to scale up to larger input document collections by selecting a small number of sentences from each document prior to constructing the summary. Experiments were done on the DUC2004 dataset for multi-document summarization. We observe a higher performance over the original model, on par with more complex state-of-the-art methods.", "target": ["抜粋により文章を要約する研究。基本の手法は各文をTF-IDFで重みづけしたBoWで表現し、文章全体から計算した重心と近い文を選択する。これに文単体でなく作成した要約と重心の距離を比較すること、候補文を事前に選択する手法を組み合わせることで簡単な実装で高いRougeスコアを記録"]} {"source": "Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task.
In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and, because the behavior of neural networks is difficult to explicate, stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.", "target": ["DNNのモデルに対するハッキングについて、考えられるケースをまとめた研究。学習データへの介入は難しいと思うが、悪意ある学習済みモデルを利用させることで特定ケースのみ精度を下げるといったことが可能という報告(道路標識の特定箇所にシールが貼ってある場合のみ異なる標識に誤認させるなど)。"]} {"source": "The amount of content on online music streaming platforms is immense, and most users only access a tiny fraction of this content. Recommender systems are the application of choice to open up the collection to these users. Collaborative filtering has the disadvantage that it relies on explicit ratings, which are often unavailable, and generally disregards the temporal nature of music consumption. On the other hand, item co-occurrence algorithms, such as the recently introduced word2vec-based recommenders, are typically left without an effective user representation. In this paper, we present a new approach to model users through recurrent neural networks by sequentially processing consumed items, represented by any type of embeddings and other context features. This way we obtain semantically rich user representations, which capture a user's musical taste over time. Our experimental analysis on large-scale user data shows that our model can be used to predict future songs a user will likely listen to, both in the short and long term.", "target": ["Spotifyの関わる、楽曲推薦についての論文。プレイリストを文、プレイリストの中の楽曲を単語と見立て、プレイリストから楽曲の分散表現を作成。これをユーザーの再生履歴に沿いRNNで合成することでユーザーの嗜好表現を作成している(実際に再生した楽曲の分散表現を予測するように学習)。"]} {"source": "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stage and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component.
Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has comparable accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single stage methods, SSD has much better accuracy, even with a smaller input image size. For 300×300 input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan X and for 500×500 input, SSD achieves 75.1% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Code is available at this https URL .", "target": ["物体検出を行う研究で、あのYOLOより速く正確にできたとの報告。物体領域を直接予測するのでなく、あらかじめ定められた領域(default boxes)をどれだけ動かすかを予測するという方式。この領域はマルチスケールで用意し、予測のための特徴マップもそれと対応させ用意している。"]} {"source": "Stance classification determines the attitude, or stance, in a (typically short) text. The task has powerful applications, such as the detection of fake news or the automatic extraction of attitudes toward entities or events in the media. This paper describes a surprisingly simple and efficient classification approach to open stance classification in Twitter, for rumour and veracity classification. The approach profits from a novel set of automatically identifiable problem-specific features, which significantly boost classifier accuracy and achieve above state-of-the-art results on recent benchmark datasets. This calls into question the value of using complex sophisticated models for stance classification without first doing informed feature extraction.", "target": ["短い文書(Twitterなど)におけるスタンスの検知について。スタンスには様々なものがあるが、今回は嘘か真実かの検知としている。BoWやPOS、固有表現などベーシックなものから始まり特徴エンジニアリングを駆使しており(モデルは決定木)、LSTMベースのものと比較し優位な結果。"]} {"source": "Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP.", "target": ["NLPにおけるdeep learningの手法を網羅的に解説したレビュー論文。紹介されている手法は、分散表現系(word2vecとその前身)、CNN系(Basic CNN, time-delay neural network, dynamic CNN, multi-column CNN, dynamic multi-pooling CNN, hybrid CNN-HMM)、RNN系(Simple RNN, LSTM, GRU, Dual-LSTM, MemNet)、Recursive neural network、強化学習系、教師なし学習系(seq2seq)、生成モデル(VAEs, GANs)、メモリ増設系(memory networks, dynamic memory networks)など。"]} {"source": "Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector will be a bottleneck, which brings difficulty for Embedding&MLP methods to capture user's diverse interests effectively from rich historical behaviors.
In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of the model greatly. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help train industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.", "target": ["DLでのCTR予測。ユーザ素性の各成分ごとに埋め込みベクトルを作成し、attentionをはりユーザベクトルを作成することで、多様なユーザ行動と一部の行動履歴のみがクリックに寄与することを表現した。attention重みは広告の素性ベクトルと埋め込みベクトルの類似度とした。データスパースネスに対応し低頻度素性ほど強い正則化をかけ過学習を防ぐ。GAUC(AUCの拡張)を評価し既存手法をうわまわる。"]} {"source": "A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite well-chosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new \"looks linear\" (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows training very deep networks without the addition of skip-connections.", "target": ["shattered gradients problem(近しい入力に対する勾配が大きく異なる問題)を定義し、深いフィードフォワードネットワークではこれが起こりやすいこと、またskip connectionはこの問題を回避できることを解析している。"]} {"source": "3D reconstruction from a single image is a key problem in multiple applications ranging from robotic manipulation to augmented reality. Prior methods have tackled this problem through generative models which predict 3D reconstructions as voxels or point clouds. However, these methods can be computationally expensive and miss fine details. We introduce a new differentiable layer for 3D data deformation and use it in DeformNet to learn a model for 3D reconstruction-through-deformation. DeformNet takes an image input, searches the nearest shape template from a database, and deforms the template to match the query image.
We evaluate our approach on the ShapeNet dataset and show that: (a) the Free-Form Deformation layer is a powerful new building block for Deep Learning models that manipulate 3D data; (b) DeformNet uses this FFD layer combined with shape retrieval for smooth and detail-preserving 3D reconstruction of qualitatively plausible point clouds with respect to a single query image; and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet quantitatively matches or outperforms their benchmarks by significant margins. For more information, visit: this https URL .", "target": ["2次元の画像から3Dモデルを生成する試み。既存の研究は直接2D=>3Dを生成モデルで行う形だったが、これだと生成の精度があまり良くなかった。そこで2Dの画像と「似ている3D」をまず検索し、この「似ている3D」を足掛かりに生成を行うというモデルを提案している。"]} {"source": "Neural task-oriented dialogue systems often struggle to smoothly interface with a knowledge base. In this work, we seek to address this problem by proposing a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism. The model is end-to-end differentiable and does not need to explicitly model dialogue state or belief trackers. We also release a new dataset of 3,031 dialogues that are grounded through underlying knowledge bases and span three distinct tasks in the in-car personal assistant space: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our architecture is simultaneously trained on data from all domains and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics.", "target": ["外部知識を利用したタスク指向対話をEnd2Endで学習させる試み。外部知識は(対象 属性 値)のような形で格納し(打合せ 時間 5時、など)、対象/属性をキーとしてAttentionにより引いてくる。これを語彙に含め予測する。"]} {"source": "We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \"learning to search\" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.", "target": ["RNNにおいて、一定箇所まで一旦予測し(Roll in)、そこから終端までを予測した結果(Roll out)を実際のデータと比較して学習を行う手法の提案。 これにより学習時も自らの予測に基づいて最後まで予測し、誤差計算をするようにする"]} {"source": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models.
We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "target": ["シンプルなLSTMを言語モデル用に限界までチューニングしてみるという研究。メインの工夫は、リカレントの接続にDropConnectを適用する+SGDで更新を行う際一定期間の平均を利用するASGDを、一定間隔の性能チェックで悪化していた場合に行うようにしたNT-ASGDの2点。"]} {"source": "This article offers an empirical study on the different ways of encoding Chinese, Japanese, Korean (CJK) and English languages for text classification. Different encoding levels are studied, including UTF-8 bytes, characters, words, romanized characters and romanized words. For all encoding levels, whenever applicable, we provide comparisons with linear models, fastText and convolutional networks. For convolutional networks, we compare between encoding mechanisms using character glyph images, one-hot (or one-of-n) encoding, and embedding. In total there are 473 models, using 14 large-scale text classification datasets in 4 languages including Chinese, English, Japanese and Korean. Some conclusions from these results include that byte-level one-hot encoding based on UTF-8 consistently produces competitive results for convolutional networks, that word-level n-grams linear models are competitive even without perfect word segmentation, and that fastText provides the best result using character-level n-gram encoding but can overfit when the features are overly rich.", "target": ["中国語圏の言語(CJK)ではテキストのエンコーディングの単位として,UTF-8 bytes, 文字,単語,ローマ字書きの文字,ローマ字書きの単語の5種類があるが、どの単位でエンコーディングするのがテキスト分類のパフォーマンスにとって良いのか、英語、中国語、日本語、韓国語の4つで比較検討した研究。"]} {"source": "Learning Visual Importance for Graphic Designs and Data Visualizations", "target": ["グラフやグラフィックにおいて、人が重要と認識する箇所を予測する研究。BubbleViewという手法(靄のかかった画像から見たい個所をクリックしてもらう。二場面の図参照)や実際に囲ってもらうことでデータセットを作成。手法はFCNベース。"]} {"source": "We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks. Our model is trained on pairs of low and high-quality audio examples; at test-time, it predicts missing samples within a low-resolution signal in an interpolation process similar to image super-resolution. Our method is simple and does not involve specialized audio processing techniques; in our experiments, it outperforms baselines on standard speech and music benchmarks at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in telephony, compression, and text-to-speech generation; it demonstrates the effectiveness of feed-forward convolutional architectures on an audio generation task.", "target": ["音声における低解像度(低サンプリングレート)⇒高解像度の研究。サンプルの間を補完させる形で予測を行う。モデルはシンプルなdown sampling/upsamplingのCNNを組み合わせたもの。"]} {"source": "How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. 
To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.", "target": ["DNNの判断根拠を理解するための試みで、ある学習データ(サンプル)がなかった場合のモデルへの影響を手がかりにする。通常だと該当サンプルを抜いての再学習が必要だが、該当サンプルのlossを増減させた場合の最適解を既存の最適解から導出するという技を使いこれをクリアしている。"]} {"source": "NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yield a performance improvement over previous distant supervision approaches.", "target": ["自然言語における感情分析はネガポジの様に二値で行うことが多いが、これでは感情の機微を表現できない。ただ、多感情にするとラベル付けが大変。そこで、Twitterにつけられた絵文字を予測させる形で学習を実行。12億ツイート(!!)でbi-LSTM+Attentionのモデルを学習。"]} {"source": "We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.", "target": ["モバイルにも組み込めるNLPのネットワークを目指した研究。単語ベースの分散表現だとベクトル/辞書が大きくなるため、文字ベースのn-gram(bi/tri)を特徴量とし、2-layerで予測を行なっている。これで言語特定や形態素解析といったモバイル上で使うようなタスクで高精度を維持"]} {"source": "Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). 
For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.", "target": ["学習済み機械翻訳モデルのEncoderを用いることで、単語だけでなく(単語ではWord2Vecのような分散表現がよく用いられる)文脈の転移学習を行おうという研究。入力に単語・文脈それぞれのベクトルを結合したものを用いることで、感情や質問分類、Q&Aといったタスクで効果を確認。"]} {"source": "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "target": ["道路標識を誤認させるサンプルを作成するという研究。生成した停止の標識のサンプル(をプリントしたもの)を、速度制限の標識に100%誤認させることが可能だったという結果。手法としては誤認識させる最小かつ印刷可能な変動を、標識の範囲内のみという制約(Mask)をかけて計算している。"]} {"source": "The current processes for building machine learning systems require practitioners with deep knowledge of machine learning. This significantly limits the number of machine learning systems that can be created and has led to a mismatch between the demand for machine learning systems and the ability for organizations to build them. We believe that in order to meet this growing demand for machine learning systems we must significantly increase the number of individuals that can teach machines. We postulate that we can achieve this goal by making the process of teaching machines easy, fast and above all, universally accessible. While machine learning focuses on creating new algorithms and improving the accuracy of \"learners\", the machine teaching discipline focuses on the efficacy of the \"teachers\". Machine teaching as a discipline is a paradigm shift that follows and extends principles of software engineering and programming languages. We put a strong emphasis on the teacher and the teacher's interaction with data, as well as crucial components such as techniques and design principles of interaction and visualization. In this paper, we present our position regarding the discipline of machine teaching and articulate fundamental machine teaching principles.
We also describe how, by decoupling knowledge about machine learning algorithms from the process of teaching, we can accelerate innovation and empower millions of new uses for machine learning models.", "target": ["機械学習を利用したいというニーズに応えていくには、機械学習モデルの構築作業を分業していく必要があるという提言。現在は一人の職人がデータ収集から前処理、モデルの構築まで全部を行い、そのプロセスが属人的になることが多い。なので、最低限アルゴリズム構築と学習は分けようという。"]} {"source": "Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually.", "target": ["ノイズの入れ方で学習効率が大きく変わるという話。通常の強化学習では行動を決定した後にノイズで散らすが、提案手法は行動を決定するネットワーク自体にノイズを乗せる。既存の手法ではエージェントの意思決定とは無関係にノイズが作用するので、予測不能な探索をする可能性があったとのこと。"]} {"source": "In adversarial training, a set of models learn together by pursuing competing goals, usually defined on single data instances. However, in relational learning and other non-i.i.d domains, goals can also be defined over sets of instances. For example, a link predictor for the is-a relation needs to be consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3) hold, is-a(x_1, x_3) needs to hold as well. Here we use such assumptions for deriving an inconsistency loss, measuring the degree to which the model violates the assumptions on an adversarially-generated set of examples. The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples. This yields the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size. We show that for several link prediction models, the optimisation problem faced by the adversary has efficient closed-form solutions. Experiments on link prediction benchmarks indicate that given suitable prior knowledge, our method can significantly improve neural link predictors on all relevant metrics.", "target": ["エンティティ間の関係(is-a)を学習させる際に、敵対的なサンプルを使うことで正規化を行うという手法。これまでの手法では猫=ネコ科、ネコ科=動物という個別の関係は学習できるが、猫=動物という推移関係には弱かった。そこで、A・B、B・Cに加えC・Aまで含めて学習を行っている。"]} {"source": "In this paper we argue for the fundamental importance of the value distribution: the distribution of the random return received by a reinforcement learning agent. This is in contrast to the common approach to reinforcement learning which models the expectation of this return, or value. Although there is an established body of literature studying the value distribution, thus far it has always been used for a specific purpose such as implementing risk-aware behaviour. 
We begin with theoretical results in both the policy evaluation and control settings, exposing a significant distributional instability in the latter. We then use the distributional perspective to design a new algorithm which applies Bellman's equation to the learning of approximate value distributions. We evaluate our algorithm using the suite of games from the Arcade Learning Environment. We obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning. Finally, we combine theoretical and empirical evidence to highlight the ways in which the value distribution impacts learning in the approximate setting.", "target": ["強化学習で使用されるBellman Equationについて、報酬の期待値ではなく状況/行動に対する「分布」を使用しようという提案。「期待値」を利用する場合どんな状況における報酬も最終的には平均化されてしまうが、分布なら個別の状況に応じて報酬を推定することができる、という。"]} {"source": "In this paper, we introduce Recipe1M+, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M+ affords the ability to train high-capacity models on aligned, multimodal data. Using these data, we train a neural network to learn a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Moreover, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M+ dataset and food and cooking in general. Code, data and models are publicly available.", "target": ["料理の画像からそのレシピを推察するという研究。このため、料理画像とレシピのペアのデータセットを作成している(総計100万、料理種は80万)。モデルにおいては、自然言語側は材料をencode・手順をencodeして全結合。画像側はCNNにかけてベクトル化。双方の距離と、同一の食品カテゴリの分類で最適化を行っている。"]} {"source": "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.", "target": ["Policy gradientは様々なタスクで利用されているが、戦略の更新幅の設定が難しく、小さいと収束が遅くなり大きいと学習が破綻する問題があった。そこで、TRPOという更新前後の戦略分布の距離を制約にするモデルをベースに、より計算を簡略化したPPOという手法を開発した。"]} {"source": "Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most successful applications, GAN models share two common aspects: solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator functions; and parameterizing the generator and the discriminator as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. 
In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses. Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.", "target": ["GANはCNN+Adversarialな学習という点に特徴があるが、後者の形態は学習を難しくしている。そこでdiscriminator(以下D)をとってしまい、ノイズとサンプルから元画像を復元する。そしてDの持つ判定力を模倣するため、ラプラシアン階層ごとの差分を利用し学習する。"]} {"source": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.", "target": ["よくチューニングされたLSTMはSOTAを出したと言われるモデル(ここではRecurrent Highway Network)に比肩するという話。下図が4-layer LSTMで上位の成績を収めたパラメーターの設定図となる。この範囲内であれば、perplexityの変動は~3程度に収まる。"]} {"source": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult control problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages this data to massively accelerate the learning process even from relatively small amounts of demonstration data. DQfD works by combining temporal difference updates with large-margin classification of the demonstrator's actions. We show that DQfD has better initial performance than Deep Q-Networks (DQN) on 40 of 42 Atari games and it receives more average rewards than DQN on 27 of 42 Atari games. We also demonstrate that DQfD learns faster than DQN even when given poor demonstration data.", "target": ["DQNは学習に時間がかかるので、お手本を与えておくことでその時間を短縮しようという試み。お手本でまずは学習し、その後の実際の学習でもお手本の情報はReplay bufferにキープし、なおかつ重みをつけて学習する。"]} {"source": "Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing.
We show the importance of complementing similarity with diversity, and that learned measures are -- to some degree -- transferable across models, domains, and even tasks.", "target": ["転移学習は通常は1:1で行われることが多いが、これだと個別に学習を行う必要がある。そこで転移のための学習データを選択するモデルを独立に構築することを提案。通常はドメイン間の類似度のみが指標として使われることが多いが、複数の類似度とデータ間の多様性の指標を使いモデルを構築している。"]} {"source": "In this work, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of softmax does not stem from the normalization, as some have speculated. In fact, the normalization makes things worse. Rather, the advantage is in the exponentiation of error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization. To this end, we demonstrate faster convergence and better performance on diverse classification tasks: image classification using CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the latter case, using the state-of-the-art neural network architecture, the model converged 33% faster with our method (roughly two days of training less) than with the standard softmax activation, and with a slightly better performance to boot.", "target": ["ニューラルネットでは層を積むほど複雑性が増し最適化が難しくなる。なので必要以上に層を積まない方がいいが、この不必要の筆頭が表現力に貢献しないが非線形である出力層(softmaxなど)だ!という指摘。出力は線形、伝搬誤差は累乗(強調)することで精度と収束速度の向上が確認できた。"]} {"source": "Machine-learning excels in many areas with well-defined goals. However, a clear goal is usually not available in art forms, such as photography. The success of a photograph is measured by its aesthetic value, a very subjective concept. This adds to the challenge for a machine learning approach. We introduce Creatism, a deep-learning system for artistic content creation. In our system, we break down aesthetics into multiple aspects, each can be learned individually from a shared dataset of professional examples. Each aspect corresponds to an image operation that can be optimized efficiently. A novel editing tool, dramatic mask, is introduced as one operation that improves dramatic lighting for a photo. Our training does not require a dataset with before/after image pairs, or any additional labels to indicate different aspects in aesthetics. Using our system, we mimic the workflow of a landscape photographer, from framing for the best composition to carrying out various post-processing operations. The environment for our virtual photographer is simulated by a collection of panorama images from Google Street View. We design a \"Turing-test\"-like experiment to objectively measure quality of its creations, where professional photographers rate a mixture of photographs from different sources blindly. Experiments show that a portion of our robot's creation can be confused with professional work.", "target": ["写真をプロ級に加工する仕組みの紹介。実際のプロの写真に意図的にフィルタをかけてネガティブサンプルを作り、その修正方法をGANを利用して学習させている。"]} {"source": "It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in the image space may cause a trained neural network model to misclassify it. Recently, it was shown that physical adversarial examples exist: printing perturbed images then taking pictures of them would still result in misclassification. This raises security and safety concerns. 
However, these experiments ignore a crucial property of physical objects: the camera can view objects from different distances and at different angles. In this paper, we show experiments that suggest that current constructions of physical adversarial examples do not disrupt object detection from a moving platform. Instead, a trained neural network classifies most of the pictures taken from different distances and angles of a perturbed image correctly. We believe this is because the adversarial property of the perturbation is sensitive to the scale at which the perturbed picture is viewed, so (for example) an autonomous car will misclassify a stop sign only from a small range of distances. Our work raises an important question: can one construct examples that are adversarial for many or most viewing conditions? If so, the construction should offer very significant insights into the internal representation of patterns by deep networks. If not, there is a good prospect that adversarial examples can be reduced to a curiosity with little practical impact.", "target": ["画像に微細な変更を加えることで誤認識をさせる試みがあるが、実世界での運用上は心配しなくてもいいのではという研究。誤検知を誘発する変更は画像をとった距離/角度に固有のもので、少しそれらが変わると正しく認識されるとのこと。自動運転などでは対象画像までの距離/角度はすぐに変わるのでOKという"]} {"source": "Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. Recent work in meta-learning seeks to overcome this shortcoming by training a meta-learner on a distribution of similar tasks; the goal is for the meta-learner to generalize to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, most recent approaches to meta-learning are extensively hand-designed, either using architectures that are specialized to a particular application, or hard-coding algorithmic components that tell the meta-learner how to solve the task. We propose a class of simple and generic meta-learner architectures, based on temporal convolutions, that is domain- agnostic and has no particular strategy or algorithm encoded into it. We validate our temporal-convolution-based meta-learner (TCML) through experiments pertaining to both supervised and reinforcement learning, and demonstrate that it outperforms state-of-the-art methods that are less general and more complex.", "target": ["メタラーニングを手軽に行うための追加レイヤの提案。通常の学習ではデータの分布を学習させるが、メタラーニングではタスクの分布を学習させる(以前の入力や状況と似ているか)。そこで各タスク(入力+判定)をDilated CNNで畳み込むレイヤを追加。画像認識と強化学習で検証し、精度向上"]} {"source": "Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. 
We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-to-sequence models.", "target": ["(soft) Attentionの計算範囲を限定して処理速度を向上させる話。Attentionは過去のEncoderの状態全てに対してどこが重要か計算するが、実際重要な箇所はDecoderの生成に伴い時間軸上を徐々に右にずれていく単純な推移になる。ならその範囲に限定しようという話"]} {"source": "Many supervised learning tasks are emerged in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. text to speech, and image classification vs. image generation. Two dual tasks have intrinsic connections with each other due to the probabilistic correlation between their models. This connection is, however, not effectively utilized today, since people usually train the models of two dual tasks separately and independently. In this work, we propose training the models of two dual tasks simultaneously, and explicitly exploiting the probabilistic correlation between them to regularize the training process. For ease of reference, we call the proposed approach \\emph{dual supervised learning}. We demonstrate that dual supervised learning can improve the practical performances of both tasks, for various applications including machine translation, image processing, and sentiment analysis.", "target": ["機械学習における対称性に注目した研究。翻訳で日->英に対し英->日があるように、あるタスクには対となるタスクが存在する。であれば同時に学習させたほうが良いのではという主張。対称関係のタスクの条件付確率が等しくなるような制約をかけて学習させ、翻訳・画像・文判定の3つで検証している。"]} {"source": "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.", "target": ["AttentionをDecode時に毎回計算するのでなく、Encoder時に(数を絞って)計算しておくことで計算速度を向上させるという話。最初の予測には最初の方、最後の予測には最後の方に注目させるためPosition Encodingも併せて行っている。"]} {"source": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and \\epsilon-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.", "target": ["強化学習を行う際に内発的報酬などを組み込む手法があるが、これは実際に環境から得られる報酬とは異なるため場合によっては学習結果をゆがめてしまう可能性がある。そこで、ランダムな探索をより意図的に行うためにネットワークの伝搬にノイズを組み込むことを提案(ノイズを組み込むため、ε-greedyによる探索も必要なくなる)。これにより多くのゲームでスコアを改善できた"]} {"source": "In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. 
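The NoisyNet record above amounts to replacing ordinary linear layers with layers whose weights are mu + sigma * eps, with the noise scale sigma learned by backprop. A minimal PyTorch layer in that spirit is sketched below, using the factorised Gaussian noise variant; initial values such as sigma0=0.5 follow common practice and should be read as assumptions, not the paper's exact settings.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with learned, factorised Gaussian parameter noise:
    weight = mu + sigma * eps. Because sigma is trained, exploration is
    learned instead of injected via epsilon-greedy."""
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.w_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.b_mu = nn.Parameter(torch.empty(out_features))
        self.b_sigma = nn.Parameter(torch.empty(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.w_mu, -bound, bound)
        nn.init.uniform_(self.b_mu, -bound, bound)
        nn.init.constant_(self.w_sigma, sigma0 / math.sqrt(in_features))
        nn.init.constant_(self.b_sigma, sigma0 / math.sqrt(in_features))

    @staticmethod
    def _f(x):   # noise shaping: f(x) = sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        w = self.w_mu + self.w_sigma * torch.outer(eps_out, eps_in)
        b = self.b_mu + self.b_sigma * eps_out
        return F.linear(x, w, b)

layer = NoisyLinear(4, 2)
print(layer(torch.randn(8, 4)).shape)   # torch.Size([8, 2])
```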
We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. \"Associations\" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.", "target": ["ラベルなしデータを活用した学習方法の提案。同じラベルのデータは当然近いベクトル表現になるはずなので、例えば(数字の)1とラベルされた画像のベクトル表現→近い表現をラベルなしから探す→さらにそれに近いものをラベルありから探す=1とラベルされた画像に戻るはず、という仮定から学習を行う"]} {"source": "This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make most efficient use of these techniques on real data. It also discusses a number of practical applications.", "target": ["DNNの判断を理解するための研究のまとめ。ネットワークが反応する入力を見つける方法(Activation Maximizationなど)、判断根拠となった特徴を入力にマップする方法(Relevance Propagationなど)などを紹介、説明力の比較方法についても記載している"]} {"source": "This paper describes the E2E data, a new dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.", "target": ["End-to-Endの対話システムを構築するためのデータセットが公開。50万発話が含まれ、ドメインはレストラン検索となっている。発話に対しては固有表現(slot)的なアノテーションもされている(「フレンチが食べたい。500円くらいで」なら、種別=フレンチ、予算=500円など)。"]} {"source": "Hyperparameter tuning is one of the most time-consuming workloads in deep learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable. Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better test metrics. Motivated by this trend, we ask: can simple adaptive methods based on SGD perform as well or better? We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam. We then analyze its robustness to learning rate misspecification and objective curvature variation. Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD. YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly. 
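The learning-by-association record above comes down to two losses over a similarity matrix between labeled and unlabeled embeddings: a "walker" loss on labeled-to-unlabeled-to-labeled round trips, and a "visit" loss that spreads probability mass over the unlabeled batch. A minimal sketch, assuming already-computed embeddings; visit_weight is an illustrative hyperparameter.

```python
import torch
import torch.nn.functional as F

def association_losses(emb_labeled, labels, emb_unlabeled, visit_weight=0.5):
    """Walker + visit losses: round trips labeled -> unlabeled -> labeled
    should end in the class they started from."""
    sim = emb_labeled @ emb_unlabeled.t()        # (A, B) similarities
    p_ab = F.softmax(sim, dim=1)                 # labeled -> unlabeled
    p_ba = F.softmax(sim.t(), dim=1)             # unlabeled -> labeled
    p_aba = p_ab @ p_ba                          # round-trip probabilities

    # Target: uniform over labeled samples of the same class.
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    target = same / same.sum(dim=1, keepdim=True)
    walker = -(target * torch.log(p_aba + 1e-8)).sum(dim=1).mean()

    # Visit loss: encourage visiting all unlabeled samples equally often.
    visit = p_ab.mean(dim=0)
    uniform = torch.full_like(visit, 1.0 / visit.numel())
    visit_loss = -(uniform * torch.log(visit + 1e-8)).sum()
    return walker + visit_weight * visit_loss

emb_l = F.normalize(torch.randn(16, 64), dim=1)
emb_u = F.normalize(torch.randn(100, 64), dim=1)
labels = torch.randint(0, 10, (16,))
print(association_losses(emb_l, labels, emb_u))
```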
We empirically show that YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to 3.28x in synchronous and up to 2.69x in asynchronous settings.", "target": ["SGDが最近見直されてきているが、重要なパラメーターであるMomentumについてはあまり議論がされていない。滑らかな勾配の曲面における最適なMomentumの値は数理的に証明が可能であり、これに伴い学習率の低減についても一定値が求まる。これを応用したYellowFinという最適化法を発明し、検証を行ったところ既存の最適化手法よりも高速に収束することが確認できた。"]} {"source": "Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of (Goodfellow et al 2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al (to appear at ICML 2017) raised doubts whether the same holds when discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support ---in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GANs approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. The main technical contribution is a new proposed test, based upon the famous birthday paradox, for estimating the support size of the generated distribution.", "target": ["GANは与えている画像をあまり学習していないのではという話。きちんと学習していれば生成画像は無限の組み合わせのパターンがあるはずなのに、実際はごく少ない生成画像の中で重複がたやすく見つかる。Discriminatorのサイズを大きくすることでこの問題は回避できるかも?としている"]} {"source": "We build deep RL agents that execute declarative programs expressed in formal language. The agents learn to ground the terms in this language in their environment, and can generalize their behavior at test time to execute new programs that refer to objects that were not referenced during training. The agents develop disentangled interpretable representations that allow them to generalize to a wide variety of zero-shot semantic tasks.", "target": ["ロボットアームに論理形式の記述(GET[AND[赤い, ボール]]で赤いボールをとる、みたいな)を実行させる研究。各オブジェクトの情報(環境情報)を入力にしてオブジェクトと特徴(赤い、など)のマトリクスを作成。そこから行動を推定するネットワークを構築し、強化学習で学習させている。"]} {"source": "We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.", "target": ["DNNは確かにラベルを覚えきる力があるが、本物とノイズではデータの性質が異なり(ノイズはデータ間の関連がない)、学習プロセスにもそれが影響しているのでは?という検証。結果として、共通パターンを先行して覚えることが示唆されている。"]} {"source": "The Japanese comic format known as Manga is popular all over the world. It is traditionally produced in black and white, and colorization is time consuming and costly. 
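The birthday-paradox test in the GAN record above is easy to simulate: if a batch of s samples from a distribution with support size N contains a duplicate with probability about 1/2, then N ≈ s²/(2 ln 2). The sketch below swaps the GAN (where duplicates must be found by visual near-duplicate search) for a discrete uniform "generator" of known support, just to show the mechanics of the estimate.

```python
import numpy as np

# Birthday-paradox support-size estimate on a generator whose true
# support (10,000 distinct outputs) is known, so the estimate can be checked.
rng = np.random.default_rng(0)
true_support = 10_000

def collision_rate(batch_size, trials=2000):
    """Fraction of batches that contain at least one duplicate sample."""
    hits = 0
    for _ in range(trials):
        batch = rng.integers(0, true_support, size=batch_size)
        hits += len(np.unique(batch)) < batch_size
    return hits / trials

for s in (50, 100, 150, 200):
    print(s, collision_rate(s))
# P(no collision) ~ exp(-s^2 / 2N), so the rate crosses 1/2 near
# s = sqrt(2 ln 2 * N) ~ 118 here; reading that off gives N ~ s^2 / (2 ln 2).
```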
Automatic colorization methods generally rely on greyscale values, which are not present in manga. Furthermore, due to copyright protection, colorized manga available for training is scarce. We propose a manga colorization method based on conditional Generative Adversarial Networks (cGAN). Unlike previous cGAN approaches that use many hundreds or thousands of training images, our method requires only a single colorized reference image for training, avoiding the need of a large dataset. Colorizing manga using cGANs can produce blurry results with artifacts, and the resolution is limited. We therefore also propose a method of segmentation and color-correction to mitigate these issues. The final results are sharp, clear, and in high resolution, and stay true to the character's original color scheme.", "target": ["GANをベースに、1つの白黒/カラーのペアからの学習だけで漫画を塗りきるという研究。そもそも同じキャラは同じ色、同じパーツ(顔、服etc)は同じ色なので、何枚もいらないでしょということでパーツごとにcrop(セグメンテーション)を行い学習を行っている。結果はかなり衝撃的。"]} {"source": "In the past five years we have observed the rise of incredibly well performing feed-forward neural networks trained supervisedly for vision related tasks. These models have achieved super-human performance on object recognition, localisation, and detection in still images. However, there is a need to identify the best strategy to employ these networks with temporal visual inputs and obtain a robust and stable representation of video data. Inspired by the human visual system, we propose a deep neural network family, CortexNet, which features not only bottom-up feed-forward connections, but also it models the abundant top-down feedback and lateral connections, which are present in our visual cortex. We introduce two training schemes - the unsupervised MatchNet and weakly supervised TempoNet modes - where a network learns how to correctly anticipate a subsequent frame in a video clip or the identity of its predominant subject, by learning egomotion clues and how to automatically track several objects in the current scene. Find the project website at this https URL.", "target": ["動画において時系列で認識結果がぶれないようにすることを目指した研究。単純なConvだけでなくDeconvの結果も取り入れる構造を考案しさらに階層上に積んでいる。これをフレーム間マッチ(MatchNet)と時系列での予測差異(TempoNet)の2種で学習させている。"]} {"source": "We propose an approach for semi-automatic annotation of object instances. While most current methods treat object segmentation as a pixel-labeling problem, we here cast it as a polygon prediction task, mimicking how most current datasets have been annotated. In particular, our approach takes as input an image crop and sequentially produces vertices of the polygon outlining the object. This allows a human annotator to interfere at any time and correct a vertex if needed, producing as accurate segmentation as desired by the annotator. We show that our approach speeds up the annotation process by a factor of 4.7 across all classes in Cityscapes, while achieving 78.4% agreement in IoU with original ground-truth, matching the typical agreement between human annotators. For cars, our speed-up factor is 7.3 for an agreement of 82.2%. We further show generalization capabilities of our approach to unseen datasets.", "target": ["画像のアノテーションを楽にするために、候補領域を予測しポリゴンで囲み、ユーザーが行わないといけないのはポリゴンの頂点の編集だけにするという試み。ポリゴンは学習済みVGGで特徴量を抜いてConvLSTMで予測している。これでアノテーションの精度を保ちつつ5~7倍の速度向上を実現。"]} {"source": "We present a new model for singing synthesis based on a modified version of the WaveNet architecture. Instead of modeling raw waveform, we model features produced by a parametric vocoder that separates the influence of pitch and timbre. 
This allows conveniently modifying pitch to match any target melody, facilitates training on more modest dataset sizes, and significantly reduces training and generation times. Our model makes frame-wise predictions using mixture density outputs rather than categorical outputs in order to reduce the required parameter count. As we found overfitting to be an issue with the relatively small datasets used in our experiments, we propose a method to regularize the model and make the autoregressive generation process more robust to prediction errors. Using a simple multi-stream architecture, harmonic, aperiodic and voiced/unvoiced components can all be predicted in a coherent manner. We compare our method to existing parametric statistical and state-of-the-art concatenative methods using quantitative metrics and a listening test. While naive implementations of the autoregressive generation algorithm tend to be inefficient, using a smart algorithm we can greatly speed up the process and obtain a system that's competitive in both speed and quality.", "target": ["音声合成の研究で、WaveNetを踏襲しつつ生の音声でなく音響特徴量(図はスペクトログラムに見えるが、WORLDというソフトウェアで抽出しているよう)を使用することで計算時間を短縮しつつ、ピッチや音色といった要素を個別に扱えるようになった。これを利用した歌声もデモで公開されてる"]} {"source": "This paper proposes the idea of using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface. The user edits a voxel grid with a painting interface (like Minecraft). Yet, at any time, he/she can execute a SNAP command, which projects the current voxel grid onto a latent shape manifold with a learned projection operator and then generates a similar, but more realistic, shape using a learned generator network. Then the user can edit the resulting shape and snap again until he/she is satisfied with the result. The main advantage of this approach is that the projection and generation operators assist novice users to create 3D models characteristic of a background distribution of object shapes, but without having to specify all the details. The core new research idea is to use a GAN to support this application. 3D GANs have previously been used for shape generation, interpolation, and completion, but never for interactive modeling. The new challenge for this application is to learn a projection operator that takes an arbitrary 3D voxel model and produces a latent vector on the shape manifold from which a similar and realistic shape can be generated. We develop algorithms for this and other steps of the SNAP processing pipeline and integrate them into a simple modeling tool. Experiments with these algorithms and tool suggest that GANs provide a promising approach to computer-assisted interactive modeling.", "target": ["GANを利用し初心者が3Dオブジェクトを構築するのをサポートするツールを開発した話。まずざくっと作った後に「SNAP」コマンドを実行すると、ベテラン達の3Dオブジェクトから学習したGANがいい感じに調整。それを修正してさらにSNAPして・・・と繰り返す。"]} {"source": "Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. 
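The singing-synthesis record above predicts vocoder features with mixture density outputs instead of a categorical softmax, which cuts the parameter count; the matching training criterion is the negative log-likelihood of a Gaussian mixture. A minimal 1-D sketch (the real model predicts multi-dimensional frames; shapes here are illustrative):

```python
import math
import torch
import torch.nn.functional as F

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of a 1-D Gaussian mixture density output.
    pi_logits, mu, log_sigma: (batch, k) mixture parameters; y: (batch,)."""
    log_pi = F.log_softmax(pi_logits, dim=1)
    y = y.unsqueeze(1)
    # Per-component Gaussian log-density.
    log_prob = (-0.5 * ((y - mu) / log_sigma.exp()) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi))
    # log-sum-exp over components gives the mixture log-likelihood.
    return -torch.logsumexp(log_pi + log_prob, dim=1).mean()

nll = mdn_nll(torch.randn(8, 4), torch.randn(8, 4),
              torch.zeros(8, 4), torch.randn(8))
print(nll)
```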
It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all.", "target": ["1モデルでマルチタスクを解かせるという試み。タスクは画像認識・音声認識・形態素解析・翻訳などの計8タスクで、モデルとしては各入力に対するEncoder・出力に変換するDecoder、それらの間をつなぐ=マルチモーダルな情報をミックスするI/O Mixerという構成。"]} {"source": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage.", "target": ["ステレオ画像のみを教師データとして単眼深度推定を学習する。 具体的な処理の流れは以下の通り。 1. 左画像のみから左画像および右画像の視差マップを生成する 1. 生成された視差マップとオリジナルの左右画像から逆側の画像を生成する(Bilinear Samplerなるものを使う) 1. 以下の3つをlossとして学習する。 - 生成された画像とオリジナル画像との差 - 視差マップの滑らかさ - 視差マップを使って視差マップ自身を反対側の視差マップに写像した時の誤差 1. 視差マップ×係数が実際の深度"]} {"source": "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "target": ["「学習方法」を学ぶメタラーナーを利用することで、少ない学習データから正答できるようにする(Few-Shot Learning)ことを試みた研究。メタラーナーはLSTM、学習側はCNNで、学習側のlossからメタラーナーが更新パラメーターを決定して、学習側に渡す形になる。"]} {"source": "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. 
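Step 2 of the monodepth summary above (reconstructing one view from the other with a bilinear sampler) reduces to shifting the horizontal sampling coordinates by the predicted disparity and calling grid_sample. A minimal sketch, assuming a recent PyTorch and disparities expressed as fractions of image width; the sign convention of the shift is an assumption and depends on which view is being reconstructed.

```python
import torch
import torch.nn.functional as F

def warp_with_disparity(src, disp):
    """Bilinear-sampler warp: sample src at x - d(x) to synthesize the
    other stereo view. src: (b, c, h, w); disp: (b, h, w), in fractions
    of image width."""
    b, c, h, w = src.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2).clone()
    # grid x runs over [-1, 1] across the width, so a disparity of d
    # (fraction of width) corresponds to a shift of 2*d in grid units.
    grid[..., 0] = grid[..., 0] - 2 * disp
    return F.grid_sample(src, grid, align_corners=True)

right = torch.rand(2, 3, 32, 64)
disp = torch.full((2, 32, 64), 0.05)         # stand-in predicted disparity
left_recon = warp_with_disparity(right, disp)
recon_loss = (left_recon - torch.rand(2, 3, 32, 64)).abs().mean()
print(recon_loss)
```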
Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.", "target": ["GANを利用したテキストの生成(と潜在構造の獲得)を試みた研究。encodeして単純にGANを適用するだけではなかなか上手くいかないので、Auto Encoderと並行して学習させるという手法を提案(AE側のencodeを真としてGANを適用)。結果としては微妙なところ。"]} {"source": "Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the \"generalization gap\" phenomena. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a \"random walk on random landscape\" statistical model which is known to exhibit similar \"ultra-slow\" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the \"generalization gap\" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named \"Ghost Batch Normalization\" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.", "target": ["大きいバッチサイズでも汎化性能を高めることができるということを示した研究。フラットな最適解への到達には更新回数が大きく関係しており、大きなバッチの場合同エポックだとこの更新回数を稼げないのが問題とのこと。また、高めの学習率の設定やBatch Normでも十分抑止できる。"]} {"source": "The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say 32-512 data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.", "target": ["なぜ「ミニ」なバッチの方が大きいバッチよりもうまくいくのかを検証した論文。理論的な検証ではなく実際の学習で実地的に検証を行っており、結果ミニなバッチの方が周辺がフラットな最適解の方にたどり着くのに対し、大きいバッチはシャープな最適解にたどり着く傾向があるとのこと。"]} {"source": "One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. 
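Ghost Batch Normalization, named in the generalization-gap record above, normalizes over small virtual batches carved out of one large batch, recovering small-batch noise without extra parameter updates. A minimal sketch; the ghost size of 32 is an arbitrary illustration and the running-statistics handling is kept naive (the shared BN updates its statistics once per chunk).

```python
import torch
import torch.nn as nn

class GhostBatchNorm1d(nn.Module):
    """Batch norm computed over small virtual ('ghost') batches inside
    one large batch."""
    def __init__(self, num_features, ghost_size=32):
        super().__init__()
        self.ghost_size = ghost_size
        self.bn = nn.BatchNorm1d(num_features)

    def forward(self, x):
        if not self.training or x.size(0) <= self.ghost_size:
            return self.bn(x)   # eval mode: standard BN with running stats
        chunks = x.split(self.ghost_size, dim=0)
        return torch.cat([self.bn(c) for c in chunks], dim=0)

gbn = GhostBatchNorm1d(8, ghost_size=16)
print(gbn(torch.randn(64, 8)).shape)   # torch.Size([64, 8])
```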
While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains, by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically only depends on a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy-problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance.", "target": ["強化学習において攻略困難だったパックマンを攻略したことで話題となった研究。状況からの報酬の推定・行動の決定を一本で行わず、報酬の推定と行動決定の関数を分離。かつ、複数のエージェント(状況認識部分は共有なので、イメージとしては多頭になる)に持たせたそれらの重み付き和から学習を行う。"]} {"source": "Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.", "target": ["画像のどの部分が分類に貢献したかを確認するための手法として画像に対する勾配を可視化する手法があるが、そのまま使うととてもノイズが多い。そこでガウシアンノイズを加えたn個の入力に対する勾配を平均することでスムージングを行ったという話。"]} {"source": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.", "target": ["強化学習で報酬関数の設計を人が行っていると、定義にミスがあった時事故につながったりする。そこで報酬を関数表現でなく直接的に与えることでより明確な学習をさせるというもの。具体的にはエージェントが提示する2つの行動についてどちらが好ましいかを人が教示することでフィードバックを与える。"]} {"source": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. 
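The SmoothGrad record above is conceptually a one-liner: average the input gradient over several Gaussian-perturbed copies of the image. A runnable sketch, with a toy linear model standing in for a real classifier; parameterizing the noise level as a fraction of the image's value range follows the paper's description.

```python
import torch

def smooth_grad(model, image, target_class, n_samples=25, noise_frac=0.15):
    """SmoothGrad: average input gradients over Gaussian-perturbed copies
    of the image to de-noise the sensitivity map."""
    sigma = noise_frac * (image.max() - image.min())
    total = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        grad, = torch.autograd.grad(score, noisy)
        total += grad
    return total / n_samples

# Toy classifier so the sketch runs as-is.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
saliency = smooth_grad(model, torch.rand(3, 8, 8), target_class=3)
print(saliency.shape)   # torch.Size([3, 8, 8])
```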
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.", "target": ["RNN/CNNを使わず翻訳のSOTAを達成した話。Attentionを基礎とした伝搬が肝となっている。単語/位置のlookupから入力を作成、Encoderは入力+前回出力からAを作成しその後位置ごとに伝搬、DecoderはEncoder出力+前回出力から同様に処理し出力している"]} {"source": "Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are \"scaled exponential linear units\" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: this http URL.", "target": ["通常のフィードフォワードで、伝搬時に正規化の状態をキープし続けられるようにする手法の提案。これにより、Batch Normalizationなど外部的に正規化する必要がなくなる。平均/分散を調整できるよう、正負の値/拡大縮小双方が取れる活性化関数・初期化方法などを提案している。"]} {"source": "We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. 
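The SNN record above hinges on the SELU activation, whose two constants are derived so that zero mean and unit variance are preserved through depth. A quick numerical check of that fixed-point behavior; the constants are the values published with the paper, and torch.selu implements the same function.

```python
import torch

# SELU constants from the paper; pushing N(0, 1) activations through the
# function keeps them near zero mean / unit variance, which is what lets
# deep plain networks train without explicit normalization layers.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    return SCALE * torch.where(x > 0, x, ALPHA * (torch.exp(x) - 1))

x = torch.randn(100_000)
y = selu(x)
print(y.mean().item(), y.var().item())   # both stay close to (0, 1)
```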
We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.", "target": ["マルチエージェントの強化学習について、どのような学習方法が良いのかについての研究。Actor/Criticモデルを適用し、個々のプレイヤー(Actor)にコーチ(Critic)をつける形で学習するのが良いとのこと。コーチは他のプレイヤーの状況を把握でき、その上で最適な指示を行う"]} {"source": "Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.", "target": ["オブジェクト同士の「関係性」を学習させるためのモジュールを発明したという研究。2つのベクトルを引数に演算する関数と(関係の数だけ総当たりで計算)、その和を基に演算する関数の二つでできている(実験では双方3~4層のNN)。これで画像キャプションなどで驚きの精度を達成。"]} {"source": "This paper proposes to tackle open- domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.", "target": ["Wikipediaの知識を使って一般的な質問に回答する試み。回答を含む文章の取得、文章から回答箇所を抜粋するの二段階で、前者はbigramのTF-IDFベクトルを使って検索、後者は学習済み分散表現(Glove)・品詞などの単語特徴・質問中の単語との一致などを入力としたRNNを利用"]} {"source": "The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18× faster, requires 75× less FLOPs, has 79× less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. 
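The Relation Network record above composes two small MLPs: g scores every ordered pair of object vectors and f processes their sum, i.e. RN(O) = f(Σ_{i,j} g(o_i, o_j)). A minimal sketch with illustrative layer sizes; in the paper's VQA setup the question embedding is additionally concatenated into each pair, which is omitted here.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """RN module: RN(O) = f( sum over all ordered pairs of g(o_i, o_j) )."""
    def __init__(self, obj_dim, hidden=64, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                # (batch, n_obj, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1)    # every ordered pair (o_i, o_j)
        return self.f(self.g(pairs).sum(dim=(1, 2)))

rn = RelationNetwork(obj_dim=8)
print(rn(torch.randn(4, 6, 8)).shape)          # torch.Size([4, 10])
```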
We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.", "target": ["DNNの精度を落とさず演算速度をどう上げるかという課題へのアプローチ(最近トレンドになっている)。構成は下図の通りで、18倍の速度向上とパラメーター数を1/79に削減。ポイントはdown samplingで、早い層から仕掛けている"]} {"source": "We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.", "target": ["行動認識において大規模化のみならず網羅性や機械学習のための校正されたデータセットを提案。3Dカーネルを用いたDNNの学習においても有効である。"]} {"source": "Deep neural networks with skip-connections, such as ResNet, show excellent performance in various image classification benchmarks. It is though observed that the initial motivation behind them - training deeper networks - does not actually hold true, and the benefits come from increased capacity, rather than from depth. Motivated by this, and inspired from ResNet, we propose a simple Dirac weight parameterization, which allows us to train very deep plain networks without explicit skip-connections, and achieve nearly the same performance. This parameterization has a minor computational cost at training time and no cost at all at inference, as both Dirac parameterization and batch normalization can be folded into convolutional filters, so that network becomes a simple chain of convolution-ReLU pairs. We are able to match ResNet-1001 accuracy on CIFAR-10 with 28-layer wider plain DiracNet, and closely match ResNets on ImageNet. Our parameterization also mostly eliminates the need of careful initialization in residual and non-residual networks. The code and models for our experiments are available at this https URL", "target": ["ResNetのskip-connection構造を表現する重みのパラメトライゼーションDirac parametrizationの提案。convolution層の操作を$(a\\delta+bW_{norm})\\odot x$とする。$\\odot$はconv操作、$W_{norm}$は正規化した通常のconv操作のテンソル、$\\delta$はconv操作に対して恒等なテンソルでこれによってskip-connectionを表現する。これとNCReLUを導入したDiracNetを評価した。わずかな計算量の増加でskip-connectionなしで深い層の学習を実現し性能も同等。"]} {"source": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. 
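The Dirac parameterization in the record above folds the skip connection into the kernel itself: the effective filter is a·δ + b·W_norm, where δ is the identity (Dirac) convolution. A minimal sketch assuming equal input and output channels; per-filter weight normalization and scalar a, b are simplifications of the paper's per-channel parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiracConv2d(nn.Module):
    """Convolution whose effective kernel is a*delta + b*W_norm, so the
    residual branch lives inside the weights instead of the architecture."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            0.01 * torch.randn(channels, channels, kernel_size, kernel_size))
        delta = torch.zeros(channels, channels, kernel_size, kernel_size)
        nn.init.dirac_(delta)                  # identity (Dirac) kernel
        self.register_buffer("delta", delta)
        self.a = nn.Parameter(torch.ones(1))
        self.b = nn.Parameter(torch.full((1,), 0.1))
        self.pad = kernel_size // 2

    def forward(self, x):
        norm = self.weight.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        w = self.a * self.delta + self.b * self.weight / norm
        return F.conv2d(x, w, padding=self.pad)

layer = DiracConv2d(8)
print(layer(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```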
With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.", "target": ["IR(情報検索)にGANを適用。Discriminatorは抽出関数を最大化するようタグ付けされたデータを学習し、GeneratorはDiscriminatorが識別しにくいデータを生成する。IRGANで学習したモデルはweb検索、アイテム推薦でスコアを更新した。"]} {"source": "Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks. However, well-annotated datasets can be time-consuming and expensive to collect, lending increased interest to larger but noisy datasets that are more easily obtained. In this paper, we show that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels. We demonstrate remarkably high test performance after training on corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain test accuracy above 90 percent even after each clean training example has been diluted with 100 randomly-labeled examples. Such behavior holds across multiple patterns of label noise, even when erroneous labels are biased towards confusing classes. We show that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted. Finally, we provide an analysis of our results that shows how increasing noise decreases the effective batch size.", "target": ["ニューラルネットでは、ノイズが多く含まれるデータセットでも大丈夫ということを検証した研究。ノイズが一様な場合/偏りがある場合、ノイズとして与えるデータがデータセットに含まれない新規のものかどうか、などで検証しているが傾向に大きな違いは見られないとのこと。"]} {"source": "We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. 
This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.", "target": ["DNNを対象としたnatural gradientベースの2次オーダ勾配法K-FACの提案。Fisher情報行列をkronecker積を使い近似評価。その逆行列をブロック対角/ブロック3重対角で近似し計算する。Fisher情報行列をHessianとみなし近似(できる)し2次アルゴリズムを構築。Momentum SGDとの比較しイテレーション回数を桁のオーダで削減できるので並列計算に向いている。"]} {"source": "Generative Adversarial Networks (GANs) have gathered a lot of attention from the computer vision community, yielding impressive results for image generation. Advances in the adversarial generation of natural language from noise however are not commensurate with the progress made in generating images, and still lag far behind likelihood based methods. In this paper, we take a step towards generating natural language with a GAN objective alone. We introduce a simple baseline that addresses the discrete output space problem without relying on gradient estimators and show that it is able to achieve state-of-the-art results on a Chinese poem generation dataset. We present quantitative results on generating sentences from context-free and probabilistic context-free grammars, and qualitative language modeling results. A conditional version is also described that can generate sequences conditioned on sentence characteristics.", "target": ["文生成にGANを適用してみるという話。GANはWGANで、1-hot vectorの列で表現された文章と、GANから生成した分布を比較する(双方文長x語彙のマップになる)。それなりに学習できているように見えるが、単純なRNNとの比較がなく文長/語彙もかなり絞っているので様子見"]} {"source": "In this paper, we consider the problem of machine teaching, the inverse problem of machine learning. Different from traditional machine teaching which views the learners as batch algorithms, we study a new paradigm where the learner uses an iterative algorithm and a teacher can feed examples sequentially and intelligently based on the current performance of the learner. We show that the teaching complexity in the iterative case is very different from that in the batch case. Instead of constructing a minimal training set for learners, our iterative machine teaching focuses on achieving fast convergence in the learner model. Depending on the level of information the teacher has from the learner model, we design teaching algorithms which can provably reduce the number of teaching examples and achieve faster convergence than learning without teachers. We also validate our theoretical findings with extensive experiments on different data distribution and real image datasets.", "target": ["機械学習を行う際、生徒たるモデルに対して与えるデータを「先生」が効率的に選ぶというスタイルの提案(生徒と先生は目的関数を共有)。学習率が高い状態では簡単なもの、学習率が低くなってきた状態では前回データとの一貫性が重要になるとのこと。目的関数からの導出過程がとても丁寧に書かれている"]} {"source": "Individuals on social media may reveal themselves to be in various states of crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis from social media text automatically and accurately can have profound consequences. However, detecting a general state of crisis without explaining why has limited applications. An explanation in this context is a coherent, concise subset of the text that rationalizes the crisis detection. We explore several methods to detect and explain crisis using a combination of neural and non-neural techniques. We evaluate these techniques on a unique data set obtained from Koko, an anonymous emotional support network available through various messaging applications. 
We annotate a small subset of the samples labeled with crisis with corresponding explanations. Our best technique significantly outperforms the baseline for detection and explanation.", "target": ["個人のSNS上の発言などから危機的状況(自殺しちゃいそうなど)かを検知するだけでなく、その状況の度合いを判断(トリアージ)するための根拠を提案するという試み。手法としてはAttentionを参照することで重要語句を抽出する。最近のモデル解釈研究がまとまっているという点でもよい。"]} {"source": "We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself -- the correspondence between the visual and the audio streams, and we introduce a novel \"Audio-Visual Correspondence\" learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task, and, more interestingly, result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.", "target": ["動画データセットを活用した教師なし学習。アーキテクチャ自体はシンプルだが、音声の2クラス分類ではベンチマークでSoTA、画像ではImageNetのself-supervisedとしてはSoTAに匹敵する性能をもち、物体検出やfine-grainedの認識タスクもこなせるモデル(L^3-Net)を学習した。"]} {"source": "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.", "target": ["文章に対してVAEを適用し、文全体の情報をベクトル化しそれを文生成に利用しようという話。基本はRNN/Z/RNNという構成のVAEだが、このままだと学習が上手くいかないので、学習初期に正規化の役割を担うKL項を抑制したり、賢すぎるDecoderを制限するといった工夫をしている。"]} {"source": "In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to paintings previously unobserved. 
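The sentence-VAE record above mentions suppressing the KL term early in training; the standard implementation is a scalar weight annealed from 0 to 1 on the closed-form Gaussian KL. A minimal sketch (the linear schedule and warmup length are illustrative; the paper also weakens the decoder, e.g. via word dropout, which is not shown):

```python
import torch

def kl_weight(step, warmup_steps=10_000):
    """Linear KL annealing: keep the KL term nearly off early in training
    so the decoder cannot simply ignore the latent code, then ramp up."""
    return min(1.0, step / warmup_steps)

def vae_loss(recon_nll, mu, logvar, step):
    # KL( N(mu, sigma) || N(0, I) ), closed form, averaged over the batch.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_nll + kl_weight(step) * kl

mu, logvar = torch.zeros(8, 16), torch.zeros(8, 16)
print(vae_loss(torch.tensor(42.0), mu, logvar, step=500))
```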
We demonstrate that the learned embedding space is smooth and contains a rich structure and organizes semantic information associated with paintings in an entirely unsupervised manner.", "target": ["画像の画風変換については、通常変換させる画風ごとに専用のネットワークが必要だった。しかし最近は上手く正規化すれば様々な画風を一つのネットワークで表現できることがわかってきた・・・ので、大量のデータ(8万画像)で学習させて(初見も含めた)画風に対応できるネットワークを構築できた話"]} {"source": "The problem of sparse rewards is one of the hardest challenges in contemporary reinforcement learning. Hierarchical reinforcement learning (HRL) tackles this problem by using a set of temporally-extended actions, or options, each of which has its own subgoal. These subgoals are normally handcrafted for specific tasks. Here, though, we introduce a generic class of subgoals with broad applicability in the visual domain. Underlying our approach (in common with work using \"auxiliary tasks\") is the hypothesis that the ability to control aspects of the environment is an inherently useful skill to have. We incorporate such subgoals in an end-to-end hierarchical reinforcement learning system and test two variants of our algorithm on a number of games from the Atari suite. We highlight the advantage of our approach in one of the hardest games -- Montezuma's revenge -- for which the ability to handle sparse rewards is key. Our agent learns several times faster than the current state-of-the-art HRL agent in this game, reaching a similar level of performance. UPDATE 22/11/17: We found that a standard A3C agent with a simple shaped reward, i.e. extrinsic reward + feature control intrinsic reward, has comparable performance to our agent in Montezuma Revenge. In light of the new experiments performed, the advantage of our HRL approach can be attributed more to its ability to learn useful features from intrinsic rewards rather than its ability to explore and reuse abstracted skills with hierarchical components. This has led us to a new conclusion about the result.", "target": ["報酬が疎な環境でどうエージェントを学習させるかについての取り組み。先日の好奇心の導入(#308)と同様、環境変化の最大化を内発的な報酬にしており、こちらはピクセルレベルと特徴マップレベルで検証を行っている。ピクセルの方が即時性があるが、学習が進むにつれ特徴マップの方が効いてくるとのこと。"]} {"source": "In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MSCOCO dataset, FOIL-COCO, which associates images with both correct and \"foil\" captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake (\"foil word\"). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image.", "target": ["画像キャプションの研究について、本当に理解してキャプションを生成しているのかをテストするデータセットの提案。MS COCOのキャプションに意図的に一つのFoil(=間違い)(犬から猫に置き換えなど)を潜ませ、それを検知できるか、また間違いの場所を特定/訂正できるかをテストする。"]} {"source": "We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. 
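Conditional instance normalization, the mechanism behind the arbitrary-style-transfer record above, keeps one shared network and swaps only a per-style scale and shift applied after instance normalization. A minimal table-lookup sketch; in the paper's extension the (gamma, beta) pair is predicted from a style image by a separate network rather than stored in an embedding table.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalInstanceNorm(nn.Module):
    """Instance norm whose affine parameters are selected per style."""
    def __init__(self, num_features, num_styles):
        super().__init__()
        self.gamma = nn.Embedding(num_styles, num_features)
        self.beta = nn.Embedding(num_styles, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, style_id):            # x: (b, c, h, w)
        x = F.instance_norm(x)                 # normalize per sample/channel
        g = self.gamma(style_id).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(style_id).unsqueeze(-1).unsqueeze(-1)
        return g * x + b                       # style-specific scale and shift

cin = ConditionalInstanceNorm(16, num_styles=32)
out = cin(torch.randn(4, 16, 8, 8), torch.randint(0, 32, (4,)))
print(out.shape)                               # torch.Size([4, 16, 8, 8])
```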
We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.", "target": ["Batch NormalizationをRNNにも適用しようという話。具体的には、RNNの再帰部分の重みについても正規化を行う(論文中では、LSTMの重みに対して適用を行なっている)。 タイムステップを通じた統計量でなく、各タイムステップで計算した統計量を使うと良いとのこと(評価時は時間を通じた統計量を使用する)。"]} {"source": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent.", "target": ["学習データを中央に集めない分散学習の問題Federated optimizationの定式化。通信量やプライバシーの観点から各モバイルデバイスで収集される学習サンプルをデバイス上で学習し集約すること考える。DNNをターゲットとしSGDベースのアルゴリズムFedAvgを提案。複数のネットワークアーキテクチャで少ない通信回数で学習することを実現した。AISTATS 2017"]} {"source": "In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL", "target": ["強化学習において、特に高次元になると報酬が得られる機会はとても少なくなる。そこで「好奇心」、つまり新規性のある環境への到達について報酬を設定することで学習速度を上げる試み。これによりベースライン(A3C)よりも高い学習性能を記録することができた。Doomとマリオブラザーズのデモ有"]} {"source": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. 
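The federated learning entry above aggregates locally computed updates by iterative model averaging (FedAvg). A toy numpy sketch under simplifying assumptions: linear least-squares clients, plain local SGD, and a server that takes a dataset-size-weighted average of client weights.

```python
import numpy as np

def fed_avg_round(global_w, clients, local_steps=5, lr=0.1):
    """clients: list of (X, y) data shards; returns the size-weighted average of local models."""
    new_ws, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(local_steps):            # local SGD on the client's own data
            grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient as a stand-in task
            w -= lr * grad
        new_ws.append(w)
        sizes.append(len(y))
    return np.average(new_ws, axis=0, weights=np.array(sizes, dtype=float))
```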
This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and Pascal VOC.", "target": ["ディープラーニングを用いた高速で高精度な教師なし学習アルゴリズムを提案。SoTA。低次元の超球からサンプリングしたtarget vectorsに特徴マップ(ベクトル)を近づけるように学習。ハンガリアン法をミニバッチ内で実行すれば高速に割り当て問題を解ける。片側の行列をupdate。損失関数は計算高速化のためにseparable square lossを使用。"]} {"source": "Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit \"exposure bias\" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.", "target": ["文章要約で教師有と強化学習を併用したという話。モデルはSeq2Seq+Attention(リピート防止にDecoder側も参照)。単純な教師有だと「正解」に固執する傾向があるため、自由度を持たせた強化学習も導入(ROUGEにより良さを評価)。補完的に働かせ良好な結果が得られた"]} {"source": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.", "target": ["Information Retrieval(クエリに基づいて、検索対象のドキュメントをランキングするような手法)について、ニューラルネットワークだけでなく既存の手法も取り上げ比較を行い、DNNの応用まで言及するという全部入りのありがたいチュートリアル。"]} {"source": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. 
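The Noise As Targets entry above fixes random unit-sphere targets and re-assigns them to features within each mini-batch; the summary notes the assignment can be solved with the Hungarian algorithm. A numpy/SciPy sketch of that batch reassignment (the paper's separable-loss speedup is omitted for clarity).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def nat_batch_loss(features, targets):
    """features, targets: (B, d), both L2-normalized; loss under the best 1:1 assignment."""
    cost = -features @ targets.T             # maximize dot products == minimize negative
    row, col = linear_sum_assignment(cost)   # Hungarian assignment within the batch
    assigned = targets[col]
    return np.mean(np.sum((features - assigned) ** 2, axis=1)), col
```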
We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "target": ["Facebookが発表したCNNによる翻訳の研究。翻訳の評価指標であるBLEUスコアを改善できただけでなく、9倍の高速化に成功した。 具体的には、入力文を畳み込んだものと、生成済みの出力文を畳み込んだものとで内積を取ることでAttentionを作成して、これを利用している。"]} {"source": "Deep neural networks are powerful and popular learning models that achieve state-of-the-art pattern recognition performance on many computer vision, speech, and language processing tasks. However, these networks have also been shown susceptible to carefully crafted adversarial perturbations which force misclassification of the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when these systems are deployed in the real world. In this work, we focus on deep convolutional neural networks and demonstrate that adversaries can easily craft adversarial examples even without any internal knowledge of the target network. Our attacks treat the network as an oracle (black-box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation to a randomly selected single pixel or a small set of them. We then improve the effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally extend to a stronger notion of misclassification. Our extensive experimental results illustrate that even these elementary attacks can reveal a deep neural network's vulnerabilities. The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks.", "target": ["入力画像に細工を施すことで、誤識別を誘発できるという検証。ネットワークの構成がわかっているホワイトボックス型と不明なブラックボックス型があるが、本研究では後者の手法で検証。ランダムに選択された画像の各所から最も識別に影響を与える箇所を探索する形(greedy local-search)で実装を行い、1~3%程度のピクセルを操作するだけでエラーレートを跳ね上げられることを確認。"]} {"source": "We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. 
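The convolutional seq2seq entry above relies on gated linear units to ease gradient propagation. A PyTorch sketch of the gating: the block computes two parallel convolutions and uses one, through a sigmoid, to gate the other; the sizes are illustrative.

```python
import torch

def glu_conv(x, conv_a, conv_b):
    """x: (N, C, T). GLU(x) = A * sigmoid(B), with A and B separate 1-D convolutions."""
    return conv_a(x) * torch.sigmoid(conv_b(x))

# usage sketch: two parallel convolutions over the time axis
conv_a = torch.nn.Conv1d(256, 256, kernel_size=3, padding=1)
conv_b = torch.nn.Conv1d(256, 256, kernel_size=3, padding=1)
y = glu_conv(torch.randn(8, 256, 20), conv_a, conv_b)
```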
Experiments show this to be an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that clearly outperform previous code generation and semantic parsing approaches.", "target": ["プログラミング言語のコード自動生成タスクにおいて、言語の抽象構文木(AST)を取り入れ精度向上。SoTA。自然言語をASTに変換するモデルも定義。DecoderにAPPLYRULE, GENTOKENという操作が加えられる。Pythonコード生成タスクにおいて、SoTA手法と比較して10ポイント程度accuracyが向上。"]} {"source": "We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of \"image analogy\" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.", "target": ["画像上の特徴を転移させる試み。A→A'=B→B'というアナロジーを元にモデルを組んでいる(AとB'が既知で、変換後のA'と変換前(A風の)Bを推定)。学習済みVGGから5つの階層別特徴マップを作成し、最上位から似た特徴点の探索、それによるA'/Bの再構成、を繰り返して作成を行う"]} {"source": "Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.", "target": ["自然言語を実行可能な論理式に変換する試み。「理解していないけど最終的な実行結果は合っている」タイプの変換を避けるのを課題としている。このために強化学習と周辺尤度最大化を複合した手法(RANDOMER)を提案。探索的にノイズを加えたビームサーチ・頻度に依存しない重みの付与の二点が肝"]} {"source": "In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we particularly consider generating Japanese captions for images. Since most available caption datasets have been constructed for the English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO, which is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. 
In the experiment, we show that a neural network trained using STAIR Captions can generate more natural and better Japanese captions, compared to those generated using English-Japanese machine translation after generating English captions.", "target": ["MS COCOの画像に対する日本語キャプションのデータセットが公開。単純に翻訳するより良好な結果。"]} {"source": "This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.", "target": ["GoogleのSmart Replyの仕組み(具体的には受信メールを元に返信候補をランキングする箇所)について。本文や件名など、複数パートのn-gramから特徴を抽出し、受信/返信候補で内積をとるモデルを採用。基本的なSeq2Seq(ベクトル表現のみ利用)より高速かつ同精度を達成"]} {"source": "Most natural videos contain numerous events. For example, in a video of a \"man playing a piano\", the video might also contain \"another man dancing\" or \"a crowd clapping\". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.", "target": ["動画内の複数のイベント(時間的な重複あり)を説明文にする話。3D-CNNで特徴抽出→イベント範囲推定→説明文生成の流れ。説明文生成時は他イベントをアテンションしたり、過去や未来のイベントをコンテキスト情報に用いる。ActivityNet CaptionデータセットでSOTA。"]} {"source": "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.", "target": ["文のエンコードの速度を上げようという話。そのための3つの目的関数(タスク)を提唱。1.文が連続したものか否かの0/1、2.段落冒頭3文に続く一文を5つの中から選択、3.接続詞を抜いた上で、そのタイプを予測させる。※学習データは自動作成(教師なしなので)。これで6~40倍の高速化。"]} {"source": "Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. 
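The Smart Reply entry above scores message-response pairs by a dot product between their feed-forward encodings. A numpy sketch of the ranking step, with the encoders abstracted away as precomputed vectors.

```python
import numpy as np

def rank_responses(message_vec, response_vecs, top_k=3):
    """message_vec: (d,); response_vecs: (R, d). Higher dot product == better candidate."""
    scores = response_vecs @ message_vec
    order = np.argsort(-scores)[:top_k]      # indices of the top-k candidate responses
    return order, scores[order]
```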
We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment.", "target": ["単語をword2vecのように単一点(ベクトル)ではなく、広がりを持った分布で表現しようという試み。複数の意味を表現するため、分布を複合した多峰分布で表現する。word2vecの学習法を取り入れつつ(近くに来る単語の分布の距離は近いとする)学習。類似度推定等のタスクで最高精度"]} {"source": "There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL.", "target": ["CNNは強力だけどそもそも画像をそんなに用意できないというケースのために、少ない画像でも良く識別できるようなネットワーク構成を提案。畳み込んだ層とアップサンプリングしていった層を合わせることで局所+グローバルで有効な特徴を学習させる"]} {"source": "Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to \"naturalize\" the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.", "target": ["プログラミング言語と自然言語の橋渡しをする試み。自然言語をどうにか上手く解釈するというトップダウンの方式ではなく、プログラムでの記述を「自然言語化する」というボトムアップのアプローチを取っている。そのデータを集めるために、ブロック積みのゲームVoxelurnを開発したという話"]} {"source": "In this paper, we study a new learning paradigm for Neural Machine Translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it Adversarial-NMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed Convolutional Neural Network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by a human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. 
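The U-Net entry above fuses a contracting path with a symmetric expanding path through skip connections. A minimal PyTorch sketch of one decoder step: upsample, concatenate the matching encoder features, then convolve; the channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One U-Net decoder step: upsample, concatenate the encoder skip, convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.up(x)                   # recover spatial resolution
        x = torch.cat([x, skip], dim=1)  # fuse local detail from the contracting path
        return torch.relu(self.conv(x))
```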
Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.", "target": ["GANを翻訳に適用しようという試み。人の翻訳vs機械翻訳(自分の出力)を見分ける識別器と、翻訳生成側(Enc/Dec)を戦わせるという構図。識別には原文と翻訳文を繫げたマップを畳み込むCNNを使用、生成側は出力が離散なので強化学習の枠組み(識別を騙せたら報酬)を使って学習している"]} {"source": "Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A scene parsing benchmark is built upon the ADE20K with 150 object and stuff classes included. Several segmentation baseline models are evaluated on the benchmark. A novel network design called Cascade Segmentation Module is proposed to parse a scene into stuff, objects, and object parts in a cascade and improve over the baselines. We further show that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis.", "target": ["セマンティックセグメンテーションの問題において、(1) アノテーションの曖昧性を排除、(2) カテゴリ数の増加(150カテゴリ)、(3) サブカテゴリを導入したデータセットを提案。"]} {"source": "Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.", "target": ["自然言語処理において単語の予測をクラス分類のように0/1でやるのは不自然だということで、予測分布の距離を加味することを提案。また、それにより入力のembeddingと出力のprojectionは使いまわしが可能になることを理論的に証明。これによりパラメーター量も下げられる、はず。"]} {"source": "We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predictable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior.", "target": ["LSTM/GRUよりシンプルな、input/forgetのみの構成を提案。これにより、LSTM/GRUにおける謎な挙動(※)を回避しつつ精度を出せた。検証は言語モデル(Penn Treebank)。 ※入力がない場合に発生するらしく、普段0でたまに1というケースでは不都合になる"]} {"source": "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. 
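The embedding-tying entry above shares the input embedding matrix with the output projection. In PyTorch this reduces to a one-line parameter share; the vocabulary and dimension below are placeholders.

```python
import torch.nn as nn

vocab, dim = 10000, 512
embedding = nn.Embedding(vocab, dim)
decoder = nn.Linear(dim, vocab, bias=False)
decoder.weight = embedding.weight   # tie: one (vocab, dim) matrix serves both roles
```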
Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice.", "target": ["DNNが持つ汎化性能の謎に迫る論文。DNNにとっては全てのラベルを覚えてしまうことは簡単なのに、正則化だけでは説明のつかない汎化性能を記録していることを確認。仮説としてSGD自体が汎化性能に貢献している?という提案をしている。"]} {"source": "Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This book attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. Besides, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.", "target": ["自動運転に関わる技術は複合的でなおかつ進歩も早いので、初心者にはかなり入りづらくなっている。そこで、自動運転にまつわる画像認識の技術について基礎論文と現時点での最高精度をまとめ、また学習に利用可能なデータセットについてもリストアップ。67pの大作。"]} {"source": "Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. 
We encourage further application of this technique to additional complex modern architectures.", "target": ["学習時に勾配にガウシアンノイズを加えると精度が上がる。"]} {"source": "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the Information Plane; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training error becomes small and the Stochastic Gradient Descent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one-layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.", "target": ["SGDによる最適化には ・Empirical Error Minimization ・Representation Compression という異なる二つのフェイズがあることを明らかにした。"]} {"source": "We introduce the first deep reinforcement learning agent that learns to beat Atari games with the aid of natural language instructions. The agent uses a multimodal embedding between environment observations and natural language to self-monitor progress through a list of English instructions, granting itself reward for completing instructions in addition to increasing the game score. Our agent significantly outperforms Deep Q-Networks (DQNs), Asynchronous Advantage Actor-Critic (A3C) agents, and the best agents posted to OpenAI Gym on what is often considered the hardest Atari 2600 environment: Montezuma's Revenge.", "target": ["言葉によるナビで、より高精度のプレイを素早く学習させようという試み。通常の強化学習の仕組みに加えて、「指示を実行できたか」という報酬を追加で与えている。"]} {"source": "Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. 
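The gradient-noise entry above adds annealed Gaussian noise to every gradient update. A numpy sketch with the decaying-variance schedule sigma_t^2 = eta / (1 + t)^gamma used in that line of work; the eta and gamma values are the commonly reported defaults, stated here as assumptions.

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.01, gamma=0.55):
    """Add annealed Gaussian noise: variance eta / (1 + t)**gamma shrinks over training."""
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)
    return grad + np.random.normal(0.0, sigma, size=grad.shape)
```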
Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.", "target": ["LAPGANなどで用いられているLaplacian pyramid frameworkを活用した超解像手法。SoTA(speed and accuracy)。Feature Extraction BranchとImage Reconstruction Branchの2つのブランチを持つネットワーク構造。bicubic interpolationを使用しないことで計算量の削減と質の向上を図った。lossはl_2 lossでなくCharbonnier lossを用いた。"]} {"source": "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "target": ["強化学習において、Asynchronous(非同期)で、Actor-CriticをベースとしてAdvantageを利用して学習するワーカーを、並列に走らせて学習する=A3Cを提唱。Advantageは、行動が推定より良い結果をもたらしたかで表される(R-V(s))。イメージ的には分身の術を使って学習結果を統合する感じ。"]} {"source": "Convolutional autoregressive models have recently demonstrated state-of-the-art performance on a number of generation tasks. While fast, parallel training methods have been crucial for their success, generation is typically implemented in a naïve fashion where redundant computations are unnecessarily repeated. This results in slow generation, making such models infeasible for production environments. In this work, we describe a method to speed up generation in convolutional autoregressive models. The key idea is to cache hidden states to avoid redundant computation. We apply our fast generation method to the Wavenet and PixelCNN++ models and achieve up to 21× and 183× speedups respectively.", "target": ["CNNを使った自己回帰モデルの(hidden stateの)途中結果をキャッシュしておくことで計算の無駄な繰り返しを無くしWavenetを21倍、PixelCNN++を183倍早くした。"]} {"source": "Softmax GAN is a novel variant of Generative Adversarial Network (GAN). The key idea of Softmax GAN is to replace the classification loss in the original GAN with a softmax cross-entropy loss in the sample space of one single batch. In the adversarial learning of N real training samples and M generated samples, the target of discriminator training is to distribute all the probability mass to the real samples, each with probability 1/N, and distribute zero probability to generated data. In the generator training phase, the target is to assign equal probability to all data points in the batch, each with probability 1/(M+N). While the original GAN is closely related to Noise Contrastive Estimation (NCE), we show that Softmax GAN is the Importance Sampling version of GAN. 
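The LapSRN entry above trains with the robust Charbonnier loss, a smooth relative of L1. A one-function PyTorch sketch; the epsilon value is the conventional choice, not necessarily the paper's exact setting.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """sqrt((x - y)^2 + eps^2): behaves like L1 but stays differentiable near zero."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))
```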
We further demonstrate with experiments that this simple change stabilizes GAN training.", "target": ["GANの学習安定化のためにclassification lossをbinaryでなくてMulticlass cross entropy loss(softmax loss)にした。"]} {"source": "We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for sarcasm research and for training and evaluating systems for sarcasm detection. The corpus has 1.3 million sarcastic statements -- 10 times more than any previous dataset -- and many times more instances of non-sarcastic statements, allowing for learning in both balanced and unbalanced label regimes. Each statement is furthermore self-annotated -- sarcasm is labeled by the author, not an independent annotator -- and provided with user, topic, and conversation context. We evaluate the corpus for accuracy, construct benchmarks for sarcasm detection, and evaluate baseline methods.", "target": ["皮肉を検出するための大規模コーパスの公開。Redditという掲示板のデータから、130万のデータが提供。アノテーションは投稿者自身が行っている(皮肉コメントには/sがついている)。Redditには皮肉に/sをつける文化があるらしい(HTMLのタグで囲むようにするのが発祥とのこと)"]} {"source": "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.", "target": ["要約についての論文で、抜粋型と生成型のいいところ取りをするという手法。Seq2Seqの不正確+繰り返しが多いという弱点を、生成or抽出(入力文からのコピー)をスイッチする確率p_genの導入+カバレッジの担保(Attentionの分布を利用)により克服している。"]} {"source": "Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words. However, in many writing systems compositionality has an effect even on the character-level: the meaning of a character is derived from the sum of its parts. In this paper, we model this effect by creating embeddings for characters based on their visual characteristics, creating an image for the character and running it through a convolutional neural network to produce a visual character embedding. Experiments on a text classification task demonstrate that such a model allows for better processing of instances with rare characters in languages such as Chinese, Japanese, and Korean. Additionally, qualitative analyses demonstrate that our proposed model learns to focus on the parts of characters that carry semantic content, resulting in embeddings that are coherent in visual space.", "target": ["文書の分類に、文字の画像情報を利用しようという試み。具体的には、文字をCNNにかけてそれをRNNでエンコードしていき分類を行う。Wikipediaのタイトルを利用し、画像ではない通常の文字単位のembeddingを使うモデル(LOOKUP)と比較して検証。低頻度語に強い分類が可能となった。"]} {"source": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. 
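The pointer-generator entry above mixes a vocabulary distribution with a copy distribution via the switch probability p_gen. A numpy sketch of the final word distribution; the mapping from source positions to vocabulary ids is simplified.

```python
import numpy as np

def final_distribution(p_vocab, attention, src_ids, p_gen):
    """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * attention mass on source copies of w."""
    p = p_gen * p_vocab
    for pos, word_id in enumerate(src_ids):   # scatter attention mass onto copied words
        p[word_id] += (1.0 - p_gen) * attention[pos]
    return p
```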
Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "target": ["正則化をより効果的に行う手法の提案。予測分布を最も大きく変えてしまうような変動を入力データに与え・・・ても、予測ができるように学習をさせる。"]} {"source": "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right-sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes and large scale geo-localization.", "target": ["モバイル向けにDNNのサイズを小さくしよう(計算コストを軽くしよう)という試み。面の畳み込み(depth wise)とチャンネル方向の畳み込み(point wise)を分け計算コストを削減し、さらに面の広さとチャンネルの深さに係数をかけ、精度と計算量のバランスを調整している。"]} {"source": "In this paper, we aim to estimate the winner of a world-wide film festival from the exhibited movie poster. The task is extremely challenging because the estimation must be done with only an exhibited movie poster, without any film ratings and box-office takings. In order to tackle this problem, we have created a new database which consists of all movie posters included in the four biggest film festivals. The movie poster database (MPDB) contains historic movies spanning over 80 years that were nominated for a movie award in each year. We apply a couple of feature types, namely hand-crafted, mid-level, and deep features, to extract various information from a movie poster. Our experiments showed suggestive knowledge; for example, Academy Award estimation is more accurate with a color feature, and a facial emotion feature generally performs well on the MPDB. The paper suggests the possibility of modeling human taste for movie recommendation.", "target": ["世界の4つの映画賞におけるポスターのみの情報から作品賞となる映画の予測を試みた論文。Haar-likeのような従来のCV手法や深層学習などいくつかの特徴抽出を試している。予測器にはSVMを使用。LabとEmotionNetでの結果がよく、ポスターの色情報とポスター内の顔の表情の影響が大きいことが分かった。"]} {"source": "In recent years we have seen rapid and significant progress in automatic image description but what are the open problems in this area? Most work has been evaluated using text-based similarity metrics, which only indicate that there have been improvements, without explaining what has improved. 
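The MobileNets entry above factorizes a standard convolution into a depthwise (per-channel spatial) convolution followed by a pointwise 1x1 convolution. A PyTorch sketch of one such block; batch norm and activations are omitted for brevity.

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """Depthwise conv (groups=in_ch) + 1x1 pointwise conv, as in MobileNet-style blocks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch),  # spatial, per channel
        nn.Conv2d(in_ch, out_ch, 1),                                         # mix channels
    )
```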
In this paper, we present a detailed error analysis of the descriptions generated by a state-of-the-art attention-based model. Our analysis operates on two levels: first we check the descriptions for accuracy, and then we categorize the types of errors we observe in the inaccurate descriptions. We find only 20% of the descriptions are free from errors, and surprisingly that 26% are unrelated to the image. Finally, we manually correct the most frequently occurring error types (e.g. gender identification) to estimate the performance reward for addressing these errors, observing gains of 0.2--1 BLEU point per type.", "target": ["画像のキャプション生成タスクにおける誤りをPeople, Subject, Object, Generalの4つに分類しエラー分析を実施。改善可能性や評価方法について探った。多くの誤りがPeopleまたはGeneral。教師の説明文を正しく学習出来るように修正することでBLEUが最大1.0改善。"]} {"source": "We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format.", "target": ["GANのようにピクセル単位の画像(ラスタライズ)ではなく、ストローク単位の画像(ベクター画像)を生成する試み(人が絵を描くときは後者に近い)。 ストロークをペンの状態で表現し(前座標との差分・始点・終点・描画終了)、これをRNN(Encoder/Decoder)+VAEでモデル化"]} {"source": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.", "target": ["GANにおけるGeneratorとAutoEncoderにおけるEncoderを競わせるという手法の提案。BiGAN(#270)と同様の流れで、内容はBEGAN(#265)とほぼかぶっている(というか下位互換)。"]} {"source": "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "target": ["GANがとらえている特徴を逆算して獲得しようというBi-directionalなGANの仕組みの提案(BiGAN)。これ自体の利用は、ほかの分野でも使えそう。"]} {"source": "This report is targeted to groups who are subject matter experts in their application but deep learning novices. 
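The sketch-rnn entry above models drawings as sequences of pen states rather than pixels. A sketch of the stroke-5 rows such a model consumes: offsets (dx, dy) plus a one-hot pen state (down, lifted, end of drawing); the concrete numbers are made up.

```python
import numpy as np

# Each row: (dx, dy, p_pen_down, p_pen_up, p_end) -- offsets plus a one-hot pen state.
strokes = np.array([
    [ 5.0,  0.0, 1, 0, 0],   # drawing while moving right
    [ 0.0,  5.0, 1, 0, 0],   # drawing while moving up
    [-3.0, -2.0, 0, 1, 0],   # pen lifted between strokes
    [ 0.0,  0.0, 0, 0, 1],   # end of drawing
], dtype=np.float32)
```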
It contains practical advice for those interested in testing the use of deep neural networks on applications that are novel for deep learning. We suggest making your project more manageable by dividing it into phases. For each phase this report contains numerous recommendations and insights to assist novice practitioners.", "target": ["深層学習をアプリケーションで利用する際のすすめ方や注意点についての話。問題の定義(inとout)をしっかり行うこと、検証が済んでいるモデル(公開されているコードetc)からはじめること、結果の見える化をしとくこと、などが書かれている"]} {"source": "Recurrent neural networks (RNNs) process input text sequentially and model the conditional transition between word tokens. In contrast, the advantages of recursive networks include that they explicitly model the compositionality and the recursive structure of natural language. However, the current recursive architecture is limited by its dependence on a syntactic tree. In this paper, we introduce a robust syntactic parsing-independent tree-structured model, Neural Tree Indexers (NTI), that provides a middle ground between the sequential RNNs and the syntactic tree-based recursive models. NTI constructs a full n-ary tree by processing the input text with its node function in a bottom-up fashion. An attention mechanism can then be applied to both structure and node function. We implemented and evaluated a binary-tree model of NTI, showing that the model achieves state-of-the-art performance on three different NLP tasks: natural language inference, answer sentence selection, and sentence classification, outperforming state-of-the-art recurrent and recursive neural networks.", "target": ["RNNでなくRecursiveを使うメリットとして構造が扱えるという点があるが、この構造は構文木のパースに依存するという問題点があった。 そこで、それを克服するために構造自体も推定対象にするという提案。Leaf用/Node用LSTMを使い親の推定もタスクに含む(構造は二分木限定)"]} {"source": "Generative models in vision have seen rapid progress due to algorithmic improvements and the availability of high-quality image datasets. In this paper, we offer contributions in both these areas to enable similar progress in audio modeling. First, we detail a powerful new WaveNet-style autoencoder model that conditions an autoregressive decoder on temporal codes learned from the raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality dataset of musical notes that is an order of magnitude larger than comparable public datasets. Using NSynth, we demonstrate improved qualitative and quantitative performance of the WaveNet autoencoder over a well-tuned spectral autoencoder baseline. Finally, we show that the model learns a manifold of embeddings that allows for morphing between instruments, meaningfully interpolating in timbre to create new types of sounds that are realistic and expressive.", "target": ["WaveNetベース(non-causal dilated convolution=現時点までの音を、間引きして畳み込む)のAuto-Encoderにより、End-to-Endの音声生成を行ったという話。 Magentaで実装が公開されており、学習データも提供されている。"]} {"source": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. 
This is achieved while using a relatively simple model architecture and a standard training procedure.", "target": ["GANの学習を安定させる試み。実際のサンプルに近づけるのでなく、Auto-Encoderの誤差分布に近づける(距離はWasserstein)という点と、DとGの間のlossの割合(γ)を導入し、学習の均衡と収束判定を容易にした(lossが低い+平衡になったらok)。"]} {"source": "Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. Following this lead, we investigate using CNNs for generating melody (a series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distributions of melodies, making it a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g. a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e. tracks). We conduct a user study to compare the eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody. Results show that MidiNet performs comparably with MelodyRNN models in being realistic and pleasant to listen to, yet MidiNet's melodies are reported to be much more interesting.", "target": ["CNN+GANでMIDIを作成しようという試み。時間(長さは1小節分・16分音符単位で16)x音 (MIDIの128音)で2次元で小節を表現し、和音を1次元のベクトルで表現し、これを組み合わせてマップを作り、学習する。"]} {"source": "Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance.", "target": ["GANによって音声の質を改善する研究。図のように、Auto Encoderのような形式でGeneratorを形成して学習を行う。この構造は音声以外にも適用できそうな印象。"]} {"source": "We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first one is \"matrix factorization by design\" of LSTM matrix into the product of two smaller matrices, and the second one is partitioning of LSTM matrix, its inputs and states into the independent groups. 
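The BEGAN entry above balances generator and discriminator with a control variable k_t and reports a convergence measure, both computed from the discriminator's autoencoder losses. A sketch of that bookkeeping step; gamma and lambda follow the paper's notation, with illustrative default values.

```python
def began_step(L_real, L_fake, k, gamma=0.5, lam=1e-3):
    """One BEGAN bookkeeping step given the discriminator's AE losses on real/fake batches."""
    loss_D = L_real - k * L_fake                         # discriminator objective
    loss_G = L_fake                                      # generator objective
    k = k + lam * (gamma * L_real - L_fake)              # keep L_fake / L_real near gamma
    k = max(0.0, min(1.0, k))
    convergence = L_real + abs(gamma * L_real - L_fake)  # global convergence measure
    return loss_D, loss_G, k, convergence
```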
Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters.", "target": ["LSTMの計算高速化のためのテクニックの提案。LSTMは内部で4つのゲートがあるが、これらはまとめて計算が可能⇒4ゲート分をまとめたサイズの重み行列(T)で計算し、計算後に切り分ける。このTを二つの重みW1/W2の積で表現+入力をグループにわけて並列化する。TFでの実装あり。"]} {"source": "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G:X→Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F:Y→X and introduce a cycle consistency loss to push F(G(X))≈X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "target": ["通常のimage2imageでは元ドメインと変換先ドメインの画像のペア(線画と着色済みなど)が必要だったが、ペアでなくても変換を可能にしたという話。そのからくりは「元の画像」と、その画像を「変換+逆変換」して元に戻したものの間の誤差で学習を行うというもの。名付けてCycleGAN"]} {"source": "We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing the best results to-date with pre-defined representations while being competitive with Deep CNNs. Using a shallow cascade of 1 x 1 convolutions, which encodes scattering coefficients that correspond to spatial windows of very small sizes, permits obtaining AlexNet accuracy on the ImageNet ILSVRC2012. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the Resnet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.", "target": ["DNNの下層部分は転移が可能なことからも非常に汎用的な層になっている。だったら学習は不要では?ということでこの一層目を汎用的な特徴が計算できる散乱変換(Scattering Transform)で置き換えるという話、のはず。"]} {"source": "In this paper, we propose the \"adversarial autoencoder\" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. 
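The CycleGAN entry above couples two generators with a cycle-consistency loss pushing F(G(x)) back to x and G(F(y)) back to y. A PyTorch sketch of that term; the weight lam=10 is the commonly used setting, stated here as an assumption.

```python
import torch

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, weighted by lam."""
    loss_x = torch.mean(torch.abs(F(G(x)) - x))
    loss_y = torch.mean(torch.abs(G(F(y)) - y))
    return lam * (loss_x + loss_y)
```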
We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks.", "target": ["VAEでは獲得する潜在空間が(正規)分布に従うと仮定し、潜在空間と仮定した分布との差異を最小化するが、ここで利用しているKL距離は特定の分布でないと使えない。そこでGAN的な思想で潜在空間と仮定分布からのサンプルの真偽をとり学習を行うと言う手法。"]} {"source": "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010), and by Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training.", "target": ["ラベル付けを行う際、アノテーター同士で意見が割れる場合は多数決のラベルで学習することが多い。が、そうではなくまずアノテーターごとのWeightを学習させて、その上に各アノテーターのWeightをどう組み合わせるかを学習させる、という風にした方が高精度になるという話。"]} {"source": "We introduce the value iteration network (VIN): a fully differentiable neural network with a `planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "target": ["DQNに代表されるCNNから直接行動を推定する手法は、「その場の情報」だけで「行動」する形であり未知の状態への適応が困難。そこで鉄板の価値反復法を「微分可能な形」で組み込もうという話(図参照)。これにより汎化性能を向上できた。"]} {"source": "Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an image and the corresponding trimap as inputs and predict the alpha matte of the image. The second part is a small convolutional network that refines the alpha matte predictions of the first network to have more accurate alpha values and sharper edges. 
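The value iteration network entry above observes that value iteration on a grid can be written as convolution plus a max over actions, which is what makes the planning module differentiable. A numpy/SciPy sketch of one such iteration with illustrative 3x3 transition kernels.

```python
import numpy as np
from scipy.ndimage import convolve

def value_iteration_step(V, R, kernels, discount=0.9):
    """V, R: (H, W) value/reward maps; kernels: list of (3, 3) transition filters."""
    Q = np.stack([R + discount * convolve(V, k, mode="nearest") for k in kernels])
    return Q.max(axis=0)   # max over actions, as in the VI module of a VIN
```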
In addition, we create a large-scale image matting dataset including 49300 training images and 1000 testing images. We evaluate our algorithm on the image matting benchmark, our testing set, and a wide variety of real images. Experimental results clearly demonstrate the superiority of our algorithm over previous methods.", "target": ["Adobeによる、背景除去についての研究。前景/背景のマップ(Alpha Matte)を、入力画像とそのTrimap(絶対前景・絶対背景・よくわからんの3領域のマップ)から予測させるというもの。予測用・補正用という二段階での構成。"]} {"source": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "target": ["Adobeから出てきた写真のスタイルトランスファーの論文。「写真っぽさ」を維持するために、色の変換が(色の)アフィン変換の範囲で行われるように、そしてスタイルの適用時にセグメンテーションの制限を設ける(ビルにはビル部分のスタイルが適用されるようにするなど)という手法。"]} {"source": "We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: this https URL", "target": ["画像のセグメンテーションについての研究。領域抽出>オブジェクト領域+認識を解くというFaster R-CNNに、さらにオブジェクトマスクのタスクも解かせるという手法。Maskと認識のタスクが競合しないように、sigmoid/binary lossを使うのがポイントとのこと。"]} {"source": "Despite the rapid progress in style transfer, existing approaches using feed-forward generative networks for multi-style or arbitrary-style transfer usually compromise image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using 1-dimensional style embedding. Motivated by this, we introduce the CoMatch Layer that learns to match the second-order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net), which achieves real-time performance.
We also employ a specific strategy of upsampled convolution, which avoids the checkerboard artifacts caused by fractionally-strided convolution. Our method has achieved superior image quality compared to state-of-the-art approaches. The proposed MSG-Net, as a general approach for real-time style transfer, is compatible with most existing techniques including content-style interpolation, color-preserving, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for Torch, PyTorch and MXNet frameworks will be publicly available.", "target": ["MSGNet:画像のスタイル適用について、複数のスタイルを一モデルで、しかもリアルタイムに適用する。スタイル画像を事前学習済みVGGにかけ、その中間レイヤをコンテンツ側のネットワークに挿入し変換する。この出力をこれも学習済VGGにかけコンテンツ/スタイルのlossを算出し学習する"]} {"source": "This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.", "target": ["ニューラルネットの層としてQP(二次計画)を解く層(入力がQPの各係数等になっていて、出力がそのQPの最適解となるような層)を導入するという話。KKT条件からうまくバックプロパゲーション可能。ミニバッチで学習する際に多量のQPインスタンスを解く必要があるため、GPUベースで複数の問題をバッチで解くソルバを実装。 応用例としてはサイズ縮小版の数独ソルバの学習など。"]} {"source": "We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself.
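The "upsampled convolution" mentioned in the MSG-Net abstract above is a generic trick; a minimal PyTorch sketch of the idea (not the paper's exact block) replaces a fractionally-strided convolution with explicit upsampling followed by an ordinary convolution, which removes the uneven kernel overlap behind checkerboard artifacts.

```python
import torch.nn as nn

def upsample_conv(in_ch, out_ch, scale=2):
    """Upsample-then-convolve block: an alternative to nn.ConvTranspose2d
    that avoids checkerboard artifacts from uneven kernel overlap."""
    return nn.Sequential(
        nn.Upsample(scale_factor=scale, mode="nearest"),
        nn.ReflectionPad2d(1),
        nn.Conv2d(in_ch, out_ch, kernel_size=3),
    )
```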
Computing this \"hypergradient\" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.", "target": ["学習において最重要なパラメーターの一つの学習率αを自動で最適化する話。具体的には、前回との勾配の内積を取り、その大小によりαの更新を行う(=前回と変わらなければ大きく、変わっていれば小さくとる)。この勾配の内積による更新には新たなパラメーターβが必要になるが、これはαほど敏感には影響しないとのこと。"]} {"source": "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available this https URL", "target": ["2つの画像ドメイン間の類似性、言い換えればドメインの変換関数を獲得できるかという研究。互いのドメインの変換を行うに当たり、「変換先のドメインの識別機を騙せる」ように、なおかつ「変換が元の画像をなるべく損なわないようにする」ように最適化を行う。これをDiscoGANと命名。"]} {"source": "Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.", "target": ["DNNはすごく過学習しやすそうなのになぜ一般解(と思われるもの)を獲得できるのかについて、「最適解近辺がフラットだから」という予測があった。しかし挙動が変化しない類のパラメーター変換でも誤差平面の平坦さは影響を受けることが分かった。つまり、平坦の定義も含め議論の余地ありという話。"]} {"source": "Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with 'deep' transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Gersgorin's circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which extend the LSTM architecture to allow step-to-step transition depths larger than one. Several language modeling experiments demonstrate that the proposed architecture results in powerful and efficient models. On the Penn Treebank corpus, solely increasing the transition depth from 1 to 10 improves word-level perplexity from 90.6 to 65.4 using the same number of parameters. 
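The learning-rate update described in the summary above reduces, for plain SGD, to a one-line rule: nudge alpha by the dot product of the current and previous gradients. A minimal sketch under that assumption (`grad_fn` and the initial values are illustrative):

```python
import numpy as np

def hypergradient_sgd(grad_fn, w, alpha=0.001, beta=1e-4, steps=100):
    """SGD whose learning rate alpha is itself adapted online.
    beta is the new hyperparameter, reported to be far less sensitive."""
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        alpha += beta * np.dot(g, g_prev)  # grow alpha while gradients agree
        w = w - alpha * g
        g_prev = g
    return w
```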
On the larger Wikipedia datasets for character prediction (text8 and enwik8), RHNs outperform all previous results and achieve an entropy of 1.27 bits per character.", "target": ["High Wayの手法をRNNに適用する話。High Wayは入力をバイパスするゲートCを設けて、これと隠れ層HをゲートTに通したものを合算させることで入力にない表現のみ学習をさせるような手法。これで伝搬ステップの深いRNNを作る。言語モデル(PTB)とWikipediaの語予測でSOTA"]} {"source": "By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.", "target": ["マルチエージェントの強化学習で、エージェントが他のエージェントに指示を伝えられるようにすることで、エージェントがどう言葉とその意味を開発していくのかを調べたもの。意図した言語獲得にするために、発言にコストをかけたり既存の単語の使用をプラスに評価するなど報酬設計をかなり工夫している"]} {"source": "We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.", "target": ["強化学習における進化戦略(ES)の有用性を示した論文。Virtual batch normalizationを使うことで信頼性up。backprop、value functionなしで計算できてしかも並列化が可能。DQNに比べパフォーマンス+学習にかかるまでの時間で優位との結果"]} {"source": "Variants of Naive Bayes (NB) and Support Vector Machines (SVM) are often used as baseline methods for text classification, but their performance varies greatly depending on the model variant, the features used, and the task/dataset. We show that: (i) the inclusion of word bigram features gives consistent gains on sentiment analysis tasks; (ii) for short snippet sentiment tasks, NB actually does better than SVMs (while for longer documents the opposite result holds); (iii) a simple but novel SVM variant using NB log-count ratios as feature values consistently performs well across tasks and datasets.
Based on these observations, we identify simple NB and SVM variants which outperform most published results on sentiment analysis datasets, sometimes providing a new state-of-the-art performance level.", "target": ["お前たちがベースラインとして使っているSVMとNBで、お前たちの出そうとしているSOTAを記録してやったぜという話。タスクは極性判定と文書分類。bigramのBoWはかなり強力に効く、短い文書だとNB>SVM、そしてNBとSVMを組み合わせるとすごいことになる(全タスクで無双)"]} {"source": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of particular learning algorithm.", "target": ["MoleculeNetという、新薬発見のための分子・分子物理・生体物理・生体?という4種類のデータを包含したデータセットが公開。"]} {"source": "Large computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving.", "target": ["HolStep: Googleから公開された、論理推論を学習するための大規模データセット。与えられた情報の中で推論に重要な点は何か、各推論間の依存関係、そこから導かれる結論は何か、などといったものがタスクとして挙げられている。"]} {"source": "Deep reinforcement learning methods attain super-human performance in a wide range of environments. Such methods are grossly inefficient, often taking orders of magnitude more data than humans to achieve reasonable performance. We propose Neural Episodic Control: a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. Our agent uses a semi-tabular representation of the value function: a buffer of past experience containing slowly changing state representations and rapidly updated estimates of the value function.
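The "NB log-count ratios as feature values" variant from the NB-SVM abstract above can be sketched in a few lines; `X_train`/`y_train` are illustrative names for a binary bag-of-bigrams matrix and its labels, and the smoothing constant follows the common choice of 1.

```python
import numpy as np
from sklearn.svm import LinearSVC

def nb_log_count_ratio(X, y, alpha=1.0):
    """r = log((p / ||p||_1) / (q / ||q||_1)) with smoothed class counts p, q."""
    p = alpha + X[y == 1].sum(axis=0)
    q = alpha + X[y == 0].sum(axis=0)
    return np.log((p / p.sum()) / (q / q.sum()))

# NB-SVM: scale the binary features by r, then fit a linear SVM.
# r = nb_log_count_ratio(X_train, y_train)
# clf = LinearSVC(C=1.0).fit(X_train * r, y_train)
```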
We show across a wide range of environments that our agent learns significantly faster than other state-of-the-art, general-purpose deep reinforcement learning agents.", "target": ["Differentiable Neural Computerのメモリの実装を利用した強化学習の提案。State(ゲーム画面をCNNにかけたもの)をkey、その時のQ値をvalueとしてメモリを構成。Stateが来たら各keyとの間で重みを計算し値を読みだす(メモリのサイズが大きい場合はk-nearestを使用)。A3Cより高速に収束"]} {"source": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson \\chi^2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stably during the learning process. We evaluate LSGANs on five scene datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.", "target": ["GANにおいて、Discriminatorは真偽を判定するためその出力は確率値(sigmoid)になっている。ただ、これだと勾配消失が起きやすい。そこで二乗誤差を使用する手法を提案。具体的には、ネットワークの出力値から定数を引く形で誤差を定義する。この最小化が、ピアソンのカイ二乗ダイバージェンスの最小化と等価であることも証明。"]} {"source": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to the best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future.", "target": ["DNNの構造を人手で決めるのは厳しいので、遺伝的アルゴリズムで探索させようという話。CIFAR-10、PTBでそれぞれ画像と言語モデル、MSCOCOでイメージキャプションを検証。それだけでなく、実際の雑誌サイトのデータでも検証。リソースがあれば任せるのもありな結果。"]} {"source": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as \"the\" and \"of\". Other words that may seem visual can often be predicted reliably just from the language model, e.g., \"sign\" after \"behind a red stop\" or \"phone\" following \"talking on a cell\". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K.
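The least-squares losses in the LSGAN abstract above are simple to state; this sketch uses the common 0/1 coding for the fake/real targets (the paper also discusses other codings), with `d_real`/`d_fake` being raw discriminator outputs without a sigmoid.

```python
import torch

def lsgan_d_loss(d_real, d_fake):
    # push real outputs toward 1 and fake outputs toward 0, in squared error
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    # the generator pushes fake outputs toward the "real" target 1
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```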
Our approach sets a new state-of-the-art by a significant margin.", "target": ["画像からその説明を生成する研究について。基本的なモデルはEncoder-Decoder + Attentionだけれど、単語のtheとかofみたいな語を推測する際に画像を参照する必要なくない?ということで、画像を参照するか否か、するとすればどの領域かを判断するゲートを搭載した。"]} {"source": "Nearest neighbor (kNN) methods have been gaining popularity in recent years in light of advances in hardware and efficiency of algorithms. There is a plethora of methods to choose from today, each with their own advantages and disadvantages. One requirement shared by all kNN-based methods is the need for a good representation and distance measure between samples. We introduce a new method called the differentiable boundary tree, which allows for learning deep kNN representations. We build on the recently proposed boundary tree algorithm which allows for efficient nearest neighbor classification, regression and retrieval. By modelling traversals in the tree as stochastic events, we are able to form a differentiable cost function which is associated with the tree's predictions. Using a deep neural network to transform the data and back-propagating through the tree allows us to learn good representations for kNN methods. We demonstrate that our method is able to learn suitable representations allowing for very efficient trees with a clearly interpretable structure.", "target": ["k-NNにおける有用な表現学習の手法を提案。Differentiable Boundary Treesが中心的な役割。ツリー内のトラバーサルを確率論的事象としてモデル化することにより微分可能なコスト関数を形成。効果的な木を構築するために各ノードおよびqueryを写像する全transition共通のディープニューラルネットを使用。"]} {"source": "Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and non-linearities. A mathematical framework is introduced to analyze their properties. Computations of invariants involve multiscale contractions, the linearization of hierarchical symmetries, and sparse separations. Applications are discussed.", "target": ["DCNN理解のための数理モデル。DCNNが学習に必要とするデータ量は、そのパラメタ空間の次元からするとかなり小さい。したがってDCNNは(近似する関数)f(x)が定常であるような高次元領域を線形化するような変換Φ(x)を見つけ、線形射影によってデータ空間の次元を潰していると考えられる。本論文ではf(x)の値を変えないような変換として平行移動と微分同相写像を例に挙げ、その数学的性質を検討することでDCNNを理解することを試みる。"]} {"source": "Current deep learning models are mostly built upon neural networks, i.e., multiple layers of parameterized differentiable nonlinear modules that can be trained by backpropagation. In this paper, we explore the possibility of building deep models based on non-differentiable modules. We conjecture that the mystery behind the success of deep neural networks owes much to three characteristics, i.e., layer-by-layer processing, in-model feature transformation and sufficient model complexity. We propose the gcForest approach, which generates \\textit{deep forest} holding these characteristics. This is a decision tree ensemble approach, with far fewer hyper-parameters than deep neural networks, and its model complexity can be automatically determined in a data-dependent way. Experiments show that its performance is quite robust to hyper-parameter settings, such that in most cases, even across different data from different domains, it is able to get excellent performance by using the same default setting.
This study opens the door to deep learning based on non-differentiable modules, and exhibits the possibility of constructing deep models without using backpropagation.", "target": ["ハイパパラメタのチューニングがほぼ不要な決定木のアンサンブルメソッドであるgcForestを提案。構造は下層での複数のforestsからの出力をconcatし、それを次の層の複数のforestsの入力に用いるというカスケードモデル。ディープラーニングと比較して、計算資源・必要な教師データ数が少なくてよく、異なるドメインから生成されたデータに対しても頑健、並列化が容易という利点がある。"]} {"source": "Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can give rise to the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available at this https URL", "target": ["教師なしで画像のドメイン変換を行うという試み。変換元と変換先でそれぞれEncoder->画像生成(VAE)->画像判定(Discriminator)を用意、どっちに入れてもどっちも騙せるように訓練する。元は同じ画像なので画像特徴は同じはず、とし高位層での重み共有の制約を入れている。"]} {"source": "When training neural networks, the use of Synthetic Gradients (SG) allows layers or modules to be trained without update locking - without waiting for a true error gradient to be backpropagated - resulting in Decoupled Neural Interfaces (DNIs). This unlocked ability of being able to update parts of a neural network asynchronously and with only local information was demonstrated to work empirically in Jaderberg et al. (2016). However, there has been very little demonstration of what changes DNIs and SGs impose from a functional, representational, and learning dynamics point of view. In this paper, we study DNIs through the use of synthetic gradients on feed-forward networks to better understand their behaviour and elucidate their effect on optimisation. We show that the incorporation of SGs does not affect the representational strength of the learning system for a neural network, and prove the convergence of the learning system for linear and deep linear models. On practical problems we investigate the mechanism by which synthetic gradient estimators approximate the true loss, and, surprisingly, how that leads to drastically different layer-wise representations. Finally, we also expose the relationship of using synthetic gradients to other error approximation techniques and find a unifying language for discussion and comparison.", "target": ["Backpropは誤差が出る=Forwardが終了しないと学習できないので、誤差を予測するモデルを組み込み(図中のSG)予測誤差で学習しようという話(SGは別途真の誤差から学習)。特定の問題で収束は確認、通常と異なる学習をするらしい"]} {"source": "Implicit probabilistic models are a very flexible class for modeling data. They define a process to simulate observations, and unlike traditional models, they do not require a tractable likelihood function. In this paper, we develop two families of models: hierarchical implicit models and deep implicit models. They combine the idea of implicit densities with hierarchical Bayesian modeling and deep neural networks.
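The cascade idea from the gcForest summary above, each level feeding its class-probability vectors into the next, can be sketched with scikit-learn; this toy keeps only the cascade and skips the paper's cross-validated probabilities and multi-grained scanning.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

def gcforest_cascade(X, y, X_test, n_layers=3):
    """Toy cascade: each level's class-probability outputs are concatenated
    onto the raw features and passed to the next level of forests."""
    feats, feats_test = X, X_test
    for _ in range(n_layers):
        probs, probs_test = [], []
        for Forest in (RandomForestClassifier, ExtraTreesClassifier):
            f = Forest(n_estimators=100).fit(feats, y)
            probs.append(f.predict_proba(feats))
            probs_test.append(f.predict_proba(feats_test))
        feats = np.hstack([X] + probs)
        feats_test = np.hstack([X_test] + probs_test)
    # average the final level's class distributions for the prediction
    return np.mean(probs_test, axis=0).argmax(axis=1)
```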
The use of implicit models with Bayesian analysis has in general been limited by our ability to perform accurate and scalable inference. We develop a variational inference algorithm for implicit models. Key to our method is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation of the posterior. Our method scales up implicit models to sizes previously not possible and opens the door to new modeling designs. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation.", "target": ["確率モデルとニューラルネットワークの合わせ技的な話。隠れ層のパラメーター(重みやバイアス)を潜在変数とみなし、これとノイズを組み合わせてActivationする、というのを事前分布の推定ととらえ、これを階層状に積むことで階層ベイズモデルと同様の推定を表現する。Edwardで実装済"]} {"source": "The United States spends more than $1B each year on initiatives such as the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed half a decade. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may provide a cheaper and faster alternative. Here, we present a method that determines socioeconomic trends from 50 million images of street scenes, gathered in 200 American cities by Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22M automobiles in total (8% of all automobiles in the US), was used to accurately estimate income, race, education, and voting patterns, with single-precinct resolution. (The average US precinct contains approximately 1000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a 15-minute drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next Presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time.", "target": ["都市の統計調査(人種や性別、職業や失業率など・・・)は、個別訪問ベースで行われるため調査結果が出るまで半年ぐらいかかることもあり、実態と乖離することも多い。そこで、街の様子、具体的には街中を走る車の車種からこれらを推計しようという試み。2200万ほどの車種をモデルに利用することで、驚くほど精確に推定ができたという話。 例:15分の中で、セダンの数がピックアップトラックの数よりも多い場合、市は次の大統領選挙で民主党に投票する可能性が高い(88%の確率)。その逆は共和党に投票する可能性が高い(82%)など。"]} {"source": "Two recently developed methods, Feedback Alignment (FA) and Direct Feedback Alignment (DFA), have been shown to obtain surprising performance on vision tasks by replacing the traditional backpropagation update with a random feedback update. However, it is still not clear what mechanisms allow learning to happen with these random updates. In this work we argue that DFA can be viewed as a noisy variant of a layer-wise training method we call Linear Aligned Feedback Systems (LAFS). We support this connection theoretically by comparing the update rules for the two methods. 
We additionally empirically verify that the random update matrices used in DFA work effectively as readout matrices, and that strong correlations exist between the error vectors used in the DFA and LAFS updates. With this new connection between DFA and LAFS, we are able to explain why the \"alignment\" happens in DFA.", "target": ["Back propagationは生体的な学習のプロセスとは乖離があるので、もうちょい実際の生物寄りな学習プロセスを実装するとどうか、という話。ここではDFAというBackpropではなく直接重みを更新する手法と、レイヤ個別に学習させるLAFSがほぼ等価であるという紹介をしている"]} {"source": "Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.", "target": ["ニューラルネットによるコンピュータの模倣。 微分可能な外部メモリをもつニューラルネットアーキテクチャ\"Differentiable Neural Computer (DNC)\"を開発。Controller Networkにメモリの使い方を学習させることで、構造的なデータを長期的に保持、利用できるようになった。質問文への解答、グラフによる推論、強化学習によるパズルの解決などができる。以前同じくDeepMindが発表したNeural Turing Machineに似ているが、メモリの動的な割当や解放が出来るようになった点で進歩している。"]} {"source": "Word embeddings are increasingly used in natural language understanding tasks requiring sophisticated semantic information. However, the quality of new embedding methods is usually evaluated based on simple word similarity benchmarks. We propose evaluating word embeddings in vivo by evaluating them on a suite of popular downstream tasks. To ensure the ease of use of the evaluation, we take care to find a good point in the tradeoff space between (1) creating a thorough evaluation – i.e., we evaluate on a diverse set of tasks; and (2) ensuring an easy and fast evaluation – by using simple models with few tuned hyperparameters. This allows us to release this evaluation as a standardized script and online evaluation, available at http://veceval.com/.", "target": ["単語表現を評価するためには単語類似性とアナロジータスクがよく使われている。しかし、それらで良い結果でも実際のタスクで良い結果になるとは限らないことが先行研究で示されている。そのあたりを考慮してこの論文ではより実際のタスクに近い評価手法を提案している。具体的にはユーザが訓練した単語ベクトルを6つのタスク(NER, POS, Chunking, Sentiment Analysis, Question Classification, NLI)で評価するためのシステムを構築している。これにより、学習した単語表現が実際に行いたいタスクに近いタスクで有効なのかを簡単に検証できる。このシステムはhttp://veceval.com/で利用できる。"]} {"source": "We suggest a new method for creating and using gold-standard datasets for word similarity evaluation.
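For the DFA scheme discussed above, the entire departure from backpropagation fits in one line: the output error reaches each hidden layer through a fixed random matrix instead of the transposed forward weights. A one-hidden-layer NumPy sketch (shapes and the fixed matrix `B1` are illustrative):

```python
import numpy as np

def dfa_update(W1, W2, B1, x, target, lr=0.1):
    """One DFA step for a tanh hidden layer and a linear readout.
    B1 is fixed and random, e.g. B1 = 0.1 * np.random.randn(hidden, out)."""
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                   # output error
    dW2 = np.outer(e, h)             # readout update (same as backprop)
    da1 = (B1 @ e) * (1.0 - h**2)    # error routed through the random matrix
    dW1 = np.outer(da1, x)
    return W1 - lr * dW1, W2 - lr * dW2
```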
Our goal is to improve the reliability of the evaluation, and we do this by redesigning the annotation task to achieve higher inter-rater agreement, and by defining a performance measure which takes the reliability of each annotation decision in the dataset into account.", "target": ["単語表現を評価するためには単語類似度のデータセットがよく使われている。しかし、既存のデータセットには2つの問題がある。一つは単語の関連性と類似性を区別していない点、もう一つは評価者間でアノテーションスコアがばらつく点である。これらの問題に対処するために単語類似度のためのデータセットを作成・使用する方法を提案している。アノテーションと性能指標を再設計することで、評価の信頼性を改善した。ヘブライ語に対する2つのデータセットを作成したところ、より高い評価者間の一致を達成。モデルに対する細かい分析を行うことができることを示した。"]} {"source": "Many real world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction.", "target": ["Neural Expectation Maximization (N-EM)の提案。EMアルゴリズムにおけるハイパパラメタ推定にニューラルネットワークを使用(RNN-EM)。RNNのhidden stateの更新がM stepに該当。"]} {"source": "The method introduced in this paper aims at helping computer vision practitioners faced with an overfit problem. The idea is to replace, in a 3-branch ResNet, the standard summation of residual branches by a stochastic affine combination. The largest tested model improves on the best single shot published result on CIFAR-10 by reaching 2.86% test error. Code is available at https://github.com/xgastaldi/shake-shake", "target": ["画像認識のモデルで、ResNetの2つのブロックからの出力をランダムに組み合わせる(Shake)モデルを提案。forward/backward共にShakeすることでCIFAR-10で2.72%のエラーレートを記録(しかもバッチ単位より個別の画像単位で適用したほうが精度が高い)。"]} {"source": "We present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoder-decoder model with attention. The encoder is a bidirectional recurrent neural network that accepts text or phonemes as inputs, while the decoder is a recurrent neural network (RNN) with attention that produces vocoder acoustic features. Neural vocoder refers to a conditional extension of SampleRNN which generates raw waveform samples from intermediate representations. Unlike traditional models for speech synthesis, Char2Wav learns to produce audio directly from text.", "target": ["これまでの音声合成は、テキスト→発音特徴、発音特徴→音声と二段階に分かれていたが(WaveNetは後者に相当)、これを統合しEnd to Endな音声合成モデルを作成するという話。Attentionを積んだRNNでEncode、階層上のRNN(SampleRNN)でデコードする。他手法との比較結果はまだ記述されてないが、公式サイトで合成された音声を聞くことができる。"]} {"source": "Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. 
In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment --- RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.", "target": ["SNES(=スーファミの実行環境)で強化学習ができるようにする学習環境RLEを作った話。Atariよりもゲーム数が多く、またバリエーションにも非常に富んでいる。"]} {"source": "We introduce a new audio processing technique that increases the sampling rate of signals such as speech or music using deep convolutional neural networks. Our model is trained on pairs of low and high-quality audio examples; at test-time, it predicts missing samples within a low-resolution signal in an interpolation process similar to image super-resolution. Our method is simple and does not involve specialized audio processing techniques; in our experiments, it outperforms baselines on standard speech and music benchmarks at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in telephony, compression, and text-to-speech generation; it demonstrates the effectiveness of feed-forward convolutional architectures on an audio generation task.", "target": ["画像ならぬ音の超解像に挑戦する研究。端的には低いサンプリングレートの音を畳み込み(ResBlock)+UpSamplingして高サンプリングレートの音に変換する。これで電話の音声などを受信側で高音質にするといったことが可能になる。発音データセットでPSNR40程度でまあ良しの結果"]} {"source": "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and de-convolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises/corruptions. De-convolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and de-convolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackle the problem of vanishing gradients, making training deep networks easier and consequently achieving restoration performance gains. Second, these skip connections pass image details from convolutional layers to de-convolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noise using a single model. Experimental results show that our network achieves better performance than all previously reported state-of-the-art methods.", "target": ["一枚の画像からその解像度を上げる単画像超解像の論文。 ILSVRC 2015のWinnerであるResNetの手法を取り入れて30層という当時最もディープなモデルを提案し最高性能を獲得した。シンプルでdeepなモデルが良いというトレンドを実証した。"]} {"source": "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification \\cite{simonyan2015very}. We find that increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers.
By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN \\cite{dong2015image}) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy, and visual improvements in our results are easily noticeable.", "target": ["一枚の画像からその解像度を上げる単画像超解像の論文。 ILSVRC 2015のWinnerであるResNetの手法を取り入れて残差項の推定に特化し、学習係数を従来の1万倍に設定し非常に早い時点での収束を実現した。それでいて性能もほぼ最高性能であった。"]} {"source": "We propose an image super-resolution (SR) method using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Despite these advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.", "target": ["一枚の画像からその解像度を上げる単画像超解像の論文。 16枚の同じCNNを繰り返し適用して少しづつ高解像画像の推定を行なっている。プーリングを利用せずに合計で20枚ものCNNを適用しているが、同じCNNパラメータを利用することで学習の発散を抑えているのが特徴。"]} {"source": "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "target": ["一枚の画像からその解像度を上げる単画像超解像の論文。DLを使った超解像の研究の起点になったものと思われる。3枚から4枚のCNNを用いてfeature mapを作成し高解像度の画像を生成する。"]} {"source": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models' operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently.
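The residual-only learning in the VDSR abstract above amounts to predicting just the high-frequency difference and adding it back onto the interpolated input. A hedged PyTorch sketch (the depth follows the abstract's 20 layers; the 64-channel width is a common choice, not a verified reimplementation):

```python
import torch
import torch.nn as nn

class ResidualSR(nn.Module):
    """Predict only the residual; add it to the bicubic-upsampled input."""
    def __init__(self, depth=20, ch=64):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x_interp):             # input is already upsampled
        return x_interp + self.body(x_interp)  # learn the residual only

# The very high learning rate is kept stable by clipping, e.g.:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.4)
```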
This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.", "target": ["時系列データのモデリング方法についての提案。RNNにより潜在変数を予測する生成モデル(Variational RNN)をベースに、各タイムステップにおける潜在変数を格納したメモリ(=時間ごとのデータのふるまいの記憶)を用意し、Attentionによりそこから読み出す機構を提案。"]} {"source": "An Empirical Exploration of Recurrent Network Architectures", "target": ["LSTM/GRUよりも優れた構造を持つものはないか?ということを検証した論文。過去の実績から100の優れた構造をピックアップし、これらとLSTM/GRUを4つのタスク(数式意味解釈、XMLタグ予測、言語モデル、音楽生成)でハイパーパラメーターを変えながら検証。結果、LSTM/GRUをすべてのタスクで上回るモデルは発見できなかった。なお、LSTMのforget gateのbiasは1にするとすごく良い。"]} {"source": "Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history which can facilitate learning mid- and long-range dependencies. However, conventional attention mechanisms used in memory-augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token as well as for the key and value of a differentiable memory of a token history. In this paper, we propose a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution. This model outperforms existing memory-augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models.", "target": ["Attentionを行う場合、隠れ層のベクトルは次の単語の予測・Attentionの算出・将来の単語に有用な情報の格納、という3つの役割を担っていることになる。なので出力を3つにして役割分担させるアイデア。併せて、単純に過去の隠れ層を結合して入力するだけでも高精度になることを確認"]} {"source": "Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency.", "target": ["Batch Normalizationの改良版。バッチサイズが小さい時の問題、また推定時と学習時とで正規化方法に差異が出る問題を解消する試み。具体的には、最初は通常通りバッチ内で正規化するけど、徐々にデータ全体の正規化パラメーター(移動平均/分散)へシフトしていくという手法。"]} {"source": "Recently, methods have been proposed that perform texture synthesis and style transfer by using convolutional neural networks (e.g. Gatys et al. [2015,2016]). These methods are exciting because they can in some cases create results with state-of-the-art quality.
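The correction in the Batch Renormalization abstract above can be sketched as follows: the minibatch statistics are rescaled toward the running ones by factors r and d that are treated as constants in the backward pass, so the per-example transform used in training matches the one used at inference. Learnable gamma/beta and the running-average updates are omitted for brevity; the clip values follow the paper's r_max/d_max notation, but the numbers here are illustrative.

```python
import torch

def batch_renorm(x, mu_run, sigma_run, r_max=3.0, d_max=5.0, eps=1e-5):
    """One training-time Batch Renormalization step (sketch), x: (N, F)."""
    mu_b = x.mean(dim=0)
    sigma_b = x.std(dim=0, unbiased=False) + eps
    with torch.no_grad():  # r and d carry no gradient
        r = (sigma_b / sigma_run).clamp(1.0 / r_max, r_max)
        d = ((mu_b - mu_run) / sigma_run).clamp(-d_max, d_max)
    return (x - mu_b) / sigma_b * r + d
```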
However, in this paper, we show that these methods also have limitations in texture quality, stability, requisite parameter tuning, and lack of user controls. This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates these issues. We first give a mathematical explanation of the source of instabilities in many previous approaches. We then reduce these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar. We also show how to integrate localized style losses in our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and more stability over the optimization.", "target": ["StyleTransferを改善する方法についての論文。表現方法(=色の「分布」)の差異を反映できるよう元画像のヒストグラムとの差異を表すlossの項を導入、絵の中のどのスタイルをターゲットのどこに適用するか指示できるようにする、またパラメーターの自動調整を行うといった3本立て。"]} {"source": "Previous studies support the idea of merging auditory-based Gabor features with deep learning architectures to achieve robust automatic speech recognition, however, the cause behind the gain of such a combination is still unknown. We believe these representations provide the deep learning decoder with more discriminable cues. Our aim with this paper is to validate this hypothesis by performing experiments with three different recognition tasks (Aurora 4, CHiME 2 and CHiME 3) and assess the discriminability of the information encoded by Gabor filterbank features. Additionally, to identify the contribution of low, medium and high temporal modulation frequencies, subsets of the Gabor filterbank were used as features (dubbed LTM, MTM and HTM respectively). With temporal modulation frequencies between 16 and 25 Hz, HTM consistently outperformed the remaining ones in every condition, highlighting the robustness of these representations against channel distortions, low signal-to-noise ratios and acoustically challenging real-life scenarios with relative improvements from 11 to 56% against a Mel-filterbank-DNN baseline. To explain the results, a measure of similarity between phoneme classes from DNN activations is proposed and linked to their acoustic properties. We find this measure to be consistent with the observed error rates and highlight specific differences on phoneme level to pinpoint the benefit of the proposed features.", "target": ["音声認識では入力特徴量としてMel filterbankを使うより、脳と同様にGaborフィルタを使った方が認識精度が良いという話。WERが11から56%の向上。Aurora4, ChiME2, CHiME3データセット。なおGaborの方が良いという話はこの論文が初めてではなくて、Gaborのどれが良いかをいろいろ比較したということ。"]} {"source": "In recent years, machine learning techniques based on neural networks for mobile computing have become increasingly popular. Classical multi-layer neural networks require matrix multiplications at each stage. The multiplication operation is not energy efficient and consequently drains the battery of the mobile device. In this paper, we propose a new energy efficient neural network with the universal approximation property over the space of Lebesgue integrable functions. This network, called the additive neural network, is very suitable for mobile computing. The neural structure is based on a novel vector product definition, called the ef-operator, that permits a multiplier-free implementation.
In ef-operation, the \"product\" of two real numbers is defined as the sum of their absolute values, with the sign determined by the sign of the product of the numbers. This \"product\" is used to construct a vector product in R^N. The vector product induces the l_1 norm. The proposed additive neural network successfully solves the XOR problem. The experiments on MNIST dataset show that the classification performances of the proposed additive neural networks are very similar to the corresponding multi-layer perceptron and convolutional neural networks (LeNet).", "target": ["乗算に代わってef-operationというものを導入してエネルギー効率のよいneural netを作ったという話。ef-operationとは符号は普通の乗算と同じだけど、絶対値は単なる和というもの。MNISTとCIFARで従来と同等の精度を出したという。"]} {"source": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily on word similarity tasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of semantic similarity is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "target": ["単語ベクトルの評価手法の一つである単語類似性タスクに関する問題点と既存の解決策を示したサーベイ論文。述べられている問題点は以下の7つ。 ①単語のrelatednessとsimilarityを区別していないデータセットが多い ②既存のデータセットではタスク固有のembeddingsを低く評価してしまう ③チューニングをテストセットで行って過適合させていることが多い ④単語類似性タスクの評価結果とNLPタスクに使用して評価した結果の相関性が低い ⑤評価結果に対する統計的有意性の欠如 ⑥コサイン類似度における頻度の影響 ⑦単語の多義性を考慮していない 結論としては単語ベクトルを評価するのに単語類似度タスクは適切ではないとしている。"]} {"source": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.", "target": ["単語表現の質は、単語類似性の人間の判断との相関を用いてよく評価される。しかし、そこでいい結果でも実際のタスクに適用すると良い結果にならないことを説明している。評価には、10の単語類似性の評価セットと3つのNLPタスク(POS, chunking, NER)を用いて、単語類似性の評価結果とNLPタスクの評価結果の相関を分析している。評価した結果、ほとんどのデータセットでは実際のNLPタスクの結果との間に負の相関があることがわかった。SimLex999だけ例外。このような結果となった原因として、ほとんどのデータセットでは単語の類似性と関連性を区別していないことによるものだと結論付けている。"]} {"source": "Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. 
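The multiplier-free "product" defined in the abstract above maps directly to code; note that applying it to a vector with itself gives twice the l1 norm, which is the sense in which it "induces the l_1 norm". (The sign factors below are only sign flips, which are cheap in hardware.)

```python
import numpy as np

def ef_product(a, b):
    """Sign of a*b, with magnitude |a| + |b| instead of |a| * |b|,
    so no true multiplication is needed for the magnitude."""
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def ef_dot(w, x):
    """ef-operator vector "product" in R^N; ef_dot(x, x) == 2 * sum(|x_i|)."""
    return ef_product(w, x).sum(axis=-1)
```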
We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.", "target": ["Attentionについて、単純にどの地点に注目するかというカテゴリカル分布的な考えでなく、規則性(構造)があると仮定しようという提案。具体的には、CRFの考えを用いてAttentionの候補となる「系列」に対して全体として最適になるように計算を行うという話。これで精度UPを確認。ただし、計算時間が長くなるというハンデが伴うので注意。"]}
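For the linear-chain case mentioned above, the structured attention weights are just CRF posterior marginals, computable with forward-backward; since everything is built from logsumexp, the layer is differentiable end-to-end. A log-space PyTorch sketch (unbatched, illustrative shapes, not the paper's implementation):

```python
import torch

def crf_marginals(unary, trans):
    """Posterior marginals p(z_t = k) of a linear-chain CRF, usable as
    structured attention weights. unary: (T, K) log-potentials,
    trans: (K, K) transition log-potentials."""
    T, K = unary.shape
    alphas = [unary[0]]                       # forward pass
    for t in range(1, T):
        alphas.append(unary[t] + torch.logsumexp(
            alphas[-1].unsqueeze(1) + trans, dim=0))
    betas = [torch.zeros(K)]                  # backward pass
    for t in range(T - 2, -1, -1):
        betas.insert(0, torch.logsumexp(
            trans + (unary[t + 1] + betas[0]).unsqueeze(0), dim=1))
    alpha, beta = torch.stack(alphas), torch.stack(betas)
    log_z = torch.logsumexp(alpha[-1], dim=0)
    return torch.exp(alpha + beta - log_z)    # (T, K); each row sums to 1
```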